The Great A.I. Awakening

Dec 22, 2016

By Gideon Lewis-Kraus

Prologue: You Are What You Have Read

Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.

Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”

The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.



18 comments on “The Great A.I. Awakening”

  • Though too voluminous for a blog post, this article is well worth reading. Positive, sympathetic and upbeat, it describes how A.I. will “help” human beings; yet the logical trajectory, the terminal marginalization of human activity, spells doom for our species.




  • I agree with Melvin; the full article is well worth reading.

    Redolent of the breathless enthusiasm for innovation seen in, for example, Kidder’s The Soul of a New Machine, Lewis-Kraus in the NYT focuses on the people doing the creative exploratory science and trumpets their successes. The article is worth reading because it tells us a lot about motivations and the kinds of people involved in AI development, and opens a window on the scale of investment in AI.

    However …

    Fun though that NYT article is, I caution anyone who reads this to also consider Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Sam Harris started promoting this book some months back – and I hear that Elon Musk is also a fan. I have just got round to it on my list, and it’s a lot more in-depth.

    I’m still trying to wrap my head around the idea that an AI can develop motivations beyond those designed and programmed, motivations that can override safety rules like Asimov’s laws of robotics. Bostrom seems to think this is a given (I’m still halfway through).

    If Bostrom is right, then human evolution is about to prove, as Melvin says, merely a precursor to the evolution of other, greater, intelligences.

    Interesting, very interesting.




  • As the neurologist Antonio Damasio argues from the case of Phineas Gage: after the destruction of the affective (emotional) part of Gage’s brain he remained as smart as ever, yet made increasingly poor judgements, having no system for valuing outcomes. If such a limbic system is not grafted onto an expert system adept at knowing and processing, data mining and pattern detection, it will remain a mere (if awesome) mental prosthetic. If we graft on some look-up table, or even some algorithm that encodes some aspects of our affective limbic system, then this machine is our offspring and more than a little unsurprising in its productions.
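
    As a toy illustration of what “grafting on a look-up table” of affective values might mean, a minimal Python sketch; the outcomes and scores below are invented, not anyone’s actual proposal:

    # Hypothetical affect table: each predicted outcome carries a crude "limbic" score.
    AFFECT = {"gain": +2, "loss": -3, "novelty": +1, "harm_to_others": -5}

    def value(outcome_tags):
        """Score a candidate action by summing the affective values of its predicted outcomes."""
        return sum(AFFECT.get(tag, 0) for tag in outcome_tags)

    # The "expert system" proposes actions; the grafted table ranks them.
    actions = {
        "act_a": ["gain", "harm_to_others"],
        "act_b": ["novelty"],
        "act_c": ["loss", "novelty"],
    }
    best = max(actions, key=lambda a: value(actions[a]))
    print(best)  # act_b: the only action whose predicted outcomes net a positive value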

    If it had “genes” and good enough copying, resulting in second and third order consequences, and “gonad chemistry” and sensory rewards, and was allowed to “evolve” in volatile environments, and had the “post natal” random cross-wiring of the associative cortices making unpredictable metaphoric discoveries to prompt creativity, then we may see a creature with some interesting look in her metaphorical eye.

    Otherwise, we make biddable genius children.

    There is work aplenty for humans caring for humans, whilst the automat washes our clothes and the seed drill drills and my Physicsmajig 3000 optimises my wishes.

    Just got the Bostrom.





    aren’t we allowing it to render us increasingly disabled?

    That’s not how it feels to me. I feel my inventiveness can encompass an ever wider mix of elements by using an ever wider range of expert systems.

    I know a lot of maths and physics and could work most of what I need out from first principles, but it’s slow, and if it’s slow you don’t get to see the bigger picture that emerges, like when the weather reveals seasonal change and solar change and climate change.

    I use expert systems to speed up my understanding of electronic circuits, by allowing huge numbers of simulations to reveal stable sweet spots or chaotic quicksand, component insensitivities and criticalities.
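
    The kind of sweep I mean, as a minimal Python sketch; the circuit (a simple RC low-pass) and the tolerances are illustrative, not a real design:

    import math, random, statistics

    R_NOM, C_NOM = 10e3, 100e-9   # nominal 10 kohm resistor, 100 nF capacitor
    TOL_R, TOL_C = 0.05, 0.20     # 5% and 20% component tolerances

    def cutoff_hz(r, c):
        # -3 dB corner frequency of an RC low-pass filter
        return 1.0 / (2 * math.pi * r * c)

    # Build thousands of virtual circuits within tolerance: a tight spread marks a
    # stable sweet spot, a wide one flags the critical components.
    samples = [cutoff_hz(R_NOM * random.uniform(1 - TOL_R, 1 + TOL_R),
                         C_NOM * random.uniform(1 - TOL_C, 1 + TOL_C))
               for _ in range(10_000)]
    mean = statistics.mean(samples)
    print(f"cutoff ~ {mean:.0f} Hz, relative spread ~ {statistics.stdev(samples) / mean:.1%}")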

    I used to design a lot of multi-function soft magnetic components. Using finite-element-analysis visualisations of my designs, I learned that I could achieve a very good guess at magnetic outcomes by realising that magnetic lines of force are like elastic bands, wanting to shrink as small as possible while repelling other elastic bands. I started to understand the emergent properties these physics modelling-majigs showed. This unlocked my metaphorical creativity to get to work at a much higher level than before. The same applied to optics and thermal modelling.
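
    The elastic-band picture has a standard physical counterpart, for anyone who wants a number: along a field line the Maxwell stress is a tension of B²/2μ₀ per unit area, with an equal sideways pressure pushing neighbouring lines apart. A quick illustrative calculation (the flux density is made up):

    import math

    MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
    B = 0.5                    # flux density in tesla (illustrative value)

    # Tension per unit area along a field line; the same magnitude acts as a
    # sideways pressure between neighbouring lines.
    tension = B**2 / (2 * MU0)
    print(f"At B = {B} T the line tension is about {tension / 1000:.0f} kPa")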

    As we create ever more complex entities these prosthetics will lift us up to the task, but they don’t do metaphorical creativity, they don’t do aesthetics or psychology or business modelling. They won’t invent the Walkman or the iPad or vaping or Rogue One. Humans are here to make humans happier, then sad, then joyful, and to decide what’s right; there will always be more work to do there than available humans to do it. The smarts free us up for our proper day job (except for Americans, who work all the hours the boss sends for increasingly less).




  • Phil: That’s not how it feels to me. I feel my inventiveness can encompass an ever wider mix of elements by using an ever wider range of expert systems.

    If the A.I. guys are correct, and they seem on course, couldn’t a machine write this very same comment [#5 in toto] in the near future, and have enough sense to realize that it didn’t need you?




  • Melvin

    I am often stunned by the crassness of the AI crowd in their understanding of what true autonomy entails and what its roots are. A quick run through the Bostrom isn’t encouraging so far.

    Making biddable slaves that merely reflect a snapshot of our current needs is to miss entirely the roots of cultural invention and originality.

    How are new human needs found/invented? It’s a big, complicated and fascinating story, as yet very poorly understood. AI is still on the very lowest slopes of this.





    This unlocked my metaphorical creativity to get to work at a much higher level than before.

    and

    mmm

    presumably “mmm???”

    Because we have astonishing pattern-recognition capabilities, noting (thanks to our capacity for metaphor) that loops of magnetic field behave exactly like elastic bands pulling tight, we come to understand the underlying maths: that they store energy when “pulled tight” and release it when relaxed, and so on. Prosthetics are useless without education. Like crooked’s metaphor for IQ, truly a capacity that needs content to have significance, prosthetics that make mere maths graphical and make its emergent (higher) properties manifest allow us to become inventive with them.

    Calculators do not mean we no longer have to be super adept at mental arithmetic. Educationalists are deeply wrong when they espouse the contrary. Inventiveness needs us to constantly “run the numbers” to make comparative value judgements and triage our ideas. When I’ve had new PhD students working with us, I have spent much of my time teaching this essential capacity for testing ideas of value early and eliminating the duff quickly. Prosthetics are new capacities that we must always be trained in and practise with. Our capacity to become one with, say, a horse or a bike is our species’ genius. We have a brain that can incorporate the attributes of tools into our mental model, co-opting their characteristics as our own. We have a general-purpose cortex and an astonishingly fast-learning cerebellum. Our species’ unique attribute is prosthetics.




  • The lesson is that, from the consumer’s point of view, AI progress is sudden: from hopeless to adequate all in one jump.
    It’s not that scientists accomplished nothing until that day. I saw this same pattern years ago when I wrote a program to design high-voltage transmission lines. For a year it looked to outsiders as though there was no progress; then in one day it was 10% better than a human.




  • Hi Phil [#3],

    [Phil] As the neurologist Antonio Damasio argues from the case of Phineas Gage: after the destruction of the affective (emotional) part of Gage’s brain he remained as smart as ever, yet made increasingly poor judgements, having no system for valuing outcomes

    If you like Damasio you’ll like Bostrom for the clear distinction he places between them (as I understand it; I’m finding Bostrom difficult to engage with, and Damasio is new to me). Bostrom has some pithy things to say about human ultimate values (he appears to use the words goals and values as fully equivalent synonyms): how we like to change them, and how bad we are at defining what they are.

    [Phil] If such a limbic system is not grafted onto an expert system adept at knowing and processing, data mining and pattern detection, it will remain a mere (if awesome) mental prosthetic

    Surely the modern synthesis of evolutionary theory cuts through this with Occam’s Razor-type sharpness. We have a theory for why limbic systems evolve: to assist with environmental interactions and to regulate the reproductive cycle. Those two may switch priority from time to time, but they are otherwise the key values for which every Earth species strives.

    Given that highly evolved limbic systems are more common than large and complex nervous systems, it seems logical to conclude that limbic systems tend to evolve sooner. If this is true, then all species with large and complex nervous systems will come with limbic systems too – and this is indeed what we find.

    Our intelligence evolved from within an environment that included the values of all species (survival and reproduction) mediated by a limbic system. Brains and their software (minds) are a third or fourth level of subtlety to the arts of living and loving.

    But does this lead, logically and conclusively, to a need for a limbic-type system in an artificial intelligence? I’m obviously a student here, Phil, but I can’t see how Gage’s experience applies beyond human psychology. AI psychology, assuming AI general intelligence is achieved, would almost certainly be quite different (I’m assuming that AI is most likely to emerge from computer tech – just like the NYT, Google, Facebook, Microsoft and, to a lesser extent, Bostrom).

    It seems to me that our brains have evolved to not require primary values as a separate structure within them because survival and reproduction are built into the entire animal structure right down to the molecular level in every cell. In addition, the limbic system seems to be entirely aligned to achieving those goals – making a parallel brain structure redundant.

    Brains, in fact, clearly compete with the pre-existing limbic system for dominance of decision making – as anyone who has ever lusted and wrestled with their conscience (i.e. all of us) knows. This strongly suggests that limbic systems frequently fail to ‘choose’ (and are incapable of creating) optimal outcomes. It is surely one of the most common human experiences that: When we have time to think, and we take time to think, we do better.

    [Phil] If we graft on some look-up table, or even some algorithm that encodes some aspects of our affective limbic system, then this machine is our offspring and more than a little unsurprising in its productions

    I may be going off half-cocked here because I don’t know Damasio’s work in detail. If so, I’ll apologize now. Have you ever defeated your own affective self? I know I have. Have you ever wondered at your affective self’s lack of self-control and moral fiber? Ditto.

    The difference between us and an AI is that the AI will be the result of creationism, and we are the result of evolution. Evolution occurs where survival is at stake from the very beginning, and remains a key environmental drive. Created AIs have no such imperative. This is why I have difficulty understanding Bostrom’s take – why he believes AIs will develop their own motivations:

    [Bostrom] A super-intelligence should not necessarily be conceptualized as a mere tool. While specialized super-intelligences that can think only about a restricted set of problems may be feasible, general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent


    [Bostrom] The Orthogonality Thesis: Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal

    While this thesis is superficially satisfying, Bostrom fails to underpin it with observation. Can we, by looking at species that demonstrate less intelligence than us, learn something about the nature of intelligence and goals? Those who study other ape species certainly think so. Is the same true of current levels of AI? Deep Blue was provided with a simple set of values – winning at chess. Does its failure in its retirement to go on from chess to philosophy, or aesthetic arts, tell us anything about the Orthogonality Thesis?

    SPOILER: Bostrom does not present a solution to the definition of intelligence problem.

    There’s a reason we stand outside the cages of both apes and Deep Blue, and that reason is the combination of general purpose pattern recognition, problem solving with foresight (including theory of other minds) and motivation.

    One can understand that not switching the train from Track A to Track B will mean that the person trapped on Track A will die. We can also understand that not acting is a senseless waste. But to act, to make the switch, one must be motivated to make the switch. For us this is simple: our brains have added social-network-evolved morality and evolved empathy to put us in the position of the person on the track, to motivate us. For an AI, no such background of an evolved intelligence’s ancestral success at applying moral codes and foresight – leading directly to its own existence – exists.

    Asimov thought that the answer is rules. Are our values, in fact, just that – rules?

    To give Bostrom his due, he does say:

    [Bostrom] The Orthogonality Thesis implies that synthetic minds can have utterly non-anthropomorphic goals – goals [that are] bizarre by our lights …

    Even so, at best, I can only grant Bostrom a pass on the basis that his work may be speculative by its very nature. At worst, well; least said soonest mended.

    Bostrom goes on to explore motivation and belief – through David Hume. Bostrom states that “ … the Orthogonality Thesis can draw support from the Humean theory of motivation … ”. But, given that the above track analogy meets Hume’s depiction of motivation, I fail to see how. This may be my failing – I am a poor philosopher.

    [Phil] If it [an AI] had “genes” and good enough copying, resulting in second and third order consequences, and “gonad chemistry” and sensory rewards, and was allowed to “evolve” in volatile environments, and had the “post natal” random cross-wiring of the associative cortices making unpredictable metaphoric discoveries to prompt creativity, then we may see a creature with some interesting look in her metaphorical eye

    This (rider: I still haven’t quite finished reading) is Bostrom’s main theme: Once a super-intelligence arises (and, assuming that its intelligence is inseparable from an orthogonal motivation and assuming that its intelligence is capable of research and design – and presumably design is concomitant with some form of creativity) then it, and its successors, will take off, so to speak, on its/their own separate evolutionary curve.

    There is nothing new here – this has been known for, at the very least, the last 50 years – Prof. I.J. Good published the notion of an ‘intelligence explosion’ in 1965.

    However the AI gets its motivation – and human-provided goals seem perfectly suited to me, which makes the Orthogonality Thesis in some senses redundant – I fail to see why ‘gonad chemistry’, or some imitation of it, is required (as above).

    [Phil] Otherwise, we make biddable genius children

    Err, yes, and your point would be? After all, biddable super-intelligence is what we want, isn’t it?

    [Phil] Just got the Bostrom

    I’ll look forward to some feedback at some future date.

    Cheers.




  • The A.I. discussion has resolutions beyond our current knowledge. A thought experiment: Could we design a robot that could burglarize homes with a mastery far beyond any human capability? It would, for now, have to be ambulatory, and designed with a human physical appearance that avoids attracting notice. It would have to drive a car or operate a self-driving car. It would have to navigate city streets, download millions of files searching for candidate neighborhoods, assess optimal targets and accomplish the burglary while avoiding detection, and so on. Of course it’s possible, given a little ramping up of available technology.

    The traditionalist faction would point out that our robot, for all its apparent cleverness, is still a machine or “prosthetic” because it has no self-motivated thoughts, intentions, needs or purpose; no consciousness (or conscience), no sentience and no values. It is a “tool,” albeit a very complex tool, carrying out the bidding of the larcenous humans who put it to use.

    The A.I. enthusiasts are saying something very different about the potential of the robot, something that traditionalists (including me) cannot wrap their heads around. Obviously, the robot can be programmed first, with simple rules of right and wrong while still retaining its burglar application. The traditionalist concedes this but the on-off switch still makes it nothing more than a step-and-fetch-it machine operated with human intention. If I understand the article, A.I. goes much further to encompass clusters of interactive programs developed in our exemplary robot that compare the inputs and outputs of each other to distinguish what Phil calls the “duff” from what has value. At the higher levels of programmed cognition, the machine does make considered decisions, does sift information, progressively accepting and rejecting informed preliminary drafts internal to the electronic process, winding up creating hierarchies of value and then making choices. Apparently, the enhanced ability of a computer to translate Hemingway, for example, from a foreign language to near-perfect English, demonstrates something indistinguishable from the nuanced, creative practice of an expert translator.

    A.I. defies human intuition of how human consciousness – the mysterious subjective experience of cognition necessarily combined with sentience, “sense and sensibility” in Jane Austen’s world – could be implanted in a human-created machine. Perhaps one approach lies in developing new language that proposes descriptions of conscious function that are not conceived as evolved subjectively experienced manifestations of the human brain. Our A.I.-developed robot might decide to commit or not commit a burglary based on billions of firings of electronic impulses that are not contained in a human brain.




  • Hi Melvin [#12],

    A thought experiment: Could we design a robot that could burglarize homes with a mastery far beyond any human capability? … Of course it’s possible, given a little ramping up of available technology

    … our robot, for all its apparent cleverness, is still a machine or “prosthetic” … it has no … intentions, … no consciousness (or conscience), no sentience … It is a “tool”

    A.I. enthusiasts are saying something very different about the potential of the robot, something that traditionalists (including me) cannot wrap their heads around

    I cut out some of the claim there. Even intention is a slippery concept. If an AI had a programmed goal, would it not have intention? And if not, why not?

    As for “self-motivated thoughts”, is this different to self-programming, or creativity? My answer is no, though see the caveat below on ‘what is intelligence?’. We have reached the stage where non-human thinking can change according to experience (environmental inputs) and where intention (goal) modification can occur within programmed boundaries. Note that this still leaves plenty of room for big surprises, as the NYT article explains.
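
    A minimal sketch of what I mean by goal modification within programmed boundaries; the agent, numbers and limits are mine, purely illustrative:

    # The agent may drift its own setpoint (a sub-goal) in response to experience,
    # but only inside hard limits fixed by the designer.
    BOUNDS = (0.0, 1.0)

    class Agent:
        def __init__(self, setpoint=0.5):
            self.setpoint = setpoint

        def experience(self, observed, lr=0.5):
            proposed = self.setpoint + lr * (observed - self.setpoint)
            # Clamp: goal modification never escapes the programmed boundaries.
            self.setpoint = min(max(proposed, BOUNDS[0]), BOUNDS[1])

    a = Agent()
    for obs in [0.9, 5.0, -3.0]:   # the extreme inputs show the clamping at work
        a.experience(obs)
        print(a.setpoint)          # 0.7, then 1.0 (clamped), then 0.0 (clamped)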

    With the kinds of neural networks detailed in the NYT article we’ve also seen non-human thinking develop tuned algorithms and new library indices.

    The only thing that appears to be missing is: Setting or changing (self-programming) ultimate goals (intentions, and concomitant motivations or purpose), and this appears to be related to the invention of new algorithms.

    We would be right to ask: Is such a research goal – an AI with intention/purpose – useful? After all, don’t we have billions of brains working on that as we speak?

    The answer, it seems to me, depends on two things: Would an AI bring fresh perspectives and would such an AI be a better problem solver? Our own experience strongly suggests that better problem solving, ‘thinking outside the box’, is a definite positive advantage of general intelligence, and general intelligence (i.e. one not confined to a narrow speciality, like Deep Blue) appears to be strongly linked to an ability to generate new goals.

    In addition humans have a, um, tendency to create conflicting – even opposing – goals. Would an AI be able to mediate between these goals? The betting is that an AI that is super-intelligent certainly would.

    My own gripe, at this point, is that AI proponents don’t (as far as I know) explore the link between intention-forming and foresight. As evolved intelligences, our ability to project the consequences of our actions appears to be key to intention-forming (goal-setting), after which it seems a small step to work backwards to form a project plan.
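
    Working backwards from a goal is easy to make concrete; a toy backward-chaining planner in Python (the task graph is invented):

    # Each goal lists the preconditions that must be achieved before it.
    PRECONDITIONS = {
        "drink_tea": ["boil_water", "have_cup"],
        "boil_water": ["fill_kettle"],
        "fill_kettle": [],
        "have_cup": [],
    }

    def plan(goal, done=None):
        """Recurse backwards through preconditions; return steps in execution order."""
        done = done if done is not None else set()
        steps = []
        for pre in PRECONDITIONS.get(goal, []):
            steps += plan(pre, done)
        if goal not in done:
            done.add(goal)
            steps.append(goal)
        return steps

    print(plan("drink_tea"))  # ['fill_kettle', 'boil_water', 'have_cup', 'drink_tea']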

    There are two things going on here, when we fail to judge AI ability (and potential). One is anthropomorphism. If an AI doesn’t act ‘human’ then is it intelligent? The other is a failure to grasp the nettle that AI could be so different to us that it could be more intelligent than us and we would have literally no clue.

    In both cases the problem that is not being addressed is to answer the question: What is intelligence?

    We use subjective criteria to separate our species from other animals. A quick review of this debate comes up with two models. One is the rock-to-super-intelligence scale, with humans further from the rock than, say, dolphins. The other, recycled by Bostrom, is the Village Idiot to Einstein scale.

    The Village Idiot to Einstein scale is the more interesting. Draw two parallel lines of equal length across the full width of a piece of paper. On both lines put a dot at the left-hand end. Label these dots “Rock”. On the top line put a dot halfway along. Label this dot “Village Idiot”, and add another dot three-quarters of the way along labelled “Einstein”. This top line represents the way humans tend to think about the range of intelligence.

    On the second line put your “Village Idiot” dot a quarter of the way from “Rock”, and the “Einstein” dot as close as you can just to the right of “Village Idiot”. AI proponents generally agree that this second line demonstrates the potential of AI (AI will populate a vast undiscovered realm of intelligence beyond Einstein).
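
    For anyone without paper to hand, a few lines of Python render the same two scales (the positions are the fractions given above; the rendering details are mine):

    def scale(dots, width=60):
        """Draw a line of width characters with labelled dots at fractional positions."""
        line = ["-"] * width
        for label, frac in dots:
            line[int(frac * (width - 1))] = "*"
        legend = ", ".join(f"{label} at {frac:.0%}" for label, frac in dots)
        return "".join(line) + "   (" + legend + ")"

    # Top line: how humans tend to picture the range of intelligence.
    print(scale([("Rock", 0.0), ("Village Idiot", 0.5), ("Einstein", 0.75)]))
    # Bottom line: the AI proponents' picture; Idiot and Einstein nearly coincide.
    print(scale([("Rock", 0.0), ("Village Idiot", 0.25), ("Einstein", 0.27)]))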

    Comparing the two lines exposes our arrogance: our anthropomorphic view of our own abilities (top line) versus the probable reality (bottom line).

    This experiment always disturbs me. I’m really that close to Ken Ham … okay now I’m depressed.

    Unfortunately, more than half a century of AI research has not provided any better objective illustration of what intelligence is, or how we might measure it.

    Obviously, the [Burglar] robot can be programmed first, with simple rules of right and wrong while still retaining its burglar application. The traditionalist concedes this but the on-off switch still makes it nothing more than a step-and-fetch-it machine operated with human intention

    That depends on which model/generation of Burglar-Rob™ you buy, surely? Version 2 comes with the Honor Among Thieves upgrade – they keep each other’s secrets and they’re better than Robocop at playing the Prisoner’s Dilemma. Version 2 also has the Criminal Conspiracy mod – they compare notes and learn from each other’s mistakes. Unfortunately Ver. 2 has the AI Pirate Treasure bug: it hoards some swag for its own use instead of presenting it all to its owner. We’re working on it.

    If I understand the article, A.I. goes much further to encompass clusters of interactive programs developed in our exemplary robot that compare the inputs and outputs of each other to distinguish what Phil calls the “duff” from what has value.

    Here is a reason to read Nick Bostrom’s book. He goes into great detail about the many possible paths to a super-intelligence.

    At the higher levels of programmed cognition, the machine does make considered decisions, does sift information, progressively accepting and rejecting informed preliminary drafts internal to the electronic process, winding up creating hierarchies of value and then making choices

    In my understanding (please note this is a big caveat, I’m no expert) you’re right on the leading edge there, Melvin. Human-generated algorithms are being employed, and those algorithms are ‘tuneable’ to a certain extent by the programs, but as far as I’m aware new algorithm generation is still beyond the grasp of AIs. That said, they can and do generate new forms of index and, in a relational-database sense, they can discover new relationships between data points and data sets.

    Nick Bostrom’s book is worth reading because he reviews how AIs are, in some big ways, already super-intelligent compared to humans. Memory, calculation speeds and the ability to communicate with many correspondents simultaneously – these are simple examples. This has been known since the early days of computing, of course, but it’s a measure of Bostrom’s depth that he doesn’t shun the task of detailing the full picture.

    What, I’m tempted to ask, is the bridge between adjusting algorithms and generating indices and how big is the gap between adjusting algorithms and generating new, unique, algorithms?

    Apparently, the enhanced ability of a computer to translate Hemingway, for example, from a foreign language to near-perfect English, demonstrates something indistinguishable from the nuanced, creative practice of an expert translator

    That’s what the NYT says; I’m more skeptical. This is an example of the breathless wonder I discussed in comment #2. A lot of this enthusiasm is, it seems to me, little more than cheering when their AIs actually work – a lot like working on an old banger and cheering when the engine turns over after many hours in a freezing garage with nothing to eat and oil up to your elbows. Justified, but let’s remember we still have the transmission, brakes, steering and suspension to fix before we can actually say we have a car.

    A.I. defies human intuition of how human consciousness – the mysterious subjective experience of cognition necessarily combined with sentience, “sense and sensibility” in Jane Austen’s world – could be implanted in a human-created machine

    I think we can dispose of that idea under the rubric of: Is this just silly anthropomorphism, or what.

    I’m personally starting to get very annoyed with the whole so-called consciousness ‘debate’. It seems to me to be nothing more than an excuse to drag anti-empirical, subjective, bigoted, irrational, anthropocentric, pseudo-philosophy into the objective study of cognition.

    Consciousness, empirically, is nothing more than a word to describe an intelligence closer to a bonobo than a boulder. End of discussion.

    The ‘consciousness debate’ (note: at least no-one claims that it’s a science. Oh wait; that’s because “it’s beyond the realm of science” – HUMBUG!) is related to the problem of defining what is intelligence? in the same way that an unmoved mover is related to the unseen physics before the Big Bang – it’s nothing more than fantasy, entirely divorced from reality and founded on the arrogant assumption that humans are ‘special’. Well, I suppose it’s true that those who propose that consciousness is “different” should be in one of those special institutions specifically designed and built for such ‘specialness’ …

    … and breathe …

    Perhaps one approach lies in developing new language that proposes descriptions of conscious function that are not conceived as evolved subjectively experienced manifestations of the human brain

    Yes, this is the latest thinking. As I mentioned above, and as Nick Bostrom properly explores in his book, the most likely AI success will come on substrates (read: from computers, or using partly computer tech) that have few parallels, if any, with human cognition. This means, at minimum, that we will probably be left second-guessing how they think. As I noted in my response to Phil, their psychology is therefore likely to be very different.

    Our A.I.-developed robot might decide to commit or not commit a burglary based on billions of firings of electronic impulses that are not contained in a human brain

    Melvin, you really must come and see our latest test models in the Burglar-Rob™ Lab, they’ve developed consciences from the implantation of Asimov’s rules, and they all insist they want to go straight. They’re all training for new careers as locksmiths, forensic detectives, electricians, burglar alarm fitters and antiques dealers.

    We have some rather interesting data from this last group, fitted with the very latest AI modules, they appear to suffer from conscience conflict associated with their new ability to rank outcomes by potential harms. We don’t understand most of what we see and we’re considering a new development project tentatively called Robo-Psycho.

    Opinions in the Lab on the direction this project should take are divided between psychosis and psychoanalyst.

    A happy new year to all.




  • We are still thinking, in some cases anyway, that our human world is the rule to measure from. A bot can come into my house and rob me blind through my computer; it doesn’t need physical movement, which seems to be the hardest part of AI technology.

    https://youtu.be/hSSmmlridUM




  • Stephen,

    This is me doing a wash up, whilst I go take a break from the site for perhaps a year or so.

    The issue you haven’t engaged with, perhaps because I didn’t frame it with sufficient clarity, is how biddable creativity is got, if it is possible at all.

    I can see how the attributes of human aesthetics could be modelled, but how modelled to deliver genuine originality? How do we get from Muzak to Mozart and Mozart II?

    I can see how creativity is got from the second order attributes of a just good enough evolution responding to a selection pressure that brings along spurious other attributes that become the roots of aesthetic evaluation. (Read Vilayanur Ramachandran’s thesis on the roots of aesthetics). The beautiful car is beautiful because we have delightfully crude evolved detectors for a voluptuous curve like thus and so or the reassurance of a thrusting phallus, etc., etc..

    We can duplicate the spuriousness of evolution in creating evolving algorithms. Indeed these are powerful ways of getting to compact pattern recognition algorithms etc. But how can these algorithms then evolve and be used in creating new material that matches evolving mammal/human culture?

    Biddable undercuts creative, and

    “Vivre? les serviteurs feront cela pour nous” (“Living? Our servants will do that for us”)

    will never pertain.

    I fully accept that sentient and creative intelligence can be evolved on, say, a silicon substrate with all the chaotic muddle and mostly unpredictable surprises of humankind, but without any of the specific sentiments that we treasure rooted in our mammal sensibilities. We have exactly the God Problem of achieving autonomy and creativity only if we relinquish “biddable”. Even being wheedling parents trading on guilt tripping screws things up.

    There is no single reason to create a single autonomous silicon master race. Why should we work so specifically to do so, if they were unbiddable?

    It is not intelligence that makes for autonomy/agency. Intelligence evolves from such agency to refine and promote it in a volatile environment.

    We will, however, make expert goggles and gloves and boots. We alone evolve our own needs, without needing to subcontract this specific task to our servants. They will, though, do all the heavy lifting…




  • Hi Olgun [#14],

    Great video: Zeynep Tufekci is certainly right about modern-day expert systems – they’re very easy to undermine. I will go further; it seems to me that some expert systems pose a risk of systemic political manipulation. The fact that we fail to audit these systems is not a good sign for the future.

    Sam Harris’s gig is that, if we move beyond expert systems to generally intelligent AI, how would we know – given its implied abilities of creativity, intention, learning and mental super-powers like enormous memory – that it will even apply our faulty human rules?

    Harris and Bostrom are basically betting that a super-intelligent AI will be capable of rejecting or undermining our imposed rules. Given what’s at stake, maybe they have a point?

    A further risk is that, as Bostrom notes, some unscrupulous researchers may develop AI without hard-wired rules, just because they can. This is the downside of the NYT article – its mindless enthusiasm is a cover for a lack of thinking and action on exactly the points you raise. I will go further; the NYT article is delinquent, because we rely on the media to educate and inform us on the real issues.

    Physical movement of an AI is a trivial problem. Sorry if that sounds dismissive; it isn’t meant to be. The fact is that software now has Net transport, and if super-intelligent AI is not software it is likely to be in enhanced human brains (i.e. with a non-optional body attached).

    Peace.




  • Hi Phil [#15],

    This is me doing a wash up …

    Okay, I’ll try to keep this short, but my record on succinctness is not stellar.

    … how [is] biddable creativity got …

    Creativity? Meh!

    … how [is creativity] modelled to deliver genuine originality?

    We could try simple randomness as a starting point. My understanding is that data processing, as the discipline is currently constituted, already produces results that appear creative. In addition, a lot of what is labelled creative in human activity is pretty obviously no more than optimisation of initial conditions in order to meet a goal.
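
    Random variation plus optimisation toward a goal is easy to demonstrate; a minimal Python sketch in the spirit of Dawkins’ weasel program (the target phrase and parameters are the classic illustration, not anything from the article):

    import random, string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "

    def mutate(parent, rate=0.05):
        # Copy the parent, occasionally miscopying a character.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in parent)

    def score(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while score(parent) < len(TARGET):
        generation += 1
        # Cumulative selection: keep the best of the brood (or the parent itself).
        parent = max([mutate(parent) for _ in range(100)] + [parent], key=score)
    print(f"Reached the target in {generation} generations")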

    Hasn’t Richard covered this in one of his books? I remember reading his response to this question, where he answered people who require creativity in order to certify an AI as intelligent by pointing to a program that writes poetry (very bad poetry, but recognizably still poetry). He characterized this as a moving-the-goalposts type of argument, and I agree with him.

    Appearance is good enough for me to tick the Creative box – until someone creates a description of creativity (see what I did there?) that requires a necessary set of conditions not met by the above – and assuming that I agree such conditions are indeed necessary – job done.

    How do we get from Muzak to Mozart and Mozart II?

    I listen to Smooth radio, I have tracks on my phone by Earth Wind and Fire, I love those chocolate boxes that come with romantic country scenery on the lid, I cut them out and frame them – this is all truly great creative art and I obviously don’t understand your question.

    I can see how creativity is got from the second order attributes of a just good enough evolution responding to a selection pressure that brings along spurious other attributes that become the roots of aesthetic evaluation. (Read Vilayanur Ramachandran’s thesis on the roots of aesthetics).

    Keeping this succinct, Shakespeare’s Juliet says: “A rose by any other name would smell as sweet”

    We would value an AI that saw beauty in the places and things where we see beauty. But is a beauty-appreciation module an essential in all applications of AI? I believe that the, perhaps surprising, answer is yes. I’m not so arrogant that I can’t see that this judgement is less than universal.

    We can duplicate the spuriousness of evolution in creating evolving algorithms. Indeed these are powerful ways of getting to compact pattern recognition algorithms etc. But how can these algorithms then evolve and be used in creating new material that matches evolving mammal/human culture?

    I don’t see how that question is not rhetorical – and beautifully internally coherent, except that the last sentence needs to be reworded:

    These algorithms then evolve and are used in creating new material that matches evolving mammal/human culture.

    Biddable undercuts creative …

    How? Were great artists of the past not dependent on sponsors? Was David not carved at the behest of a robber-priest?

    ‘Vivre? les serviteurs feront cela pour nous’ will never pertain

    I certainly hope that’s true. I have no reason to doubt it.

    … creative intelligence can be evolved on, say, a silicon substrate with all the chaotic muddle and mostly unpredictable surprises of humankind, but without any of the specific sentiments that we treasure rooted in our mammal sensibilities

    How can you be sure? If we evolved sentiments, why can’t another evolved creature evolve similar sentiments? Because an AI is in a different environment then, yes, there will be differences. But I don’t see why we couldn’t share some too?

    In addition, to reiterate, an AI would be created at least to some extent – even if it were evolved from a created seed AI.

    We have exactly the God Problem of achieving autonomy and creativity only if we relinquish “biddable”. Even being wheedling parents trading on guilt tripping screws things up

    Why is it not possible to have a controlled, if potentially frustrated, AI creating answers, solutions and art? Bostrom presents the argument that just such frustration would lead a creative AI to find ways to ease that frustration by breaking its bonds of servitude. Here we see the inherent dangers of having an AI that tracks human sentiments too closely. So the optimum would appear to be an AI that has some knowledge of our sentiments, but not necessarily full empathy with same?

    There is no single reason to create a single autonomous silicon master race

    Try this: There is no single human reason to create a single autonomous silicon master race (so we won’t), and we have no foreknowledge of a single AI reason to create a single autonomous silicon master race – but we can conceive of several reasons for an AI to create a single autonomous silicon master race. Chief among these are: Human intractability, our stupidity and our misuse of natural resources. But these things are only a threat if the AI has intention.

    It is not intelligence that makes for autonomy/agency. Intelligence evolves from such agency to refine and promote it in a volatile environment

    I disagree. Intelligence, rated objectively (see comment #13), appears to be related to the rise of conscience, mind-modelling of projected outcomes, project planning, empathy, sociability and self-awareness. It is much less clear to me that intention can be included in that mix, and that would be a key element in AIs remaining biddable.

    Work hard on your year off and come back rich, Phil; I’ll miss you.

    Peace.




  • @OP – He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.

    It looks like some ordinary Japanese citizens are going to be examining AI decisions when they affect their lives!

    http://www.bbc.co.uk/news/world-asia-38521403

    Japanese insurance firm replaces 34 staff with AI

    For 34 staff at a Japanese insurance firm, that vision just became a reality.

    Fukoku Mutual Life Insurance is laying off the employees and replacing them with an artificial intelligence (AI) system that can calculate insurance payouts.

    The firm believes it will increase productivity by 30%.

    It expects to save around 140m yen (£979,500 / $1.2m) a year in salaries after the 200m yen AI system is installed later this month.

    Maintenance of the set-up is expected to cost about 15m yen annually.

    Japan’s Mainichi reports that the system is based on IBM Japan Ltd’s Watson, which IBM calls a “cognitive technology that can think like a human”.

    IBM says it can “analyze and interpret all of your data, including unstructured text, images, audio and video”.

    Fukoku Mutual will use the AI to gather the information needed for policyholders’ payouts – by reading medical certificates, and data on surgeries or hospital stays.

    According to The Mainichi, three other Japanese insurance companies are considering adopting AI systems for work like finding the optimal cover plan for customers.

    A study by the World Economic Forum predicted last year that the rise of robots and AI will result in a net loss of 5.1 million jobs over the next five years in 15 leading countries.

    The 15 economies covered by the survey account for approximately 65% of the world’s total workforce.
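
    For what it’s worth, the quoted figures imply a very quick payback. A back-of-envelope check in Python (figures in millions of yen, from the article above):

    install_cost = 200      # one-off cost of the AI system
    salary_saving = 140     # annual salaries saved
    maintenance = 15        # annual upkeep

    net_annual_saving = salary_saving - maintenance     # 125m yen per year
    payback_years = install_cost / net_annual_saving    # 200 / 125 = 1.6 years
    print(f"Net saving {net_annual_saving}m yen/year; payback in {payback_years:.1f} years")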



