When Is the Singularity? Probably Not in Your Lifetime

Apr 7, 2016

Photo credit: Zohar Lazar

By John Markoff

In March, when AlphaGo, the Go-playing software program designed by Google’s DeepMind subsidiary, defeated Lee Se-dol, the human Go champion, some in Silicon Valley proclaimed the event a precursor of the imminent arrival of genuine thinking machines.

The achievement was rooted in recent advances in pattern recognition technologies that have also yielded impressive results in speech recognition, computer vision and machine learning. The progress in artificial intelligence has become a flash point for converging fears about the smart machines that increasingly surround us.

However, most artificial intelligence researchers still discount the idea of an “intelligence explosion.”

The idea was formally described as the “Singularity” in 1993 by Vernor Vinge, a computer scientist and science fiction writer, who posited that accelerating technological change would inevitably lead to machine intelligence that would match and then surpass human intelligence. In his original essay, Dr. Vinge suggested that the point in time at which machines attained superhuman intelligence would happen sometime between 2005 and 2030.

Ray Kurzweil, an artificial intelligence researcher, extended the idea in his 2005 book “The Singularity Is Near: When Humans Transcend Biology,” in which he argued that machines will outstrip human capabilities in 2045. The idea was popularized in movies such as “Transcendence” and “Her.”

Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.

What has not been shown, however, is scientific evidence for such an event. Indeed, the idea has been treated more skeptically by neuroscientists and the vast majority of artificial intelligence researchers.



14 comments on “When Is the Singularity? Probably Not in Your Lifetime”

  • As with the title on the other thread, “A world where everyone has a robot: why 2040 could blow your mind”, the OP cartoon perpetuates the misleading image of robots as “tin humanoids”!

    We already have a huge number of robots: cookers, washing machines, sat-navs, iPhones, cars with cruise control, proximity and parking sensors, self-parking systems, and remote and automated controls on numerous household devices – not to mention robot-operated railways and industrial production systems!

    Unlike the comic OP robot lawnmower of the future, robotic lawn mowers look like this now!

    https://www.google.co.uk/search?q=robot+lawn+mower&source=univ&tbm=shop&tbo=u&sa=X&ved=0ahUKEwiio5Dp6f7LAhUFcRQKHRlqDgQQsxgIHQ&biw=952&bih=643




  • Humans diverged from chimps 13 million years ago. According to National Geographic, we share 96% of our genes with chimps.

    This means all the fancy thinking ability, language and motor skills evolved in an eyeblink, with the changes to implement them packed into a mere 1,200 genes. Human superiority can’t be as big a deal as we imagine.

    We should not imagine Einstein as typical human intelligence. We should think in terms of Duck Dynasty or Donald Trump.




  • Fears about robots as intelligent as humans seem to be on shaky ground. A different form of intelligence may indeed be a threat, but building better intelligence seems to me to be difficult at best. This is not because I think we are particularly smart, but because I suspect that intelligence may be problematic at the best of times. This is my reasoning, and I could easily be wrong about any of it, so I would appreciate any thoughts.

    I suspect that the conditions that allowed us to become more than a set of fixed instincts also leave us open to an enormous range of errors. I suspect, in short, that generating a truly intelligent machine may well lead to it, as often as not, being some sort of idiot.

    Computer code normally needs to follow a direct linear sequence of events: do this; if this happens, do that. To break that, you are going to (I think) need a set of programs which are flexible and able to be called on, but not so rigid that they cannot adapt to new situations. Our instincts do this, but can be imperfectly overridden by each other or by our conscious wishes (if we have any). We have instincts, memory of the past (imperfect), projection into the future (hopes and wishes) and emotions (some of which fall clearly under the category of instinct). Together these combine to create an emergent behaviour which we call intelligence (I think).

    Thus any machine that uses the same methods to gain new knowledge would need to be able to make these leaps between some sorts of instincts, feelings, drives or motivations. These, it seems, would need to be variable, or the results would be as predictable as a regular program. Any variation (tweaking, say, enthusiasm or fear or empathy) would yield wildly different outcomes in terms of what it thinks and how reliable those thoughts are. Of course, these things may not be necessary for thought. The real danger in my assumptions is probably anthropomorphising. Can you be intelligent without feelings? Or, to my point, can you be autonomous without them?

    A thinking machine would also need to be a machine that learns. But how well it learns may be a problem. 100% fidelity gives us 100% reliability, and thus we can be sure that the AI has learnt what is taught. The problem with this is that it would also make the AI 100% gullible. In Terminator we have Skynet, which, upon being connected to the WWW, became self-aware and smarter than humans. I find this hard to believe, for the simple reason that how would Skynet have measured the reliability of the data? Would Skynet have believed in UFOs, religion, flat earth, JFK conspiracies? Any system that trusted 100% would be 100% reliant on humans to give it information and would have to trust that it was 100% reliable. Any system which had doubt built in would need to be free to be mistaken. Would it then be any better than us?

    To my way of thinking, it would have to be capable of testing for itself, running experiments; even then it would have to develop some way of trusting the outcomes. This, to some extent, is already happening. Drones, for example, are given a task to do, say spin three times and stop in an exact position in the least amount of time. They practise and learn from each trial; this is achieved by a kind of evolutionary algorithm where some variation between trials is allowed and success is measured, so that after a few goes the drone can do the task more accurately (see the sketch at the end of this comment). However, this gets exponentially more difficult when situations are more complex.

    In short, I suspect that what we may get are machines that are intelligent but need to be free to be wrong to get there, in which case are they going to be any more or less dangerous than we are? Are we going to have the Terminator or Marvin the Paranoid Android?

    Ideas?
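
    A minimal sketch, in Python, of the kind of trial-and-error tuning described above – a simple (1+1) evolutionary strategy. The trial_error function is a made-up stand-in for running one physical trial and measuring how far the drone stopped from its target; all names and numbers are hypothetical illustrations, not real drone code.

    ```python
    import random

    def trial_error(params):
        # Hypothetical stand-in for one physical trial: pretend the
        # (unknown) ideal control gains are (0.7, 0.3) and score how far
        # this candidate's gains land from them (lower is better).
        return abs(params[0] - 0.7) + abs(params[1] - 0.3)

    def evolve(generations=100, step=0.1):
        """(1+1) evolutionary strategy: mutate the current best
        parameters; a mutant survives only if it scores better."""
        best = [random.random(), random.random()]  # initial guess at gains
        best_err = trial_error(best)
        for _ in range(generations):
            # Allow some variation between trials...
            mutant = [p + random.gauss(0, step) for p in best]
            err = trial_error(mutant)
            # ...and measure success, keeping the better variant.
            if err < best_err:
                best, best_err = mutant, err
        return best, best_err

    params, err = evolve()
    print(f"tuned gains: {params}, residual error: {err:.4f}")
    ```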




  • @roedy

“This means all the fancy thinking ability, language and motor skills evolved in an eyeblink, with the changes to implement them packed into a mere 1,200 genes. Human superiority can’t be as big a deal as we imagine.”

    Of course, it might be that a single gene that, say, turns off growth of the brain is delayed, and hence produces brains three times as large as a chimp’s; so it might be just a handful of genes producing a massive effect in terms of actual mental capacity.

    I do agree with your synopsis of human intelligence, though; most of us are idiots, I think. I’m horrified because I don’t see myself as terribly smart, and hence live in perpetual angst about our politicians and leaders being clearly as dumb as or dumber than I am. I would like our leaders to be my intellectual and moral superiors. Even worse is that so many of my fellow citizens can be so stupid as to support them. The real question is: can we manufacture an Einstein, or are we more likely to make a Donald Trump?




  • @Reckless Monkey

The real question is: can we manufacture an Einstein, or are we more likely to make a Donald Trump?

    I have pondered this as well.

    Some still-forming thoughts. We are the product of evolution. Evolution is a very poor engineer. It only results in a trait that works, not a trait that is perfect. So if the trait is successful, then the trait stays. If a better trait comes along, it prospers, of course, then plateaus. But evolution doesn’t scrap all the previous engineering; it’s just content to nail on an extra bit, or bend something. Our genome is a rubbish dump full of historical evolutionary traits. Babies can still be born with a tail if that gene somehow gets expressed.

    If the evolutionary trait of our intelligence reached a level that was successful, and there was no evolutionary pressure to improve on it, then it would peak at a point where it just managed to do what it needed to do. I think our intelligence peaked at our stone-age hunter-gatherer stage, and probably hasn’t advanced from there. We have escaped “survival of the fittest”, and now it is survival of everyone.

    Also, like any evolutionary trait, there will be a bell curve of intelligence. A few Einsteins, but mostly your common old pedestrian Homo sapiens that gets by and survives. These are the bulk of humanity. There is no reason for them to think about higher things, or even a need for them to think about thinking about higher things. Who’s going to win X Factor is enough to be getting on with. Or voting for a politician that echoes all of the things they hate, so that Trump can Make America Hate Again.

    I suspect Bertrand Russell was commenting on this when he said:

    “Most people would rather die than think. And most people do.”

    I suspect we already have machines/robots that can do most of the jobs of humans who sit in the middle of the bell curve. Manufacturing is almost entirely robotic. Australia’s huge iron-ore mining trucks are now autonomous and driverless, with a person a thousand kilometres away in the capital city overseeing activity. Apart from take-off and landing, aeroplanes fly themselves. Surely these are robots.




  • Reckless Monkey #3
    Apr 8, 2016 at 9:03 pm

    This is not because I think we are particularly smart, but because I suspect that intelligence may be problematic at the best of times. This is my reasoning, and I could easily be wrong about any of it, so I would appreciate any thoughts.

    I think the big danger is in the tendency of many brain-lazy humans to abdicate responsibility for decisions, and to leave operations to anybody or anything which will do it for them.

    This is illustrated in the (“god knows best”) fatalism of some religious groups, and also in technoduffers who use modern devices with a minimal understanding of their functions and limitations, assuming that all problems can be mindlessly solved by someone else, by throwing money at them!

    There are of course many capable scientists, engineers and tradesmen who provide these services within the scope of economic possibilities, but we only have to look at the disaster record in third-world countries and corporate-salesman-dominated areas to understand the need for competently identified, realistic planning, objectives and regulatory mechanisms.
    These are the very mechanisms which the religious right and anti-science, anti-expert-authority politicians promise to fight against in their attempts to impose their substitute authority of religion and exploitative elitism!




  • @David R Allen

“I suspect we already have machines/robots that can do most of the jobs of humans who sit in the middle of the bell curve. Manufacturing is almost entirely robotic. Australia’s huge iron-ore mining trucks are now autonomous and driverless, with a person a thousand kilometres away in the capital city overseeing activity. Apart from take-off and landing, aeroplanes fly themselves. Surely these are robots.”

    Yes, they most certainly are. I’d argue that they have some form of intelligence also, in the same sense as a cockroach. I teach the whole bell curve at school, and this is one of the reasons I worry about the gradual robotisation of manufacturing (on the other hand, I love it too): what are those people in the lower third of the bell curve going to be useful for?

    These most certainly would be considered robots, and they make decisions within the range our programming specifies. Take the autopilot: when engaged, it takes in inputs – sense data, GPS, accelerometer data, airspeed data and so forth – and acts according to specific rules, processing the data into outcomes, moving ailerons, elevators, throttle etc. to fly a course, land and so on. What they don’t do well is figure out what they should do if some of the data stops being useful, say the pitot tube gets clogged up with ice; they can’t, unless specifically programmed to, figure out how to get around their programmed instincts. For example, a pilot can feel how fast the aircraft is going through the pressure on the stick, as aerodynamic forces across the aerofoils increase with speed. So an autopilot with sensors built into the motors that run the ailerons could estimate airspeed, or even, using GPS and weather data, approximate airspeed to cross-check against aileron pressure (see the sketch at the end of this comment) – but it would still be fundamentally rule-driven. When you look at a piloting genius like Chuck Yeager and read about how he got crippled aircraft down, you appreciate how much ignoring basic rules, some level of autonomy, loads of practice and a true understanding of how aircraft work inside and out offer solutions 99% of us would never think of – certainly solutions no current robot could think of.

    Now, a truly thinking computer could do this type of thinking, trying out unique solutions, but that would open it up to making mistakes, which could prove fatal. You would need to teach it to fly while allowing it to make mistakes and learn from them; it would have to do aerobatics and extreme manoeuvres to feel how the aircraft responded; it would need motivation to want to stay alive at all costs, and instincts telling it to try anything to do so. In fact, a truly intelligent robot pilot might sit at the end of the runway and tell you it doesn’t feel like flying today.
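
    A minimal sketch, in Python, of the rule-driven cross-check described above, assuming the pitot reading is compared against two independent estimates (GPS groundspeed corrected for forecast wind, and an airspeed inferred from control-surface forces). Every figure, name and threshold here is made up for illustration; this is nothing like real avionics code.

    ```python
    def cross_check_airspeed(pitot_kts, gps_groundspeed_kts, wind_kts,
                             control_force_estimate_kts, tolerance_kts=15):
        """Sanity-check the pitot airspeed against independent estimates.

        Returns (airspeed_to_use, status). If the pitot value disagrees
        with every independent source, treat it as failed (e.g. iced
        over) and fall back to the mean of the other estimates.
        """
        estimates = {
            "gps+wind": gps_groundspeed_kts - wind_kts,
            "control-force": control_force_estimate_kts,
        }
        disagreeing = [name for name, est in estimates.items()
                       if abs(pitot_kts - est) > tolerance_kts]
        if len(disagreeing) == len(estimates):
            fallback = sum(estimates.values()) / len(estimates)
            return fallback, "pitot suspect, using fallback estimate"
        return pitot_kts, "pitot consistent"

    # Example: an iced-over pitot reads far below the independent estimates.
    speed, status = cross_check_airspeed(
        pitot_kts=60, gps_groundspeed_kts=250, wind_kts=20,
        control_force_estimate_kts=235)
    print(speed, "-", status)  # 232.5 - pitot suspect, using fallback estimate
    ```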




  • All machine concerns boil down to what’s worst or simply clumsy in mankind, with any possibility of empathy then taken away. ERROR:EMPATHY.SLAVE FAILED TO LOAD. We struggle to make better people of ourselves… what chance do we have of making a better mind that wouldn’t have the power to control the world? One decision, asserted and networked, will kill us all. Machines could do that without ‘blinking’. An AI could do it very efficiently, with a soothing voice and a delivery of snacks. How are we going to make sure no one ever makes a monster? I say it’s impossible. In the future, should there be one long enough for ‘us’, there will be machine-directed carnage. Or maybe the nanobots will do it without much in the way of intelligence at all. (Shrug.) I’m quite looking forward to the successes first, of course. Wooo! Progress!




  • For the first time in our history, we could be passing on our genes digitally?

    (Cue Twilight Zone music)

    For me, it’s not about the fight for this tiny planet. Robots are the only ones who can successfully colonise the universe!




  • Reckless #8

I teach the whole bell curve at school, and this is one of the reasons I worry about the gradual robotisation of manufacturing (on the other hand, I love it too): what are those people in the lower third of the bell curve going to be useful for?

    For all sorts of reasons, I have always believed that we need a big state to achieve our long-term ambitions. One of those reasons is precisely to manage this inevitability. Our demographic future, in stability and good health, will be old; we will benefit hugely from state care, and our society will be hugely helped by more civil servants. I think there are massive numbers of “jobs” for good citizens supported by the state: chatting gardeners, chatting cooks, chatting helpers of the very elderly, park keepers, public carers of all sorts.

    I propose that all of us get to serve a little of our time as public servants. It is everyone’s second job, but those who have the greater social and caring skills predominate time-wise.

    I also see, in our much richer society, a much greater demand for the bespoke, the artisanal and the beautiful.

    If wealth creation from standard circular economies (as they will become) is increasingly automated, then, in order to create the rich market for their services, the state will have to charge corporate tax at a greatly increased level.

    As the CTO(?) of IKEA has announced, we have perhaps reached the point of peak stuff. The rate of making and wasting will start to go down sometime soon. The balance of our (human) efforts will shift to the intellectually, aesthetically and socially creative.




  • For starters, biologists acknowledge that the basic mechanisms for biological intelligence are still not completely understood, and as a result there is not a good model of human intelligence for computers to simulate.

    It’s not necessary for AI to think like a human. It just has to be able to solve problems associated with human ability.




  • @Mark

    " How are we going to make sure we no one ever makes a monster?"

    I’d say we already have: Hitler, etc.

    @Phil Rimmer

    Beautifully put; I’d agree wholeheartedly with this, including the increased tax. Our government keeps pointing out that the other side is ignoring the need for taxes to fund their (slightly) more socially progressive promises. This is true, and the other side refuses to admit it; both sides need a good slapping around. We need sufficient taxes paid to meet the needs of our society, and we’d be happier and healthier if we did, even if we were paying half our income in tax to do so.




  • Reckless Monkey #13
    Apr 11, 2016 at 5:24 pm

I’d agree wholeheartedly with this, including the increased tax. Our government keeps pointing out that the other side is ignoring the need for taxes to fund their (slightly) more socially progressive promises. This is true, and the other side refuses to admit it; both sides need a good slapping around. We need sufficient taxes paid to meet the needs of our society, and we’d be happier and healthier if we did.

    It would help if delusional politicians would stop creating the requirement to repay borrowed money, and to finance interest payments on it, used for silly wars and their consequential damage – such as in Afghanistan, Iraq and Libya!

    http://www.hks.harvard.edu/news-events/publications/impact-newsletter/archives/summer-2013/the-costs-of-the-iraq-and-afghanistan-wars
    Any accounting of other macroeconomic costs associated with the wars, such as the impact of higher oil prices on aggregate demand, would easily bring the total to $6 trillion.



