Photo credit: Zohar Lazar
By John Markoff
In March, when AlphaGo, the Go-playing software program designed by Google’s DeepMind subsidiary, defeated Lee Se-dol, the human Go champion, some in Silicon Valley proclaimed the event a precursor of the imminent arrival of genuine thinking machines.
The achievement was rooted in recent advances in pattern recognition technologies that have also yielded impressive results in speech recognition, computer vision and machine learning. That progress in artificial intelligence has become a flash point for the fears many of us feel about the smart machines that increasingly surround us.
However, most artificial intelligence researchers still discount the idea of an “intelligence explosion.”
The idea was formally described as the “Singularity” in 1993 by Vernor Vinge, a computer scientist and science fiction writer, who posited that accelerating technological change would inevitably lead to machine intelligence that would match and then surpass human intelligence. In his original essay, Dr. Vinge suggested that the point in time at which machines attained superhuman intelligence would happen sometime between 2005 and 2030.
Ray Kurzweil, an artificial intelligence researcher, extended the idea in his 2005 book “The Singularity Is Near: When Humans Transcend Biology,” in which he argued that machines will outstrip human capabilities in 2045. The idea was popularized in movies such as “Transcendence” and “Her.”
Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.
What has not been shown, however, is scientific evidence that such an event is likely. Indeed, the idea has been treated with skepticism by neuroscientists and a vast majority of artificial intelligence researchers.