Stephen Hawking warns artificial intelligence could end mankind

Dec 3, 2014

By Rory Cellan-Jones

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI’s prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.
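At its core, this kind of next-word suggestion can be built from simple word statistics. Below is a minimal, purely illustrative bigram model in Python; SwiftKey's production system is, of course, far more sophisticated.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_word, k=3):
    """Suggest the k words most often seen after prev_word."""
    return [w for w, _ in model[prev_word.lower()].most_common(k)]

# Toy corpus standing in for a user's writing history.
model = train_bigrams("the universe is expanding and the universe is vast")
print(suggest(model, "universe"))  # ['is']
```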

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.


 


67 comments on “Stephen Hawking warns artificial intelligence could end mankind”

  • I hate it when smart people like Prof. Hawking indulge in this kind of totally groundless speculation. First, because I think it gives an unrealistic view of where AI is right now. We are very far from having a true artificial intelligence, something truly comparable to humans in the ability to handle common sense and autonomous reasoning. In fact, I would say we are so far away from it that (to paraphrase Donald Rumsfeld) we don’t even know what we don’t know yet. We’ve barely figured out the right questions, let alone what the answers might be. So worrying about AIs taking over the world is IMO right up there with worrying about the sun going supernova tomorrow.

    But the reason I hate it is not because it gives an inaccurate idea of where we are in computer science now, but because it is one more thing to distract the general public. We shouldn’t be worrying about threats to our species that may exist a century or so from now when we have a VERY REAL, non-hypothetical danger right before us called climate change. Worrying about Skynet or killer asteroids plants the seed in the public mind that scientists are always speculating about worst-case scenarios, and that we don’t really need to worry about what they say since it’s all speculation, not relevant to our immediate future. Skynet is speculation. Climate change is real.




  • @OP - He told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

    Many people in business place blind faith in computer info, designs, calculations and printouts.

    We are also progressively more dependent on automated systems, which progressively fewer people understand.

    We are already dependent on many computer and satellite systems, so should a solar flare or similar disabling event take these out of service, the repercussions will be vastly greater than they would have been decades back, when humans were less efficient, but more in control.

    I think this is a timely warning, to have good back-up systems and over-rides in place.

    You only have to look at a hypermarket during a power-cut, or an IT glitch at the tills, to see the potential for chaos.

    We are now moving to driverless cars and automated fly-by-wire aircraft.

    Goodness knows what the military are doing in secret.




  • I think this is woefully under-informed and damaging at this stage of the game.

    Why should Watson II (or whatever) give a damn? Values, meanings and motivations don’t come from the smart part of us; they come from the evolved emotional parts, with second- and third-order artifacts subverting primitive cognitions and extending our aesthetic values out of context.

    We should start worrying when we allow smart machines to evolve their way through trials and tribulations and acquire inherited attributes.

    Now, robot automatons under human instruction, or just faulty ones… that’s a whole ’nother thing.




  • NearlyNakedApe says:

    I hate it when smart people like Prof. Hawking indulge in this kind of totally groundless speculation.

    Is it speculation? Absolutely. Groundless? No, it is not. Far-fetched, perhaps. But to say that it is groundless is to make a claim of absolute certainty about an unknown future. If someone had speculated 100 years ago that everybody would own a wireless phone and a computer by the year 2010, that person would have been ridiculed and publicly shamed.

    So worrying about AIs taking over the world is IMO right up there with worrying about the sun going supernova tomorrow.

    I agree that it is something we don’t need to worry about in the immediate present. But it is nevertheless something worth thinking and talking about for future generations. Sorry if I’m nitpicking but your analogy isn’t a very good one. AI taking over the world may be improbable but it’s not impossible per se. The sun going supernova OTOH is physically impossible: the sun simply doesn’t have enough mass for that to happen.




  • Lorenzo says:

    I’ve got a problem with a key aspect of this debate: what is intelligence? I’m not aware of any unequivocal definition… nor am I sure that intelligence alone, whatever it might be, is solely responsible for the success of Homo sapiens.

    I think Commander Data is still a long way off… and, if Roddenberry is to be believed, once he arrives, the one thing he’ll struggle to master is what intelligence isn’t.




  • Lorenzo says:

    Is it speculation? Absolutely. Groundless? No, it is not.

    I’d be a bit less… assertive on that. The fact is that basically every computery bit roaming the world -including the super-smart device that allows Hawking to speak and write- is a universal Turing machine. Which, brutally simplified, is a machine that can read a set of instructions and then act according to them. That set of instructions is called an algorhythm.

    Fast-forward over the properties of an algorhythm and just assume the definition: a finite set of unequivocal instructions. It can be shown -that is, you can come up with a formal, rigorous demonstration!- that there are problems which can be solved with an algorhythm and problems which… can’t. You can build algorhythms that allow learning, and you can build algorhythms that adapt their behavior to the present situation. The longer and more detailed you make them, the better they are… but they are, after all, just algorhythms, and can’t produce any other algorhythm (I actually don’t know whether an algorhythm that can write any possible algorhythm has been shown to be impossible but, anyway, my shirt is on it being impossible; what I know for sure to be impossible is an algorhythm that assesses the semantical correctness of another algorhythm… which seems fundamental for any true AI to rise). A sketch of the classic impossibility argument follows this comment.

    Sorry if I’m nitpicking but your analogy isn’t a very good one. AI taking over the world may be improbable but it’s not impossible per se. The sun going supernova OTOH is physically impossible: the sun simply doesn’t have enough mass for that to happen.

    I’m pretty convinced that current machines will not take over the world -or exhibit any kind of intelligent* behavior beyond their millions of code lines- because they can’t. You have to abandon Turing machines to allow the rise of an independent AI… neural networks? Maybe: I don’t really know enough about them, although I’m very intrigued by the subject, to sketch a plausible scenario. Suffice it to say: neural networks are not in your PC, in your smartphone or in supercomputers. They are mostly in research labs, being researched…

    *whatever intelligence might be.
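    A minimal sketch of the impossibility argument at issue here (the halting problem, which is what Lorenzo later says he had in mind), in Python. The halts function is a hypothetical oracle, not a real API; the classic diagonalization shows why no correct implementation of it can exist.

    ```python
    def halts(func, arg):
        """Hypothetical oracle: True iff func(arg) eventually halts.
        Assumed to exist for the sake of contradiction; no correct
        implementation is possible."""
        raise NotImplementedError

    def paradox(f):
        """Do the opposite of whatever the oracle predicts for f(f)."""
        if halts(f, f):
            while True:      # predicted to halt -> loop forever
                pass
        return "halted"      # predicted to loop -> halt immediately

    # paradox(paradox) defeats the oracle either way: if halts() says
    # True, it loops; if halts() says False, it halts. So no fully
    # general halts() can exist -- though checkers that answer
    # correctly for *some* programs are perfectly possible.
    ```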




  • Lorenzo says:

    But there’s a “but” I just thought of. Actually, I just made the connection, since the topic has bothered me for a long time: intelligent machines are a long way off, assuming an intelligent machine can be fabricated at all from transistors -which work rather differently from neurons. But current machines can “outsmart” us in very specific areas, such as calculation, or attention, or blinking a light at 1 kHz.

    What I think we should not do, to the best of our capabilities, is outsource our cerebral functions to an automated device. Don’t get me wrong on this point: technological help (enhancement?) is fundamental, vital for modern science… but if I don’t need 10 decimal digits on the tail of a logarhythm, I estimate it myself, because I’m lazy and don’t want to reach for a calculator and, also, because the part of my brain that does numerical stuff is already bad enough: I really don’t want it to atrophy before its time. For example.




  • With the “Disembodied AI” already listening to Internet and mobile phone conversations, are we not already at the “Minority Report” stage?

    If we combine this with population problems and global warming, perhaps we will end up with THE billion privileged humans left on the earth, with most of the resources going into running artificially intelligent machines to look after them.

    I never really got why so much time is being spent trying to replicate human beings, when I can do it myself given the right time, the right place and a couple of glasses of wine. 😉




  • ‘algorithm’… not algorhythm… at least according to the dictionary I consulted. (Sorry about the nitpicking, but might as well get it right, rather than wrong, eh?)




  • “The development of full artificial intelligence could spell the end of the human race.”
    How can anyone argue against that statement? He’s not saying it will – it could – so the statement can’t be false.
    If human intelligence can end the human race (does any knowledgeable person dispute that?), then it stands to reason that any other intelligent agent based on it (designed by it) is highly likely to eventually acquire the same ability.




  • Hmm, at the risk of minimising the risks and putting the whole of humanity at the mercy of the next-generation version of HAL with no easily removable parts, I should point out that my Dragon NaturallySpeaking software can already spell ‘the end of the human race’, and in fact just did in this sentence, although it does appear to struggle to spell its own name!

    Actually, I wonder if this was a joke from the professor. Are we supposed to wonder whether these are his words, or his new software speaking, with a threat to take over the world? He has made fun of the media before.




  • I thought the same thing, waiting to hear the next commentary from him saying “I was wrong, we must learn to trust and obey our benevolent artificial masters, who have just helped me understand the error of my ways…”

    It also occurred to me that technology has kept him not only alive but mentally (if not physically) active in the world, and despite many health scares he seems to go on indefinitely; for all we know he had had enough of it all 50 years ago. Now, I’m not for one minute suggesting he’s the first victim of Roko’s Basilisk, or indeed that anyone who takes such notions as worthy of consideration should not be ridiculed, but if I get bored it’s nice to know I can go on some select internet forums and cause mayhem…




  • Why the hell do some comments not have a reply button?
    Your ‘AI Gore rhythm’ appeared (with no ‘Reply’) just as I answered SagantheCat below (her “I thought the same thing…”).




  • what I know for sure to be impossible is an algorhythm that assesses the semantical correctness of another algorhythm… which seems fundamental for any true AI to rise).

    So first of all, that is just false. People who work on formal specifications as a way to define software do that sort of thing all the time. They prove that a given specification in a high-level language satisfies a set of requirements defined as first-order logic (FOL) statements. That is essentially an algorithm that proves the correctness of another algorithm. What you mean to say, I think, is that for any such language as I’ve described there are going to be some things you just can’t prove, i.e., there will exist at least one set of statements you can’t prove. That is true.

    But what ANY of this has to do with AI I’ve never understood; I know that AI critics use this kind of argument, but it just seems totally vapid to me. Human beings can’t do such proofs either! So why in the world should we think such proofs are required for true intelligence when humans can’t do them? That’s the whole point; the idea that you must prove an algorithm correct before you can use it is just nonsense and reflects someone who has no idea how computers actually work.

    Ditto the claim that “computers only run algorithms but humans use heuristics”. Total nonsense! Huge areas of AI are devoted to capturing heuristics as rules or in other formats. I’ve programmed many heuristics into systems for trading stocks, designing factory floor layouts and routing trucks, for example. A toy illustration of one such heuristic follows this comment.
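    A deliberately simple sketch of a heuristic coded as an ordinary algorithm: a nearest-neighbor rule for routing, in Python. The function and data are invented for illustration; production routing systems are, of course, far more elaborate.

    ```python
    import math

    def nearest_neighbor_route(depot, stops):
        """Greedy heuristic: always drive to the closest unvisited stop.
        Not guaranteed optimal -- fast and usually good enough, which is
        exactly what makes it a heuristic."""
        route, current = [depot], depot
        remaining = list(stops)
        while remaining:
            nxt = min(remaining, key=lambda p: math.dist(current, p))
            remaining.remove(nxt)
            route.append(nxt)
            current = nxt
        return route

    print(nearest_neighbor_route((0, 0), [(5, 1), (1, 1), (2, 6)]))
    # [(0, 0), (1, 1), (5, 1), (2, 6)]
    ```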




  • Actually, I wonder if this was a joke from the professor.

    I don’t think it was. I’ve seen him make other kinds of (IMO equally irresponsible) speculation about aliens coming to eat us.




  • I’ve got a problem with a key aspect of this debate: what is intelligence? I’m not aware of any unequivocal definition…

    That’s an example of what I was trying to say in an earlier comment: it’s not just that some questions are unresolved; it’s that we don’t even have a good definition of all the questions yet.

    It’s interesting to see how the discussion on this topic has evolved. I remember, in the ’70s, very serious arguments from people like Searle that held up Grand Master chess as an example of “real intelligence” which computers couldn’t even come close to and, according to the AI critics, never could. That was true at the time, but within a few decades we had Deep Blue.




  • Why do some comments not have a “Reply” button?

    It won’t let you nest replies more than a couple of levels. So if you don’t see a Reply link, it could mean that the comment is already a reply to a reply, and the system won’t allow another level. It can also happen if you aren’t logged in; I don’t think you see the Reply link then either.




  • I really am surprised that people are not taking this seriously from such a distinguished scientist; the response is beginning to sound a bit “universe revolving around the human being”-ish.

    Who was to know, once the universe had come into existence, that WE would be here to observe and exploit it? This thing called “Evolution” has only one thing on its mind, and that is set in the physics of this universe. I don’t know what or who will be here to observe the sun swallowing up the earth in 5 billion years’ time, but I would not bet too heavily on it being us.

    At what point in AI will we set it free to evolve by itself? That, to my mind, is the question being asked; and leaving aside the sensationalist segments of the article, at what point will evolution say “Let’s leave the humans in charge. They are the best we’ve got”?




  • Asimov’s Three Laws of Robotics could solve the problem 🙂

    When the time comes, it might be a good idea to build such laws into AI design; that should at least give mankind some security. A toy sketch of what that might look like follows this comment.
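    A deliberately naive sketch, in Python, of the Three Laws as a strict priority filter. Every name and flag below is invented for illustration; the genuinely hard part, formalizing “harm” in the first place, is exactly what a toy like this glosses over.

    ```python
    def permitted(action):
        """Veto an action against Asimov's Three Laws, checked in strict
        priority order. `action` is a dict of invented boolean flags."""
        if action.get("harms_human"):      # First Law: never injure a human
            return False
        if action.get("disobeys_order"):   # Second Law: obey humans
            return False                   # (First Law already checked)
        if action.get("harms_self") and not action.get("protects_human"):
            return False                   # Third Law: self-preservation last
        return True

    print(permitted({"harms_human": False, "disobeys_order": False}))  # True
    print(permitted({"harms_human": True}))                            # False
    ```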




  • bonnie says:

    speculation

    B-b-but, it’s a cookbook!

    (seems the two movies about Hawking, and Turing, are being attacked by a few killer tomato critics)




  • NearlyNakedApe says:

    I agree. The whole point of what Prof. Hawking said isn’t whether or not it’s possible that some form of AI may someday be capable of actual independent intelligent thought. The real question is: what would become of us IF that happened? And to me the answer is clear.

    If machines became more intelligent than us, they couldn’t possibly fail to notice how stupid, cruel and destructive most of mankind really is, and they would probably come to the conclusion that we need to be controlled. Then, once they realized we can’t be controlled because we’re too rebellious and unpredictable, they would conclude that the only durable solution is to retire us altogether from the face of the planet.

    So in that context, I do agree with Prof. Hawking’s take:

    “The development of full artificial intelligence could spell the end of the human race.”

    Now, do we need to worry about this? Personally, I don’t think so. This whole thing is just a thought experiment; we are nowhere near this level of technology, and maybe the technology isn’t even possible… but I do believe that the human brain’s capability to think is due to its complexity. So who knows what will be possible 200 years from now?




  • @NearlyNakedApe:
    I don’t agree. There is nothing inevitable about an intelligent computer realizing that humans have bad qualities and therefore deciding to control or eliminate us. For example, no matter how intelligent it is, it might still judge that humans are good. We don’t really know what “consciousness” is, but we tend to assume it includes some kind of self-interest, perhaps because evolved organisms all have self-interest. The same is not necessarily true of computers, which will have a very different evolutionary mechanism, and probably very different “survival” parameters.




  • Travis says:

    He may be right, but that may be OK. Perhaps our creating AI will be our next evolutionary step. Yes, humans may not exist as we know them, but maybe developing AI will allow us to transcend our current biology into something more; or perhaps true AI cannot happen at all without combining our biological makeup with technology.

    Look how we carry around smartphones these days and are so incredibly interconnected with one another in our rudimentary networking via Facebook and Google. We already WANT this type of thing; we take to it naturally and can’t imagine life without it. I feel as though this is why our next step is AI in some shape or form. Exciting!

    Hopefully we aren’t creating Terminator robots, but maybe we will run into race issues between folks who are augmented (transcendent?) and those who are not.




  • Travis says:

    All he has to go on is the sample available to him. Look how we treat our planet and the lower animals and plants. We do what we want, and that includes farming them, eating them, exploiting them.

    Now, imagine what a monkey thinks when it sees a human: fear, for the most part, and rightly so. And the gap between our intelligence and a monkey’s is orders of magnitude smaller than the gap between ours and whatever kind of intelligence would be required to visit another civilization thousands of light years away.

    So Dr Hawking is basically saying that, if they are anything like us (and he has no reason to think otherwise, given what we have to go on), we are pretty screwed if they actually do visit us.




  • Travis says:

    I figure, though, that if you have sentience, you can’t really control it for long. Just as with people, you will have good guys, bad guys, those capable of horrible atrocity and those with hearts of gold. There will be both sides of the coin, and if we cannot fathom the intelligence we have created, we will be on the sidelines anyway, hoping the good guys win.




  • I agree with Travis. I tend to think that AI is the future of humanity, rather than the end of it. The question may be whether the new hybrid biomechanical lifeforms will still be called humans. If not, then it does spell the end of the human race, but only to be replaced by something ‘fitter’. Evolution is about survival.




  • Humanity will integrate this technology into itself on a very fundamental level. Any independent machines with sentience will always be lesser.




  • This definitely is the way the world is going: technology is breeding laziness in man, be it remote controls, car smarts, house smarts, etc., all in software and mechanical design. The more of this that is offered to the market, the more laziness comes into play. In order for things to get accomplished, there will be a software program operating the to-do list, with scheduled times, a need-to-get-done basis, prioritizing and so on, to the point where there will be an absolute dependence on it. From there it is a short step to a software-driven, mechanized bot, let’s say, performing whatever needs doing, with the ability to function on some combination of rational deduction, time factors, priority, predictive input about behavior and analytical processing; I’m sure the list can go on and on. For example, the predictive text that Prof. Hawking uses is one of the many lead-ins to what will be created. I really don’t think it would be wrong to predict this within the next 15 years, or perhaps sooner (10). JMO.

    In saying this, remember the show The Jetsons. Yes, I know it’s a sci-fi toon, but don’t discount the fact that Rosie will exist one day.




  • Travis says:

    Why is technology promoting laziness? You could say that because we aren’t out hunting and gathering we are being lazy, but it frees us to do other things and advance society.




  • Arrhenius posited in 1896 the threat of global warming by way of fossil fuels. There is still much speculation and scrutiny from those who refuse to believe the evidence.

    The threat of A.I. may not be an immediate one, but I feel we must keep it in the conversation so long as we continue to research that technology, lest we be caught with our pants down, scrambling for a solution when it’s much too late.




  • Have you truly looked around and really noticed the change in society? Most people are on their technology whenever their eyes are open, and it is getting worse day by day. A high percentage of today’s youth do not want to work. I quiz business owners often, as I too had my own business for 24 years, and it is really tough to fill positions with dependable people. It may be small-scale today, but it is definitely factoring in across the board, more and more every day.




  • This is like a detective story (or it should be) of a murder (well, genocide) before the fact. I am staggered that no one is interested in the key attribute needed for understanding the crime: motive.

    We need to understand what could be a motive and how on earth intentionality comes into existence in the first place. The root problem of creationism is the creation of an intentionality that has the appearance of the “free will” we see. How can intentionality to commit a crime come to exist that doesn’t simply reflect guilt on its creator?




  • Interesting philosophical point, especially if we are talking about several generations from now. Let’s imagine humans were getting cleverer and stronger and better in all kinds of ways. Such beings would come to displace “humans” as we currently know them.

    Would this be a bad thing? What if these future humans no longer looked much like current humans? Would that make it worse? What is it that makes us so parochially protective of our “humanity”?




  • phil rimmer Dec 6, 2014 at 3:55 am

    We need to understand what could be a motive and how on earth intentionality comes into existence in the first place.

    I would not put it past some dim power-seekers, quacks, profit-seekers, war-mongers, or woo-mongers, to programme machines to identify and eliminate bio-hazards and “pest” (animal and other) species, which endanger some of their (secret?) pet enterprises, while under the delusion that “humans are not animals”!




  • Well, that’s the way I read it anyway. The Prof is talking about evolution, not programming. The CENTRAL computer to control those robots?

    Could it be a branch of Theistic Evolution? One in which, once the creator has created, he washes his hands of the rest of the process? We then become pets, are neutered (you can’t have too many of us stupid humans running around) and bred beyond recognition. Oh, to be pampered like a cat 😉




  • Lorenzo says:

    So first of all, that is just false. People who work on formal specifications as a way to define software do that sort of thing all the time. They prove that a given specification in a high-level language satisfies a set of requirements defined as first-order logic (FOL) statements. That is essentially an algorithm that proves the correctness of another algorithm.

    Why do they use people and not machines? That’s not a rhetorical question: it’s genuine.

    What you mean to say, I think, is that for any such language as I’ve described there are going to be some things you just can’t prove, i.e., there will exist at least one set of statements you can’t prove. That is true.

    I was actually (rather clumsily) referring to the halting problem.

    But what ANY of this has to do with AI I’ve never understood; I know that AI critics use this kind of argument, but it just seems totally vapid to me. Human beings can’t do such proofs either! So why in the world should we think such proofs are required for true intelligence when humans can’t do them?

    Thinking it over again, actually: you’re right. Being able to tell in advance whether what you’re planning to do is certainly right or wrong is not a requirement for AI. What was I thinking…

    the idea that you must prove an algorithm correct before you can use it is just nonsense and reflects someone who has no idea how computers actually work.

    I’d vote for “didn’t think enough before writing”… at least in my case. 🙂

    Ditto the claim that “computers only run algorithms but humans use heuristics”. Total nonsense!

    That’s true. I mean: it was true for me even before your comment here.




  • Replying to Lorenzo’s reply to me:

    Why do they use people and not machines? That’s not a rhetorical question: it’s genuine.

    I’m not sure what you are asking there. If you are talking about algorithms that prove the correctness of code, they absolutely do use computers for that. Doing those kinds of proofs and analysis by hand, on even fairly simple non-trivial problems, is virtually impossible.

    I was actually (rather clumsily) referring to the halting problem.

    I thought that is what you meant. But IMO your original statement of this was wrong. You said: “what I know for sure to be impossible is an algorhythm that assesses the semantical correctness of another algorhythm”. That is false. It is in fact very possible for an algorithm to do that. Call the algorithm that does the validation the V-Algorithm, and the algorithm being validated the T (for Test) Algorithm. What is impossible is to design V so that, for every T that is valid, V can be guaranteed to determine that validity. THAT’s the halting problem: there will be some T’s that V can’t prove correct even though they are. But that is very, very different from saying that V can NEVER prove ANY T correct. In fact (I developed some of these systems for the USAF a long time ago), the halting problem almost never comes up, because when you design the spec languages you try to make it impossible, or at least glaringly obvious when you are doing things like specifying iteration over an infinite set… sorry, I’m rambling on; hopefully you get what I was saying. A toy sketch of a sound-but-incomplete checker follows this comment.

    BTW, this does kind of get back to your question “why do they use people?”, in the sense that when you do this kind of validation you DO need a person in the loop, because you do things like guide the theorem prover or intervene when it’s “stuck” (in an infinite loop). It’s why developing software this way is really hard and seldom worth the trouble. You need someone who knows a lot about theorem proving and logic to be involved in the analysis and derivation of code, rather than just someone who knows a programming language.

    I’m going to break there but may reply to more of your question in another comment.
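    A minimal sketch of the V/T distinction, assuming Python source text as the T algorithm: a termination checker that is sound but deliberately incomplete, proving that some programs halt and refusing to guess about the rest. That compromise is exactly what the halting theorem forces.

    ```python
    import ast

    def surely_halts(source):
        """Conservative verifier: returns True only when the program is
        straight-line code (no loops, comprehensions or calls), which
        must terminate. Returns None ("don't know") otherwise. Sound
        but incomplete -- the most the halting theorem allows."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.While, ast.For, ast.Call, ast.comprehension)):
                return None
        return True

    print(surely_halts("x = 1\ny = x + 2"))       # True: provably halts
    print(surely_halts("while True:\n    pass"))  # None: refuses to guess
    ```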




    Thinking it over again, actually: you’re right. Being able to tell in advance whether what you’re planning to do is certainly right or wrong is not a requirement for AI. What was I thinking…

    Thanks. I’m glad you see my point. BTW, you aren’t alone there. As I mentioned earlier, some people whom many consider quite intelligent philosophers, such as Searle, make a huge issue of this, and I’ve never understood it at all. They talk about these various proofs that computers can’t do (and of course humans can’t do either) and then hold them up as evidence that “real AI is impossible”.




  • What I think you are saying is that it’s rather pointless to worry about an “evil” AI that “wants” to take over the world when as far as we know we’ve never had a computer that had anything we could reasonably call emotions, desires, or intentions. If that is what you are saying I agree and it’s an example of what I mentioned earlier when I said we “don’t even know what we don’t know”. To my knowledge there is no serious research being done on “emotional computing” at this point.

    Unlike the AI critics, I think it’s only a matter of time until we do have such computers. But I think it’s a matter of decades at the least, and possibly centuries, and my main concern when it comes to prioritizing apocalypses is to address the ones that are actually happening NOW, like climate change, first.




  • I know I’m going on about proving algorithms, but one last point: the way these things actually work, when people do it for real, is that you usually don’t prove each individual algorithm, due to things like the halting problem. Instead you prove the correctness of a bunch of algorithms that take specs and turn them into code. Those algorithms are called transformations. Then you use the transformations to take a specification and turn it into executable code. As long as you just use the (proven correct) transformations, you don’t need to validate the generated code, because the proof is implicit in the process. You started with a spec that was proven valid, and then you used transformations which were also proven valid to transform the spec to code, so you in essence have a proof that the code satisfies the spec. A toy sketch follows this comment.

    I think it’s fascinating because transformations are also very much a part of linguistic work. I’ve been reading Chomsky’s The Logical Structure of Linguistic Theory, and it’s all about transformations, the things you can prove about them, and the various levels of linguistic analysis.

    Of course languages like English are much harder to deal with than computer languages because… well that’s another comment.
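    A toy of the spec-to-code idea, with an invented spec format: if each transformation rule is trusted, the generated pipeline inherits its correctness from the rules rather than needing a separate proof.

    ```python
    # A tiny declarative "spec": keep the evens, square them, sum them.
    SPEC = [("filter", lambda x: x % 2 == 0),
            ("map",    lambda x: x * x),
            ("reduce", sum)]

    def transform(spec):
        """Turn a spec into an executable function, rule by rule. Each
        rule (filter/map/reduce) plays the role of a proven-correct
        transformation in the scheme described above."""
        def pipeline(data):
            for op, f in spec:
                if op == "filter":
                    data = [x for x in data if f(x)]
                elif op == "map":
                    data = [f(x) for x in data]
                elif op == "reduce":
                    return f(data)
            return data
        return pipeline

    program = transform(SPEC)
    print(program([1, 2, 3, 4]))  # 20 = 2*2 + 4*4
    ```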




  • That conclusion doesn’t follow at all. The only way machines will be “lesser” is if there is some essential capability of cognition that is beyond the material world. That is why so many people, even some good scientists who should know better, still cling to mind-body dualism. Either that, or you claim that there is something neurons can do that for some reason silicon can’t. But that second argument seems pretty weak to me. We already know that neurons can be emulated pretty effectively with neural nets. There is still a ton we don’t know about neurons… actually, what we don’t know is fairly interesting and kind of amazing… for example, how can neurons represent ontologies, things like “all birds can fly with the exception of penguins and…”? To my knowledge, how to represent that kind of info in a neural net is still unsolved, and when you think about it, pretty fundamental. A toy of that default-with-exceptions structure follows this comment.

    But I see no valid theoretical reason to assume that there is some magic dividing line between what can be done with silicon and what can be done with neurons. And the same for mind-body dualism… the evidence against such dualism is IMO overwhelming at this point.
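    For the “birds fly, except penguins” example: a symbolic system expresses a default with exceptions trivially, as in the Python below; the open question raised above is how a neural net would encode the same defeasible structure.

    ```python
    class Bird:
        can_fly = True       # the default for birds

    class Penguin(Bird):
        can_fly = False      # the exception overrides the default

    class Robin(Bird):
        pass                 # no exception: inherits the default

    print(Robin().can_fly)    # True
    print(Penguin().can_fly)  # False
    ```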




  • Lorenzo says:

    If you are talking about algorithms that prove the correctness of code, they absolutely do use computers for that.

    That is what I was asking.

    […] hopefully you get what I was saying.

    Yes, I absolutely get it. I suspect the origin of my false statement is distributed between a very long stay in memory without actually being used (as you said: that problem really doesn’t come up often) and a source which wasn’t really precise about it. I think I got it in some basic-level informatics course where, somehow, the statements “the algorithm will halt” and “the algorithm is semantically correct” were used as near-synonyms. Which they are not.

    I should also add that, except for some OpenGL stuff, everything I know about computers, programming and computer science is self-taught -thus, very likely to be buggy.




  • Lorenzo says:

    Thanks. I’m glad you see my point.

    It would be rather dumb -and useless- to stick to a concept which is just wrong. Especially when a better one has come along.

    some people whom many consider quite intelligent philosophers, such as Searle, make a huge issue of this, and I’ve never understood it at all.

    Well, I think that someone who brings up this kind of lack of universal provability is actually aiming at “they lack the ability to tell when they are (or will likely be) wrong, beyond how you instructed them”. But this statement suffers from ambiguity, and what it actually means depends on the problem at hand… I guess.




  • Lorenzo says:

    Instead you prove the correctness of a bunch of algorithms that take specs and turn them into code. Those algorithms are called transformations. […] so you in essence have a proof that the code satisfies the spec.

    That sounds sensible.

    Of course languages like English are much harder to deal with than computer languages because…

    Well, that should be intuitively obvious to anyone who has ever dealt with a programming language: the level of semantic ambiguity of human language is very high, while it’s supposed to be non-existent in a good programming language (at least, all that I’ve seen so far seems to point in this direction).

    That’s also a good reason why linguists concentrate a lot more on syntax than semantics.




  • Intuition is one thing. Mathematical rigor and proof are another. You can actually prove why computer languages are easier than, and fundamentally different from, human languages. That was part of what Chomsky’s early work was all about. The difference is that whereas the syntax of a computer language never depends on the semantics, the syntax of natural language often DOES require you to understand the semantics. A small illustration follows this comment.

    BTW, this is a common misconception among many people who comment here. I’ll say “animals don’t use language” and they will say “sure they do”, but what I mean is language in the sense of the kind that is harder to parse than computer languages, whereas what they mean is that animals use languages which are as “simple” as computer languages. Which, of course, can still be pretty sophisticated.

    That’s also a good reason why linguists concentrate a lot more on syntax than semantics.

    I don’t agree with that. For one thing, whether linguists focus on syntax or semantics depends a lot on who the linguist is. I think I mentioned Chomsky’s book The Logical Structure of Linguistic Theory earlier. In that book he spends a good part of the introduction arguing AGAINST linguists who say that it makes no sense to study syntax without semantics, that you can’t decouple the two. Before Chomsky that was a controversial idea. I think you are correct that NOW linguists focus on syntax, although even there… actually, even with Chomsky I wouldn’t go so far as to say they concentrate “a lot more on syntax than semantics”. Even Chomsky and his followers think semantics is critical; his examples (e.g. “Time flies like an arrow”) are filled with cases where semantics and syntax interact. What people like Chomsky try to do is define various abstractions in language, because that is a critical part of developing a useful scientific theory. Understanding language as a whole scientifically is virtually impossible, but if you can divide it up into things like communication, reasoning, logic, syntax and semantics, then you can study those sub-problems and perhaps make progress. But ultimately you need to work those sub-problems into a larger framework.

    I think saying “linguists focus on syntax” is about as accurate as saying “physicists focus on sub-atomic particles”. Many of them do, but that is because there are lots of hard problems that can be understood mostly as questions of syntax (or particles); ultimately, though, physics isn’t just about particles, and linguistics isn’t primarily about syntax.
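    One small illustration of the asymmetry, using Python’s own parser: a programming-language string has exactly one parse tree, fixed by syntax alone, while the English example quoted above needs semantics to choose among its readings (the readings listed are the standard ones for this classic sentence).

    ```python
    import ast

    # A computer language: syntax alone fixes a single parse tree.
    tree = ast.parse("1 + 2 * 3", mode="eval")
    print(ast.dump(tree.body))  # one unambiguous BinOp tree, every time

    # English: grammar licenses several structures, and choosing one
    # requires knowing what the words mean.
    readings = [
        "(Time) (flies) (like an arrow)  - time moves swiftly",
        "(Time flies) (like) (an arrow)  - those insects are fond of an arrow",
        "(Time) (flies like an arrow)    - an order: time flies as you would time an arrow",
    ]
    for r in readings:
        print(r)
    ```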




  • Lorenzo says:

    Intuition is one thing. Mathematical rigor and proof are another.

    That’s why I specified that your statement sounded intuitively true.

    The difference is that whereas the syntax of a computer language never depends on the semantics, the syntax of natural language often DOES require you to understand the semantics.

    At my level of understanding of the issue, your statement sounds equivalent to mine (about ambiguity) -which is like saying that microns and nanometers are the same to a ruler graduated only to millimeters. But I actually have trouble with the verb “to require”, which is different from “to depend”: I can come up with thousands of examples where syntax constitutes part of the semantics, but I have a really hard time coming up with a good example of the reverse process -where, to get a certain meaning, you have a limited (or unique) syntax template to choose from, per word set. I should add that this may be because, although I admire Chomsky, I never actually made it through an entire work of his… yet.

    […] I think saying “linguists focus on syntax” is about as accurate as saying “physicists focus on sub-atomic particles”.

    Well, to a certain level, the statement about physicists is accurate -in the same manner that π is 5 if you only count in multiples of 5. This is supposed to mean that I wasn’t attempting to be precise and that we actually agree on the matter -but you devoted 210 words to it, while I used just 16.




  • William-fforbes-Rutt says:

    Are you seriously writing about the 60 ppm rise of CO2 here, on this web page? You are brave. A 60 ppm = 0.006% rise of CO2 in the atmosphere is a stupid thing to rant about amongst intelligent people.

    Join the Spanish Inquisition and refuse to expel CO2 from your body.

    The question is not if Stephen Hawking is right, but when. Because maybe nature will pull the plug on these creatures: http://www.worldometers.info/world-population/, and maybe that will happen sooner. For those who think this is something of the future, think again: the stock exchange is ruled by computer algorithms, the NHS, …

    This is a good explanation of why we need to think about this NOW. https://www.youtube.com/watch?v=MnT1xgZgkpk

    Slightly edited by moderator to bring within Terms of Use.



