Can humans and machines work together to tackle ‘wicked’ challenges?

Jan 13, 2016

by Lonnie Shekhtman

Despite the development of increasingly intelligent computers, scientists from Cornell University and the Human Computation Institute in Fairfax, Va., say they wouldn’t leave the task of solving the world’s most complex problems – from environmental to economic to social – to computers alone.

Instead, the researchers call for a sophisticated form of “human computation,” a computer science technique that taps the strengths of humans and computers to accomplish tasks that neither can do alone. A human-computer collaborative system could incorporate human experiences, reason, and creativity into computer intelligence to solve the world’s most nuanced problems, say researchers in a column published in the January 1 issue of the journal Science.

Today, human computation works by having computers assign microtasks to many people, or to sets of people who analyze and improve on one another’s contributions. Wikipedia is an example of how this works. So is reCAPTCHA, the Google security feature that websites use to weed out spammers and that the search giant simultaneously uses to collect wisdom from the crowd.
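
In practice, that means aggregating many cheap, fallible human judgments into one reliable answer. A minimal sketch of the idea in Python (hypothetical data and function names, not any particular platform’s API):

    from collections import Counter

    def aggregate_microtask(answers):
        """Combine many workers' answers to one microtask by majority vote,
        reporting the level of agreement as a rough confidence score."""
        counts = Counter(answers)
        winner, votes = counts.most_common(1)[0]
        return winner, votes / len(answers)

    # e.g. ten crowd workers transcribe the same distorted word, reCAPTCHA-style
    answers = ["swamp", "swamp", "stamp", "swamp", "swamp",
               "swamp", "swump", "swamp", "swamp", "stamp"]
    word, confidence = aggregate_microtask(answers)
    print(word, confidence)  # swamp 0.7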



6 comments on “Can humans and machines work together to tackle ‘wicked’ challenges?”

  • @OP – Despite the development of increasingly intelligent computers, scientists from Cornell University and the Human Computation Institute in Fairfax, Va., say they wouldn’t leave the task of solving the world’s most complex problems – from environmental to economic to social – to computers alone.

    The need for human oversight seems to be illustrated here!

    http://www.bbc.co.uk/news/technology-35301279
    Google drivers had to intervene to stop its self-driving cars from crashing on California’s roads 13 times between September 2014 and November 2015.

    The disclosure follows a local regulator’s demand for the information.

    Six other car tech companies also revealed data about autonomous-driving safety incidents of their own.

    Google wants to build cars without manual controls, but California-based Consumer Watchdog now says the company’s own data undermines its case.

    • Google operated its cars in autonomous mode for 424,331 miles (682,895km)
    • There were 272 cases when the cars’ own software detected a “failure” that caused it to alert the driver and hand over control
    • There were 69 further events when the drivers seized control without being prompted to do so because they perceived there was a safety threat
    • Computer simulations carried out after the fact indicated that in 13 of the driver-initiated interventions, there would have been a crash if they had not taken control
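
    A quick back-of-the-envelope on those figures (my arithmetic, not the BBC’s):

        miles = 424_331
        software_failures = 272  # software alerted the driver and handed over control
        driver_overrides = 69    # driver seized control unprompted
        likely_crashes = 13      # overrides that simulations say prevented a crash

        for label, n in [("software failure", software_failures),
                         ("unprompted override", driver_overrides),
                         ("likely crash", likely_crashes)]:
            print(f"one {label} per {miles / n:,.0f} miles")
        # one software failure per 1,560 miles
        # one unprompted override per 6,150 miles
        # one likely crash per 32,641 miles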





  • The most toxic problems have reasonable technological solutions; the trouble is in getting widespread acceptance. Consider the ideology-driven thinking patterns of the Republican party, which controls the House and Senate in the USA.

    In my fantasies, AI creates persuasive tools – e.g. movies custom-designed for each recipient – to make real, for example, the consequences of global warming.

    Treat persuasion as a hard science.




  • Can someone please explain the Christian Science Monitor (CSM) story to me.

    As someone who has worked in ICT for many years, I think I understand database and program development. Such projects always begin with humans detecting a problem, or set of problems, and they always end with humans addressing the problem(s) using the tools of computers, networks and software – with varying degrees of success.

    The CSM story appears to be saying that there is a new development in this area, but I don’t see one in their story.

    The CSM’s example of wikis exactly matches my above description – no change there then.

    The CSM’s example of CAPTCHA makes no sense to me. Computers have always required that humans define the terms of both the problem and the solution; input data, data relationships (links, rank, program prioritisation, meta-data, etc.), algorithm(s) (process, calculations, logic, calls, iterations, branches, etc.) and output are defined by humans. How is CAPTCHA different?

    As I remember it, this user-definition of how we work with computers was satirised by Douglas Adams in The Hitchhiker’s Guide to the Galaxy thus: a human-like race asks a computer for “the answer to the ultimate question of life, the universe and everything”. After many years of mighty cogitation the computer’s answer is: “42”.

    As those of us in the know often say: “Garbage in = garbage out”.

    Adams was a good writer and if you haven’t read The Hitchhiker’s Guide to the Galaxy I recommend it as the story works on a number of levels that we don’t have time to consider here.

    Adams’s computer goes on to explain that the answer, 42, is incomprehensible because its designers and programmers didn’t properly define the question; indeed, they didn’t even understand what they meant by their own question.

    Is this, perhaps, what the CSM is driving at: Are they saying that we now have a new improved method of defining human problems for computers that will help us build computers, databases and programs that give us the answers we really need? If so, why don’t they talk about that method?

    Color me skeptical. ICT professionals and academics have been working on applying ICT to problems since the days of the relay and vacuum tube.

    Or do I think it more likely that the people at the CSM don’t understand what they’re talking about …

    I didn’t check out the Science story the CSM linked to as the Science site appeared to be down. If anyone else got a peek please enlighten me.

    Or, maybe I just don’t get it?

    Educate me, please.

    Peace.




  • This is about capturing human cognitions and valuations and learning how to exploit those directly in subsequent computer-generated metadata and, I assume, later, how to achieve something more like those initial cognitions and valuations entirely without human help.

    We humans can’t help but have complex cognitions and valuations. We have highly cross-coupled modular brain structures, incapable by themselves of any strict logical processes (it’s all Bayes and coincidence). We have weighted heuristics and associative cortices, having us notice, say, fleeting structural similarities, attractions and repulsions. Only then do we have our own overlay of a culturally derived and learned set of logical processes. This is being a computer by cultural software.

    Computers can do this latter logic easily, but they can’t do the first bit that we do (we barely understand it ourselves). But we know we are pattern cognisers, able to extract such patterns from almost overwhelming noise. Sometimes we over-reach ourselves, so driven to do it are we. (Indeed, this site is dedicated to the over-extractors.) Part of what we do is because of pattern-recognition achievement and the good-enough-in-the-right-sort-of-way reward it delivers. I suspect a lot of the rest is a subconscious Darwinian generation and testing of hypotheses until good enough. I also suspect that a huge number of heuristic valuations are involved in this, and that, like our metaphoric language, these have their roots in our physiology and movement and are managed/involved in some way via our associative cortices.
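
    A toy way to see the “all Bayes” point, with numbers I have simply invented: a rare pattern, weak coincidence-prone cues, and a belief nudged up by each cue until it feels compelling.

        def bayes_update(prior, p_cue_if_pattern, p_cue_if_noise):
            """Posterior probability the pattern is present after one noisy cue."""
            evidence = prior * p_cue_if_pattern + (1 - prior) * p_cue_if_noise
            return prior * p_cue_if_pattern / evidence

        belief = 0.01                  # real patterns are rare a priori
        for _ in range(5):             # five weak, coincidence-prone cues
            belief = bayes_update(belief, p_cue_if_pattern=0.8, p_cue_if_noise=0.3)
        print(f"{belief:.2f}")         # ~0.58: over-extraction, made quantitative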




  • Hi Phil,

    Many thanks for your response. Just to get this straight in my own head:

    This is about capturing human cognitions and valuations and learning how to exploit those directly in subsequent computer-generated metadata and, I assume, later, how to achieve something more like those initial cognitions and valuations entirely without human help.

    We appear to be discussing:
    1. Record human reactions in a database
    2. Extrapolate from the database to cognitive process(es)
    3. Model human cognitive process(es) as computer processes – a kind of new AI
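
    Or, as a purely hypothetical skeleton (every name below is my own invention):

        def record_reactions(stimuli, subjects, db):
            """Step 1: log each human judgement against its stimulus."""
            for stimulus in stimuli:
                for subject in subjects:
                    db.append((stimulus, subject.react(stimulus)))

        def fit_cognitive_model(db):
            """Step 2: extrapolate from the logged reactions to a model that
            predicts the human judgement for an unseen stimulus."""
            ...

        def apply_model(model, new_stimulus):
            """Step 3: run the fitted model in place of the humans."""
            return model.predict(new_stimulus)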

    Is that right?

    We humans can’t help but have complex cognitions and valuations. … We [use] weighted heuristics[, associations and approximate statistical weightings] … then … we [employ] culturally derived and learned … logical processes. … Computers [do logic, not] pattern [perception] from [background] … noise.

    I’m with you so far.

    The CSM, as I now understand it, is saying that it is possible to model some of our ‘intelligent’ behaviours … ?

    Frankly, this is old news.

    I was involved, on the periphery, in a project in the ’80s to capture the essence of a human expertise (trading in energy contracts) and code it. I’m still struggling to see a big difference between that process (as I described above: define the terms of both the problem and the solution; input, data relationships, algorithm(s) … and output) and the CSM’s piece.

    The CSM is saying: “Researchers call for a sophisticated form of ‘human computation’, a computer science technique that taps the strengths of humans and computers to accomplish tasks that neither can do alone.”

    This is exactly the goal of my ’80s team (when I say “my team” I should clarify that I was a very young, very junior member). Computers were fast and could therefore beat humans to hitting the ‘Trade Now’ button when a logical sequence of market movements indicated good trading conditions – but they were rubbish at extrapolating to long-term (or even end-of-day) position management, because they did not have the skills to recognise market movements that stem from human interaction and news feeds (not to mention attempts at market manipulation). But I know from that experience that it is possible to extract that expertise, synthesise it as algorithms and code those algorithms.
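
    In today’s terms, one interview-derived heuristic might have looked like this (a wholly invented example – the real system was proprietary and certainly not Python):

        def expert_rule_signal(prices, news_event_in_play):
            """One coded trader heuristic: act on a clean three-tick uptrend,
            but stand aside whenever human news-reading is required."""
            if news_event_in_play:
                return "HOLD"  # the machine was rubbish at news; defer to the human
            if prices[-3] < prices[-2] < prices[-1]:
                return "BUY"   # the machine beats the human to 'Trade Now'
            return "HOLD"

        print(expert_rule_signal([101.2, 101.4, 101.7], news_event_in_play=False))  # BUY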

    As far as I can tell, what the CSM is calling “human computation” also involves the so-called Wisdom of Crowds? Surowiecki’s book is more than a decade old – hardly cutting edge.

    So what’s new?

    Perhaps it really has taken more than ten years for AI researchers to realise that they can approximate human ‘expertise’ (of a highly questionable quality, it seems to me, if based on CAPTCHA because the data is very thin – my ’80s team were interviewing expert traders at length) and then code for that.

    That seems unlikely, to say the least.

    That expertise extraction would be my Step 2, above. From the CSM description it is impossible to see what “human computation” researchers are doing in this area, which is what first alerted me to the possibility that CSM actually have no idea what they’re discussing.

    I think it highly likely that the CSM has seen a Science article and its journalists used their (faulty) intuitions to extrapolate to an idea that has nothing to do with real human computation research. The CSM people believe that humans are special. They therefore extrapolated to: Human Computation research is confirming our cognitive bias that humans are special.

    CSM: the way we think cannot be put into a computer (false); researchers are calling for that specialness to be added to computers – i.e. we rely too much on computers, we trust them to do things on their own, which is wrong, etc. (false); and AI researchers are working to make computers think more like humans because what they are doing is unconnected to human problems (absolutely false – computers’ very existence is founded on our need to solve our problems, QED).

    The journal Human Computation (HC) is well worth a visit. I particularly liked the article promoting the idea that HC research should underpin the development of the Semantic Web as it would greatly improve the utility of same.

    Sometimes we over-reach ourselves … Indeed this site is dedicated to the over-extractors …

    Well … quite …

    That doesn’t bode well for productive results from crowd-sourced material, does it?

    It is perfectly possible that I’m still missing the point, in which case I apologise for being as thick as pig … err … pea soup. If you feel you could have another crack at it I would be most grateful.

    Peace.



