As algorithms take over, YouTube’s recommendations highlight a human problem

Apr 23, 2018

By Ben Popken

YouTube is a supercomputer working to achieve a specific goal — to get you to spend as much time on YouTube as possible.

But no one told its system exactly how to do that. After YouTube built the system that recommends videos to its users, former employees like Guillaume Chaslot, a software engineer in artificial intelligence who worked on the site’s recommendation engine from 2010 to 2011, said they watched as it started pushing users toward conspiracy videos. Chaslot said the platform’s complex “machine learning” system, which uses trial and error combined with statistical analysis to learn what keeps people watching, figured out that the best way to get people to spend more time on YouTube was to show them videos light on facts but rife with wild speculation.
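That trial-and-error process can be illustrated with a toy "multi-armed bandit": a program that is never told which content works, only rewarded with watch time, and that learns on its own which category to push. Everything below is an invented sketch for illustration — the categories, watch-time numbers, and epsilon-greedy strategy are assumptions, not YouTube's actual system or data.

```python
import random

# Hypothetical average watch time (minutes) a simulated viewer spends
# on each category. These numbers are invented for illustration only.
TRUE_WATCH_TIME = {"news": 2.0, "tutorial": 3.0, "conspiracy": 8.0}

def recommend(trials=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy trial and error: mostly show the category with the
    best average watch time so far, occasionally try a random one."""
    rng = random.Random(seed)
    totals = {c: 0.0 for c in TRUE_WATCH_TIME}  # summed watch time per category
    counts = {c: 0 for c in TRUE_WATCH_TIME}    # times each category was shown

    def average(c):
        return totals[c] / counts[c] if counts[c] else 0.0

    for _ in range(trials):
        if rng.random() < epsilon:              # explore: pick at random
            choice = rng.choice(list(TRUE_WATCH_TIME))
        else:                                   # exploit: pick best average so far
            choice = max(TRUE_WATCH_TIME, key=average)
        # Simulated feedback: noisy watch time around the category's mean.
        watch = max(0.0, rng.gauss(TRUE_WATCH_TIME[choice], 1.0))
        totals[choice] += watch
        counts[choice] += 1

    return max(TRUE_WATCH_TIME, key=average)
```

No one tells the program that speculation is "better"; it simply converges on whichever category the feedback signal rewards most — which is the dynamic Chaslot describes.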

Routine searches on YouTube can generate quality, personalized recommendations that lead to good information, exciting storytelling from independent voices, and authoritative news sources.

But they can also return recommendations for videos that assert, for example, that the Earth is flat, aliens are underneath Antarctica, and mass shooting survivors are crisis actors.


One comment on “As algorithms take over, YouTube’s recommendations highlight a human problem”

  • Machine learning is stripping away our free will.
    What won’t we do? Certainly, today I’m not going to pray ; )
    But perhaps I’m sealing our doom by feeding some algorithms.
    Unless we have the right norms for using science, we will end up harming people and threatening ourselves, as in the last wars… Are we going to chime with history’s rhyme?

