PA IN THE MEDIA

Algorithms narrow your view - black boxes block innovation

Which product recommendations we see, which movies we watch, whom we go on a date with: algorithms influence all aspects of our lives. That's great if you want to quickly choose a movie you'll like on Netflix. But algorithms also determine what political and scientific information you see, whether you are invited for an interview, and whether you are deemed creditworthy.

Algorithms are black boxes: the average user has no idea how they are put together. Google's search algorithm has become a better-kept secret than Coca-Cola's recipe! Partly for this reason, these technologies are distrusted, and that pushback is healthy.

Transparent algorithms?

Algorithms have a growing influence on our lives. From dating sites and stock trading to online retailing and online searches - they are increasingly shaping our future. And as the influence of algorithms grows, the discussion about these algorithms becomes more important.

The black-box problem of algorithms cannot be fully solved. Some algorithms are simply too complex to explain; others are trade secrets (just think of social media). And some are already so good that their workings can no longer be fathomed, giving rise to problems such as "deepfake" videos that cannot be distinguished from the real thing.

Full transparency about how algorithms work is therefore an illusion, especially when they become more complex. But there are techniques that can guarantee the quality of algorithm decision-making – this is called algorithm assurance.

Algorithms limit you to your comfort zone

So we can check whether algorithms work as intended. But a more important discussion is: how do we use algorithms? Because just as the workings of an algorithm can be a black box for us, they can also create black boxes for our own minds.

Just think of the typical internet search. We get to see what we want to see. We find what we expect to find. Algorithms designed for ease of use destroy the chance of happy accidents. Without extra effort, it is hard to be surprised these days, or to get new ideas from outside your algorithmic bubble. Most algorithms make suggestions based on what you already like. They rarely suggest things you would initially dislike but might come to love once you had given them a chance. People are staying more and more in their own comfort zone, and innovation rarely takes place there.
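The comfort-zone dynamic described above can be made concrete with a toy sketch. The catalogue, tag names, and `epsilon` parameter below are all hypothetical, invented purely for illustration; real recommenders are vastly more sophisticated. The point is only that a purely similarity-driven recommender keeps serving more of the same, and that escaping the bubble requires a deliberately built-in dose of dissimilarity:

```python
import random

# Hypothetical toy catalogue: each title is tagged with genres (illustrative data only).
CATALOGUE = {
    "space_drama": {"sci-fi", "drama"},
    "space_opera": {"sci-fi", "action"},
    "courtroom":   {"drama", "crime"},
    "nature_doc":  {"documentary", "nature"},
    "standup":     {"comedy"},
}

def similarity(tags_a, tags_b):
    """Jaccard similarity between two tag sets (0.0 = nothing in common)."""
    if not tags_a or not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def recommend(history, epsilon=0.0, rng=random):
    """Recommend one unseen title.

    With probability 1 - epsilon, return the unseen title most similar to the
    user's viewing history -- the comfort-zone behaviour the article describes.
    With probability epsilon, return the *least* similar title instead, a crude
    stand-in for deliberately engineered serendipity.
    """
    liked_tags = set().union(*(CATALOGUE[t] for t in history))
    unseen = [t for t in CATALOGUE if t not in history]
    # Rank unseen titles from least to most similar to the user's history.
    ranked = sorted(unseen, key=lambda t: similarity(liked_tags, CATALOGUE[t]))
    if rng.random() < epsilon:
        return ranked[0]   # furthest from the bubble
    return ranked[-1]      # safest, most familiar pick

# With epsilon=0 a viewer of one sci-fi drama only ever sees adjacent genres;
# only a nonzero epsilon ever surfaces the documentary or the comedy.
print(recommend(["space_drama"], epsilon=0.0))
print(recommend(["space_drama"], epsilon=1.0))
```

The `epsilon` knob is the design choice the article argues for: without it, every recommendation reinforces the existing profile.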

Algorithms push us toward "template thinking"

On social media, young and old are guided by algorithms - who to follow, what to like, but also what to do to be followed. This mindset is also reinforced by our education, where the most points are awarded to those who are the best at colouring within the lines. The message therefore seems to be everywhere: just follow the algorithms and you will end up in the right place.

Algorithms that increasingly determine what we see, and when, stand in the way of innovation. Not only because recommendation algorithms narrow our field of view, but also because potential innovations are themselves tested using algorithms. Algorithms are used not only to improve the success rate of drug development (which is uncontroversial), but also to predict commercial potential, which steers the discussion away from the best innovation and toward the most lucrative one.

Algorithms should support human ingenuity, not limit our innovative capacity

To innovate, you have to experiment, learn from mistakes and, above all, dare to fail. But if, for example, an algorithm has a gloomy view of the chances of success of a new startup, investors and potential innovators are more likely to walk away from a concept. Yet it could have been the next big thing.

As a result, "innovations" are reduced to templates: "This worked here, so it will work there too." Everything is possible and allowed, as long as it fits the algorithm. The reality is of course different. Various companies tried in vain for years to sell tablets; based on the available data, the iPad looked doomed, yet against all expectations Steve Jobs made it a success. In other words, not every successful innovation can be accurately predicted.

Colouring outside the lines

Does this mean that we should ignore algorithms and rely solely on intuition? Not at all. We just shouldn't be led by them blindly. Technology should support human ingenuity, not limit our innovative capacity. Streaming platforms should offer us new experiences, not just send us down the well-beaten path. Search engines should think along with us about what we are missing, not just show us what we want to see.

Challenging students

We have to learn again to colour outside the lines - outside the lines of expectations, of algorithms. And that is exactly what we are trying to teach at PA Consulting's annual Raspberry Pi schools competition. Students are challenged to use the small Raspberry Pi computer to invent something that solves a social problem: the same data and the same basic components, but with the freedom to come up with new solutions that no one could have thought of before.

By Willem van Asperen, Director of Applied Artificial Intelligence at PA Consulting

This article first appeared in LinkMagazine

