by Scribe

21/02/2020

In his bestselling book Homo Deus, historian, philosopher and author Yuval Noah Harari writes that everything revolves around algorithms. Everything. From our DNA to Amazon’s Kindle software, which has led to a wondrous reversal whereby we no longer read books, but books read us instead. The story we read has in fact turned into the story that writes us, and in which we function as characters. Algorithms will read and eventually control us, Harari predicts.

Actually, it has always been like that: movies, speeches and stories inspire and transport us, just as they mislead us. It's up to us to decide whether we want to listen to a story, and if we do, how to reflect upon it. That makes us open-minded.

That freedom of choice underlines the importance of ethics, and privacy in particular, in a world where Mark Zuckerberg has become the new Shakespeare. Facebook users have – often unconsciously – decided to read ‘his work,’ and no longer realize that it might be a bad story. The conscious decision to reflect is gone; in fact, it was stolen from us.

And that is where I do not (yet) agree with Harari, or perhaps I’m reading his warnings between the lines too carefully. Harari’s contention: we are increasingly using algorithms. That’s a given, because (artificially intelligent) algorithms have enabled us to do and want much more than ever before. However, I dispute the assertion that algorithms are taking over our lives.

We can use an algorithm, and perhaps we can no longer do without it. However, when we no longer trust an algorithm, we will look for another one, adjust the one we’re using, or not use it at all. If Facebook continues on its current course, a better alternative will come along. Not because the algorithms want it – they were invented by humans – but because humans (should) take center stage in a desired future.

An example of the desired future, the desired algorithm, can be found in legal texts. People are free to do what they want, and when they break the law – the algorithm – a judge will pass sentence. Changes to legal texts and interpretations by the judge lead to constant adjustments to the law as algorithm. The spirit of the law plays an important role in our current daily practice; otherwise we would be shackled by the letter of the law, says TU/e emeritus professor of Law and Technology Jan Smits.

Incidentally, the role of the University Council is similar to that of a judge: we keep a check on the board, and we interpret the law and the rules. And it’s important that the University Council, too, acts according to the spirit of the rules. Interpretation and sound arguments play an important role in this. Naturally, it’s tempting to automate this algorithm of interpretation. But the act of interpretation is something only humans are capable of, and it should be their prerogative alone.

People often use rationalization to justify bad behavior these days. In its original meaning – explaining behavior – it represents one of the major challenges in the field of artificial intelligence (AI). AI, as a chain of algorithms, is capable of solving complex problems. But how do you substantiate those solutions, and how dependable are they?

In their search for answers, people look in the direction of transparent AI, reliable AI, or trustworthy AI. Harari is a historian, an astute analyst and a brilliant author, but he is no AI expert. Viewed from a historical perspective, he extends bad human behavior into the future. That’s called extrapolating, and it is always dangerous, especially when you use it to try to rationalize your own prediction.
