#209: We Need to Control Artificial Intelligence

In recent months, we have seen a race to implement artificial intelligence systems seemingly everywhere. Suddenly, AI is ubiquitous, and so is concern about its safety. Things are developing so quickly that even my own recent interaction with a previous version of ChatGPT may already be outdated.

This development has already created a backlash in the form of an open letter calling for a pause in AI development to rethink what we are doing, signed by current luminaries of technological and human development such as Elon Musk, Steve Wozniak, Yuval Noah Harari, and Andrew Yang – none of them Luddites. I have just signed it as well, for whatever that is worth.

So, what is the problem?

I do believe that technology is helpful, but we also need to listen to critics of technological development. Not just Marshall McLuhan and Neil Postman but already Plato described how our use of technology has a direct influence on the ways we think and live. This is not just a question of seeing media and technology as tools that we can choose to use or not to use. Every such tool, especially one that directly shapes how we perceive and interact with the world, will eventually impose its own logic, its own ways of structuring the world, upon us.

The written word severely impacts, even destroys, the dialogical nature of human communication embedded in oral conversation, as Plato observed. Nuances get lost, irony and satire become difficult, and momentary flashes of thought are perceived – often falsely – as entrenched “opinions” the other person will hold forever, even though they may be transient, erratic, or already overcome. I may say something in a conversation for the sake of argument and take it back later, or another day. But if I fix it in writing, and only that piece is circulated, I will be reduced, in someone else’s perception, to holding an opinion I never even intended to hold. In the age of Twitter and Facebook, this phenomenon has multiplied with abandon. As we communicate, we are also perceived as communicators, and it is difficult to correct what others have already perceived. We are already seeing how drastically this has impacted our societies.

Additionally, we have increased our capacity to create false and distorted information and circulate it globally. This problem has always existed, but it has now gained new momentum, coupled with the impersonal nature of online communication and phenomena such as trolling. The professionalization of “fake news” has reached levels beyond human control, and artificial intelligence is already being used to create so-called “deepfakes,” which can now easily and quickly falsify images, video, and audio – document types we once thought difficult to fake. Human beings are very susceptible to fake visual and audio information. We may have learned that writing can lie, but images of people, and video and audio recordings of them, have been trusted more, rightly or wrongly.

Now enter artificial intelligence, a tool that can amplify all of these developments with ease and abandon. As anyone using these tools can see, at the moment they do not reveal their sources. They provide “information” without context, without caveats, and with the illusion of certainty. This is a path to disaster. For millennia, philosophers have thought about epistemology – the study of how and what we know and can know. Now a machine seemingly sifts through the amassed information of humanity and decides what is true and what is not. I would not trust even a single philosopher or scientist with such a judgement, not even dozens of them. Science and philosophy need to be practiced in a community of scholars, with public scrutiny and transparency. There are reasons for this. Knowledge without ethics, information without context, is and has always been dangerous.

We need a pause. We need rules. We need transparency. We need to listen to those who have thought about these problems for a long time – theorists and philosophers of science, media, information, and epistemology, as well as science fiction authors such as Isaac Asimov and Arthur C. Clarke. We should know better.