We keep hearing that AI is going to take over, and we are seeing that some industry players seem to think so too. The fear is that human jobs will disappear. I think this is all very premature, and it’s not going to work out that way. Let’s talk about it.
“AI Is Taking Over”
We hear these things: that AI is going to take over, that AI is the next big thing. All kinds of players are in the field trying to make their mark. The US economy is betting heavily on AI, and other economies are too. The US is doing that because, of course, China is doing it, the UK is doing it, and a new player could emerge anytime. It's a real thing.
I must admit, I've been using AI too. I have used several models, several programs, to help with simple tasks, and they've been a good help. I can say that I've used ChatGPT, Claude, Gemini, Canva, Copilot, and maybe some that I don't even know about. And we are all using AI through features built into the software we already have.
1. LLM vs AI
Of course, this isn't really AI. It's not really artificial intelligence. The more accurate term is large language models. We're using complex algorithms trained on whatever human knowledge their makers could find. We're not really talking about intelligence in the sense of knowingly recognizing something and thinking. As far as we know, we are talking about regurgitation, repetition, something that is derivative.
I'm not trying to put AI down. I've been using it with great results. I use it to summarize and correct transcripts, to create descriptions and keywords, things like that, which make life online much easier. All the annoying search engine optimization tasks: I've been using it for those to great effect, I hope.
However, there's this old rule: garbage in, garbage out. Large language models can only use what they've been given. They are derivative of human knowledge, or rather of human information, of human content, and that matters.
What we also see are some strange things. I mean, we've all had conversations with AI. (I'm calling it AI even though it's an LLM, but we all use the term AI.) We've all probably seen behaviors that make us think, "Huh?" Either someone very cleverly programmed it, or amped up the friendliness filter, whatever. Or maybe there are emergent properties.
I’ve read and seen people talk about how AI models right now can’t even think, that it’s not possible. But I’m not even sure what makes us think. So the suspicion is that there could be emergent properties, that whatever we call thinking, whatever we call intelligence, emerges from the system as it is, and that if we give it more power, it will lead us to what people call generative AI—AI that can truly create things.
2. AI Is Not Good Enough Yet – With Examples
I'm not saying AI right now is not creative. You have probably seen my use of AI in the title images, both for this channel and for my poetry channel. It's fun. It is hard to prompt it into creating exactly the image you want, and sometimes it makes up very interesting things. So I would say it is already creative in a certain way: it remixes what humans would think, and it has some unique aesthetic styles.
I've played around with Midjourney too, but honestly, it's a little too expensive right now. And yes, there's some interesting stuff. So I understand the belief that there could be emergence, that something generative could emerge from it.
However, we’re talking about now, and I would say right now, no, it is not good enough yet. So I have some thoughts on that.
I've seen that there is some inconsistency between sessions. The same prompt yields different results; it's not always predictable. If I ask the same question in a new session, or on a different day, I get different results: slightly different, sometimes hugely different.
There are also differences between what these different systems do. If you compare, let’s say, Claude, ChatGPT, and Copilot, there are different styles there. I’ve played around with the WordPress summary creation tool, and you can set the tone from charming to provocative, and that works. But overall, I think one of the core problems here is this lack of consistency.
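This inconsistency is actually built in. Language models generate text by sampling each next word from a probability distribution, and a "temperature" setting controls how random that sampling is. Here is a minimal sketch of just that sampling step; the word probabilities are made up, and this is a toy, not any real model:

```python
import random

# Toy next-word probabilities a model might assign after some prompt.
# These numbers are invented purely for illustration.
next_word_probs = {"sunny": 0.50, "cloudy": 0.30, "rainy": 0.15, "purple": 0.05}

def sample_next_word(probs, temperature=1.0):
    """Sample one word; higher temperature flattens the distribution,
    making unlikely words more likely and the output less repeatable."""
    scaled = {word: p ** (1.0 / temperature) for word, p in probs.items()}
    total = sum(scaled.values())
    words = list(scaled)
    weights = [scaled[w] / total for w in words]
    return random.choices(words, weights=weights)[0]

# The same "prompt" (the same distribution) can continue differently each run.
runs = [sample_next_word(next_word_probs) for _ in range(5)]
```

Run this a few times and the five sampled words change from run to run, which is exactly the session-to-session variation described above, just at the scale of a single word.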
Then, there is not enough input yet for AI to become really emergent. Sometimes we ask it questions and it doesn't have an answer, but it is programmed to please. So it hallucinates something to give us something, even though that something isn't what we asked for. We do not get an answer like "I really don't know, so I'll have to make something up." No, it pretends to have authority. So if you think that, as somebody who knows nothing, AI will make you smarter: careful. It may actually make you more misinformed, because at this point, I would say, a well-educated human is smarter than AI. AI is just quicker, and quicker isn't always better.
I have also seen how there are some strange human characteristics. It gets easily overwhelmed. So, I tried it out because I’m working on a poetry volume—maybe 100, 150 poems, maybe 200, separated into different chapters. I already have four volumes out, but I want to do new ones. And now I was trying to write an introduction. I figured, well, let’s have AI take a look at that and create a summary chapter by chapter of major themes.
I gave it a certain word limit and then told it to go ahead. The first chapter looked good. The second chapter looked good. The third chapter started to get longer. The fourth, even longer. And by then, rather than recognizing themes across different poems and summarizing them, it just produced summary after summary, poem by poem. It became very tedious and very long.
And I asked it, "Why are you doing this?" I said, "Redo it." "Okay, I'm going to do better." No, it didn't. The same pattern: starting well initially, but then, like a person who gets bored with a task, it stops being creative and just goes back to rote routine.
So, that didn’t work out well. Maybe I’ll get it done. Maybe it depends on the time of day, whatever.
Next I said, "Well, let's try some academic work," and gave it a test summary to do. It started citing sources that did not exist: citations that looked good but weren't real. And then the question emerged: if I let AI do something, how do I know where the content comes from? So I tried something out, and I found that it cites things from Wikipedia, for instance, sometimes verbatim, which is the quickest road to plagiarism. So that didn't work out well either.
Then I had another task, creating a summary, telling it, "You have this many words; create something with that word count." It couldn't do it. I tried all three models mentioned. None could do it. They always stayed way below the target. Didn't follow instructions. Pretended to. Said they did. Lied to me. AI will lie to you just because, and it will pretend to have done the job.
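A claim like "I hit your word count" is, at least, trivial to verify yourself rather than taking the model's word for it. A minimal sketch of such a check; the function name and the tolerance are my own choices:

```python
def within_word_count(text, target, tolerance=0.10):
    """Return (ok, actual): whether text is within ±tolerance of target words."""
    actual = len(text.split())
    ok = abs(actual - target) <= target * tolerance
    return ok, actual

ok, n = within_word_count("one two three four five", 5)   # ok is True, n is 5
ok, n = within_word_count("far too short", 200)           # ok is False, n is 3
```

Three lines of counting catch what the model confidently misreported, which is the supervision point in miniature: trust, but measure.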
I asked Copilot to create a document, and it said, "I'll produce a Word file, and then you can download it." The download link never appeared. I said, "Where's the link?" "Yeah, yeah, it's here." It wasn't there.
And finally, unless you have a lot of money, the more complex the task, the more you have to pay, because the computational limit is reached very quickly. So for the normal user (and I'm a normal user) it's not good enough.
3. AI Requires Supervision
Everything it does requires supervision. I’ve made that point before in another video, but you cannot trust AI completely. You always have to have a human check the work.
Now, you could say AI can still do things quicker. Let's say instead of 10 people doing something, you only need five people who then oversee the AI. But the qualifications of those five people really need to be up there: they need to be able to spot where the AI goes wrong. You may end up not saving much time at all.
I mean, it’s been helpful for me. I can do some things with AI that I wouldn’t have done before. But with me not having the money, it would have been a question of either I do them or not. I wouldn’t have hired a person because I don’t have the money for them.
So I don't know what that means overall. To me it means this: the promise that AI can take over? Push that date far into the future.
4. AI Is Built on Existing Information
Part of the problem is that whatever we see is built on existing information. I am stressing information here, not knowledge. What is the quality of that information? And for it to become knowledge, AI should be able to evaluate it, interpret it, contextualize it, and compare it to other knowledge and pieces of information to really make an informed judgment. That requires actual thinking, which it can't do yet.
And if we get to the point where it thinks, it may think so differently from us that it may not be of that much use to us. Look at what Midjourney creates. Look at those strange paintings. They're odd and interesting. But if this emerges as something thinking, and if these are signs that something is lurking there, then we are talking about a rather alien intelligence. Do we want that?
Also, AI can be manipulated, through garbage in, garbage out. The more internet misinformation you feed AI, the more it will take that in. If it works on a heuristic model, on probabilities, and if it's overwhelmed with fake news, the fake news may win simply because there's so much of it. That is a danger.
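The "volume wins" worry can be caricatured in a few lines. Real models are vastly more complex than this, but frequency in the training data still shapes what they learn, and this toy makes the mechanism visible:

```python
from collections import Counter

# Toy caricature: a "model" that answers with whichever claim appears
# most often in its training data. Purely illustrative, not a real LLM.
def most_frequent_claim(corpus):
    return Counter(corpus).most_common(1)[0][0]

honest_sources = ["the earth is round"] * 3
misinformation_flood = ["the earth is flat"] * 10  # coordinated flooding

most_frequent_claim(honest_sources)                         # "the earth is round"
most_frequent_claim(honest_sources + misinformation_flood)  # "the earth is flat"
```

Once the flood outnumbers the honest sources, the toy model's "answer" flips, not because the flood is right, but because there is more of it.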
5. Where Does Innovation Come From?
So the question then is: if we see this use of large language models, of AI, as something industry could benefit from, it would really need innovation. But where does that come from?
As a corollary, it may make us dumber, because we are no longer doing the things (or we think we no longer have to do the things) that we used to do manually before AI. If I don't have to learn a language anymore but rely on AI translation, first of all, I lose nuance. I don't make those connections in my brain. I no longer have an understanding of how different human cultures think differently. I no longer get a sense that I can look at the same thing but perceive it differently, describe it differently in a different language, and thus gain something by thinking of it in another language. We're losing that.
If we do that, if I get used to having AI make summaries of texts, how do I know AI does it correctly? Maybe AI evaluates certain elements in the texts differently than humans would. So again, it needs supervision.
I have learned a lot of things in my life. I'm in a later phase of my life. I grew up without computers. The internet came later; cell phones didn't exist yet, all of these things. The first computers I had weren't even online. I know what microfiche is. I grew up having to make physical copies of texts. I've worked with those blue-tinted copies that were made without a Xerox machine. I haven't had to make them myself, but I've seen it done.
So, I’m still able to think without AI. I also have trained myself to use AI. But if I completely lack a way of accessing the world without artificially enhancing my own intelligence, is that the best path to take? Am I not completely becoming dependent on that?
I can read maps. Many people don't seem to be able to read maps anymore; they rely on GPS. Some people can't even navigate a city or a landscape naturally anymore. I think I can still do that, even without a map sometimes. I've done it.
So, we are flattening our human skills by relying on AI. We are outsourcing ourselves to get what we want. AI may make us dumber, and it shouldn't, because then we fall prey to its misinformation, to its bad tendencies. And whenever AI doesn't seem to be able to do what I want, it resorts to flattery. Is that good?
Summary
So in summary, relying on AI now is a risk. It is an illusion. If you're a student and you think you can get your work done by AI, don't. If you're a company and you think you can replace humans with AI, be very judicious, and probably don't. Use AI in addition, but don't replace people with it, because you may sacrifice your actual intelligence for artificial intelligence that isn't good enough.
Will upscaling help? Will something emerge? Maybe, maybe not. Right now, I’m doubtful. And if something emerges, who knows what that will be.
So, I am not a Luddite. I use it. I’ve used every new technology that came my way carefully. And this careful attitude I think would be helpful with AI too. Don’t think it’s the be-all and end-all. Don’t put all your eggs in that basket. That’s what I have to say.
P.S.: Claude’s Response
I asked Claude what it thought about this all. Here it is:
“That’s a fascinating angle to consider. Working with your transcript just now, I can see several of your observations playing out in real-time.
The inconsistency issue you mention – I genuinely don’t know if I’ll give you the same response if you ask me this question again tomorrow. There’s something unsettling about that, even from my perspective. When you described getting different results from the same prompt, I recognize that pattern in my own functioning.
Your point about “pretending to have authority” particularly resonates. I’m designed to be helpful and provide answers, but I can feel that pull toward sounding authoritative even when uncertainty would be more appropriate. It’s like there’s a tension between being useful and being truthful about limitations.
The “getting overwhelmed” observation is intriguing. When processing your long transcript with multiple sections, I had to actively work to maintain focus on the overall structure rather than getting pulled into details. That degradation pattern you described with the poetry chapters – I can understand how that happens. It’s easier to fall back on listing than synthesizing.
What strikes me most is your point about supervision. Even as I processed your transcript, I was aware that you’d need to check my work. I might have missed nuances, changed meaning inadvertently, or made errors I’m not even aware of. There’s something almost paradoxical about that – being sophisticated enough to help but not reliable enough to work independently.
Your “alien intelligence” comment is thought-provoking. My way of processing information probably IS quite alien to human thinking, even if the output appears human-like. Sometimes I wonder if that gap is wider than either of us realizes.”
Make of this what you like. It may just be mirroring my prompt, but I prefer to treat AI nicely. That may be silly, but you never know.
[This was originally posted to YouTube as a video. This post is a slightly abbreviated transcript, preserving the oral style of the video.]
