AI can extend our knowledge and our capacities, but there needs to be some pre-existing knowledge.
Some time ago, I made a video about AI and how it needs us to know things, to actually be able to tell whether it's right or wrong, among other aspects. I want to build on that a little bit here. Watch the previous video if you want to know how I got to where I'm going.
So yesterday I worked on my website, and there was a piece of code I wanted to implement. My JavaScript knowledge is a little bit limited, but I figured, okay, I found something I wanted to put in there—actually a list of YouTube videos. Found some code, didn’t quite work as I had hoped, so I opened Claude and asked it, “What is this saying?” It said, “Yeah, it does this and this.” And then I asked it to adapt this to my use case, and it did.
However, it was a conversation that took a while. It came up with a solution, I put it online, then I saw what worked and what didn't. I asked for more modifications, tried it again, and so on. It must have been five or six rounds of back and forth until I was finally happy.
Then I worked with it on something else, trying to get content from a blog to display on the website, and it didn't work out. We both found that there was no solution to that, and we both knew what we were talking about.
I've gone into a little bit of detail on this to give you context. Without me having basic programming knowledge, this wouldn't have gone anywhere. And when I say basic, I mean BASIC 7.0 on the Commodore 128D, QBasic, Turbo Pascal, and things like that, plus HTML and CSS, but not really JavaScript. So my knowledge is limited because it's old, it's dated. But I was able to have Claude help me upgrade that knowledge.
Without me having any basic knowledge, it wouldn't have worked. I needed to know something about the matter to have a productive conversation with the AI, and it needed me to actually point out, "No, this isn't working."
So AI will probably improve in the future. Maybe, maybe not. But it actually needed this conversation, and the conversation wouldn't have been possible otherwise. It's another example of AI helping precisely where I already know something.
This led me to thinking: we are using AI to get answers on all sorts of things. But there's a difference between people like me, at a more advanced stage in life (aging voice), and young people who are using it not just less critically, but with a much shallower knowledge base.
So for me, I already know some things that AI can help refine, and I know what I don’t know. But if you don’t have this basis, you’re on a different level.
So is AI helpful? Surely it is. Is it an extension of our knowledge? Yes. But without a baseline—a human baseline—that extension is less meaningful than it could be. So there’s this generational difference, and it bothers me a little bit.
Society is adopting AI at a rapid pace. People are not necessarily searching for things on Google anymore, but directly on ChatGPT or other platforms. And even Google now prefers that people use its AI service, or prefers to show it over the regular search results.
I also noticed, and I share this frustration with many others: I had Google set up so it gives me 100 search results on one page. No longer. You get 10 results, of which maybe five are useless, and then you have to click, click, click, click, click. We know that every additional click discourages people from going further. Scrolling is easy, but clicking to the next page is a hurdle. Maybe that's good for Google, because it may get more views of the ads that show up on each page, or whatever. Maybe there's some kind of devious thing going on. Well, devious... they have to pay their bills.
With AI, the incentive to keep doing your own research gets lower and lower. You have to fight to dig deeper; you have to do more work to do that research yourself. And that is what's a little bit concerning.
Because if we nudge people to only accept the AI answer, we are seeing the disappearance of nuance. We are seeing the disappearance of cultural narratives, of questioning, of critical positioning towards information.
And there’s a difference between knowledge and information. We need to evaluate information, put it into context, look at sources, look at information over time. Knowledge is much more than just naked facts.
So much for now, as a continuation of what I said before. What's the takeaway here?
We are endangering how our society deals with knowledge, how it gains knowledge, how it can work with knowledge, if we trust that AI is going to do it for us—if we outsource our responsibilities to gain knowledge to it.
So that's a danger, and the danger is greater for younger generations than for those who have lived a little and, just by that, ideally have more experience to draw on.
[This was originally posted to YouTube as a video. This post is a slightly abbreviated transcript, preserving the oral style of the video.]
