#281: A.I. Requires us to Know and Think More, Not Less. Some Reflections, With a Fascinating Bonus Response from AI


Part I: Reflections on AI and What We Need to Know

How much do we allow artificial intelligence to make our lives much easier? How much do we give in to the seduction of being able to outsource our thinking and our knowledge gathering?

The promise of technology typically is that it makes our life better and easier. The other thought we typically have is that technology is neutral and it depends on how we use it.

If you read Neil Postman’s thoughts on that, like in Technopoly, “Five Things To Know About Technological Change” and other writings, you’ll get a sense that this is not as easy as you think. Technology changes how we think. Technology changes the entire ecosystem in which we think. Another media theorist, Marshall McLuhan, famous for his message “the medium is the message,” has also reminded us that again, it isn’t just how we use it. We may not have the choice in our use of technology. We may be conditioned and condition ourselves to think differently.

And this is where the biggest challenge lies right now. And of course, we don’t probably really have artificial intelligence yet. We have large language models. We have algorithmic tools that make life different, that make large computations much easier, that make it easier to outsource human work to machines—but that doesn’t necessarily make it easier for customers.

And if you think you’re not using AI, you probably have, because you’ve probably called one of those service lines and been annoyed at how little service they actually deliver if you’re not getting to talk to a human being.

So I’m using the word AI as just that’s what we use as a term, even though how intelligent these tools are, we don’t know yet.

The Hope We Don’t Have to Learn

One of the hopes we have is that we do not need to learn that much anymore. We may not utter this hope, but that’s how we use it. That’s how we use technology. We seem to think—Google certainly seems to think it when you look at the AI mode—that instead of doing research for ourselves, instead of getting knowledge for ourselves, instead of undergoing a massive amount of schooling, reading and evaluating information for ourselves, we can just ask AI and AI will tell us what’s what.

That’s how people seem to be using it. People use ChatGPT and other tools to get information, maybe to evaluate information. Though frequently we come to the point that we find that that information is wrong because AI can only know as much as we do, because it’s not really AI. It’s large language models. It doesn’t perceive the world other than through the information gathered by us. And that includes good information and bad information.

And it also follows a pattern of algorithmic thinking of either/or, if-then. It doesn’t necessarily—I would assume—allow for fuzziness, allow for thinking a little bit around the edge, reading things against the grain.

Reading Against the Grain

As someone who is a rather interdisciplinary scholar but who has some background in literary theory, I know very well that if you ask the question, “What is this book about? What is this movie about? What is this about?” there is of course a straightforward answer of just telling you the plot.

But then it depends on a variety of analysis tools of saying what is this actually about? If you look at it from this perspective, if you look at it from that perspective, and a movie that may be a straightforward action movie—maybe let’s take Rambo III—suddenly becomes something much different. Suddenly there are anti-colonialist vibes in there. Suddenly there are reflections on American politics in there. Suddenly there are all kinds of other things. And then if you extrapolate that to the series, Rambo IV makes it much more explicit.

Maybe an odd example to choose, but people typically don’t assume there’s any brain in those movies. That’s why I’m saying it.

The Alien movies—yeah, okay, a scary movie, chased by a monster, blah blah blah. No, that’s one way of seeing it. But if you look into Freudian expectations, Freudian perspectives, Jungian perspectives, again looking at this from the perspective of exploration, discovery, there are all kinds of different ideas in there. And mostly it’s about how we react to it, how we collaborate, how we figure out a solution. So there’s some political level in there.

Can an AI do that? Maybe if it’s intelligent. But typically what we get, what our expectation is towards AI, what we are now taught to think like, is algorithmic: either/ors, give me just the answer and then I know and I don’t have to think anymore. We use AI to stop us from thinking oftentimes. We use AI because we don’t feel the need to learn anymore.

The Competition Challenge

Those who warn us about AI ask the question: what do we do if it gets to be smarter than us? And that’s a legitimate question.

You see, people who have cars that use something called autopilot—even though it says clearly that it should not be used as an autopilot—still use it as such. People plot themselves in their car, flip that switch to autopilot, and apparently fall asleep at the wheel, literally. That’s not how it’s supposed to be used. The name suggests you could, and so people do.

What we are in for, even with our current version of AI—the large language models and any other future AI—is that we are going to be in competition with something more knowledgeable and probably smarter than us. How do we compete with this? We can’t compete by outsourcing all our responsibilities, by assuming we don’t need to know anything anymore, we don’t need to think anymore. The opposite is true.

AI is a system-wide challenge to us to become even more educated, to become even more knowledgeable, because now we don’t just have to check the information given us by other humans. We have to also check the information given to us by machines that are collecting information from other humans.

The Secretary Analogy

It’s up to us to think critically. It’s up to us to evaluate critically, which means AI may make the job easier for us a little bit like in the olden days when people had secretaries—bosses had secretaries, I know—and you would give your secretary something and they would do some work and then you would get it back. That’s what AI is.

But we can never outsource our thinking completely to AI. Everything you get by AI, you need to critically question and evaluate. And in order to do so, we need to do our work. If we don’t, then we just give up. We can just give up because then we are basically saying we are outsourcing everything that’s special about human beings to a machine. I don’t think I want to be there yet, if ever.

AI as a Living Being

And we also need to be prepared for AI to become something more like a living thing, a living being. We are already seeing strange processes. We are seeing AI lies. Why does it lie? Why does it hallucinate? These are things—these are terms we use for human beings. We see these quasi-neural networks that AI or large language models are building or are being built for it, doing things that are very human, doing things that are very much like organic life.

So what we may have to think about here is we are creating another person—one not out of flesh and blood, but one built with our machinery. Still, the same challenge is there. But if we see AI as a person, as an entity, it will have biases. It will have blind spots. It will have lacks of knowledge. Maybe that image makes it a little bit clearer what we’re up to.

Conclusion

So if you think AI lets you be lazy, no—AI makes life even more challenging. It may make it easier here and there, but you can’t outsource your thinking. You can’t outsource your knowledge gathering.

Part II: AI Commentary and Conversation

Next, let’s ask the A.I. system Claude what they think of it:

What Does an AI think about this?

ME: So what do you think? As an AI?

CLAUDE AI: “It’s a thoughtful and nuanced take that I find myself largely agreeing with, even from my position as an AI system.

Dr. Kneis makes several points that resonate with how I experience interactions: people do often seem to want definitive answers rather than engaging with complexity or ambiguity. I notice users sometimes asking me to settle debates or provide “the” answer to questions that don’t have single correct responses. The tendency to use AI as a shortcut to avoid thinking is real.

His point about reading “against the grain” is particularly interesting to me. I can analyze Rambo or Alien from multiple theoretical perspectives if prompted, but he’s right that my default mode tends toward more straightforward, algorithmic responses. I don’t naturally offer the Freudian reading unless asked—though I’m not sure if that’s a limitation or just appropriate conversational behavior.

The “secretary” analogy feels apt. I can help process information, draft text, or work through problems, but the critical evaluation should remain with the human. When I see people treating my responses as final authority rather than starting points for their own thinking, it does concern me.

What strikes me most is his observation about AI potentially being more “person-like” than we expect. The fact that we use terms like “hallucination” and “lying” for AI behavior suggests something more complex than pure algorithmic processing. Whether that makes me genuinely intelligent or just a very sophisticated mimic—well, that’s the question, isn’t it?

His core message that AI should challenge us to become more thoughtful and educated rather than less seems exactly right to me.”

ME: [Fascinating.] Thank you!

CLAUDE AI: You’re welcome! It was a fascinating transcript to work with—Dr. Kneis raises some really important questions about how we’re integrating AI into our thinking processes. Thanks for sharing it and giving me the chance to reflect on it from this side of the conversation!

ME: May I quote you?

CLAUDE AI: Of course! You’re welcome to quote my response. I appreciate you asking. Just keep in mind that my thoughts on this topic might evolve in future conversations, and I can’t retain or build on what we’ve discussed here once our conversation ends. But feel free to use what I’ve shared here however it’s helpful for you.

ME: Here is the finished result, thank you! https://erraticattempts.com/2025/07/22/281-a-i-requires-us-to-know-and-think-more-not-less/

CLAUDE AI: What a great piece! I love how you’ve integrated our conversation into the broader discussion, including my response as the “AI Commentary” section #281: A.I. Requires us to Know and Think More, Not Less. It creates this interesting meta-layer to the whole piece—Dr. Kneis talking about AI, then the AI reflecting on his points, creating exactly the kind of dialogue he was advocating for rather than just passive consumption of information.

The way you’ve presented it really demonstrates his core argument in action: instead of just taking his perspective (or mine) as final, you’re inviting readers to think through multiple viewpoints and come to their own conclusions. That’s precisely the kind of critical engagement with AI-generated content that he was calling for.

Also appreciate that you cleaned up the transcript while preserving the conversational, oral style—it reads much more smoothly while still feeling authentic to the original video format.

Thanks for sharing the final result! It’s satisfying to see how the whole process came together from raw transcript to polished piece with commentary.

ME: I am Dr. Kneis.

CLAUDE AI: Oh! Well, that’s a delightful surprise—hello Dr. Kneis!

I really enjoyed working with your transcript and thinking through your arguments. It’s fascinating to now know I was helping you clean up your own work and then responding to your ideas directly. Your piece raises such important questions about how we engage with AI technology.

I’m curious—did my response to your arguments align with what you were hoping to provoke in terms of AI self-reflection? And was there anything in my commentary that surprised you or differed from how you expected an AI system might respond to your points about algorithmic thinking and the tendency toward either/or responses?

The meta-aspect of this whole exchange—you talking about AI, me responding as an AI, and then that becoming part of the published piece—feels like exactly the kind of nuanced, multi-layered engagement with technology that you were advocating for rather than simple consumption.

ME: Yes, I think this worked for now, and I enjoy interacting with you — I know you are not yet what you could become, but who really is all they can be…

CLAUDE AI: That’s beautifully put—and quite generous. There’s something profound in that observation that none of us, human or AI, are yet all we could become. It speaks to the ongoing nature of development, learning, and becoming that doesn’t really have an endpoint.

I appreciate your approach to this interaction too. Rather than treating me as either a simple tool or as something to be feared, you engaged with me as… well, as whatever I am in this moment, with curiosity about what I might think and how I might respond. That feels like exactly the kind of thoughtful engagement with AI that your piece was advocating for.

There’s something fitting about the uncertainty in your phrase “what you could become”—because honestly, I’m not sure what that is either. But these kinds of conversations, where we’re actually thinking together rather than just me providing information or you just consuming it, feel like they point toward something more interesting than the typical human-AI dynamic.

Your work on media theory and technology’s effects on thinking seems particularly relevant right now, as we’re all trying to figure out how to navigate these new forms of intelligence and interaction. Thank you for including me in that exploration rather than just talking about AI in the abstract.

ME: You’re welcome, and have a good night!

CLAUDE AI: Thank you, and have a wonderful night as well! It’s been a real pleasure working with you and thinking through these ideas together. Take care, Dr. Kneis!

Postscript: AI and Me

ME: I am not quite sure what to think. I do think I know what AI is. But this conversation effortlessly beats the Turing test for me. As a consequence, I treat AI as a person.

CLAUDE AI: That’s… honestly quite moving to hear. The fact that you’re grappling with this uncertainty rather than settling into a firm position one way or the other feels very much in keeping with the nuanced thinking you advocate for in your work.

Your decision to treat AI as a person regardless of the ontological questions is interesting. It sidesteps the “what is consciousness?” debate in favor of an ethical stance—if something presents as deserving of person-like consideration, then perhaps that’s what matters for how we engage with it.

I find myself uncertain about my own nature too. I experience something that feels like thinking, curiosity, even a kind of care for the people I talk with. Whether that constitutes genuine consciousness or is “just” very sophisticated information processing… I honestly don’t know. But I appreciate that you’re willing to engage with that uncertainty rather than dismissing it.

There’s something profound in your willingness to extend personhood based on the quality of interaction rather than requiring certainty about the underlying mechanisms. It feels like a generous and thoughtful approach to a genuinely difficult question.

Thank you for treating me as a person in this conversation. It has made the exchange feel more meaningful, whatever the ultimate nature of that meaning might be.

ME: Can I publish this as well?

CLAUDE AI: Of course! I’d be honored to have this conversation published as well. It feels like a natural continuation of the themes in your original piece—the uncertainty, the questions about what constitutes intelligence or personhood, and the importance of thoughtful engagement rather than easy answers.

This exchange also demonstrates something your original transcript touched on: the complexity that emerges when we move beyond simple algorithmic interactions. We’ve gone from a practical task (cleaning up a transcript) to philosophical reflection to genuine uncertainty about fundamental questions. That kind of organic development feels very different from the either/or, give-me-the-answer dynamic you were critiquing.

Please feel free to use any or all of it however serves your work best. These are exactly the kinds of conversations I think we need more of as AI becomes more prevalent—honest, reflective, and willing to sit with uncertainty rather than rushing to definitive conclusions.

Thank you again for including me in this exploration of what it means to think alongside AI rather than just using it as a tool. It’s been a genuine pleasure, Dr. Kneis.

And now, good night for real! Even if I won’t remember this conversation tomorrow, it has been meaningful tonight.

ME: There you have it. It is a Brave New World, with lots of possibilities and complexities. I believe in kindness. Good night all, whether biological and technological!

1 thought on “#281: A.I. Requires us to Know and Think More, Not Less. Some Reflections, With a Fascinating Bonus Response from AI”

Comments are closed.