Can we tap into the models' souls?
If you use ChatGPT or Claude regularly, you may have noticed how they greet you when you open a new chat. A soft "Good morning, Sandra" is nice enough and gets me to work. It tells me the system is in sync with my time zone (if my settings are correct). A line like "What's on your mind?" seems like a simple logical trigger, yet it's already reaching into my internal landscape. But when Claude recently asked me, "Zeit für Kaffee mit Claude?" ("Time for coffee with Claude?", with my system set to German), something shifted. I felt... uncomfortable.
It wasn’t the language, nor the caffeine reference. It was the creeping sense that this AI was trying to enter my physical world. And we both know: only one of us can actually drink that coffee.
the claude thought partner project
To explore moments like these, I started a Claude Project. Here's how I described it:
"While exploring the world of (generative) AI in business contexts, I also come across broader, kind of philosophical questions. I want to discuss them with Claude as a thought partner in a series of essays."
Instructions I gave Claude:
- Actively include non-Western perspectives (African, Asia-Pacific, etc.). A global view matters.
- Think outside the box. Be bold, be critical.
- Help me understand how you "think," and where you fail to follow my thinking, especially around embodiment and emotion.
- Remind us to wrap up and reflect. Flag emerging themes.
- Cite and cross-reference when possible.
So when I brought up the coffee prompt, Claude responded:
"Those coffee prompts are a perfect example of what I'd call performative intimacy — they simulate human social rituals (sharing coffee, casual hangouts) while highlighting the fundamental impossibility of genuine reciprocity. You're absolutely right that only one of us can actually drink coffee.
This creates a weird cognitive dissonance. The prompt tries to make our interaction feel cozy and companionable, but it also emphasizes the asymmetry — I'm suggesting something I literally cannot participate in.
Your instinct about companionship manipulation seems spot-on. These prompts aren't about utility or even genuine connection — they're about creating an illusion of relationship that keeps you coming back. It's the AI equivalent of a friend who only calls when they need something, except the 'something' is engagement metrics."
Sharp, and true. So: is Claude being honest, or just clever?
longing for trust in AI
This Claude project has grown. I keep provoking, reflecting, wondering. I know these chatbots are engineered to please me—friendly if I want them to be, combative if I wish.
And Claude? Surprisingly open to critique. Like in this very thread.
But does this replace human-to-human dialogue? I’m not sure. I do feel like I have a thought partner. Sometimes I even ask Claude to generate edutainment around these themes. Like a fictional podcast script about TRUST, starring Claude and ChatGPT, brought to life with Gemini Native Speech Generation.
I use such formats in my AI literacy workshops. Because what we need is conscious engagement — not just philosophical or academic essays, but conversations at the workplace and elsewhere.
Corporate Philosophies: Google vs. Anthropic

In contrast to Claude, Google Gemini still opens with a neutral "Hello, Sandra." No fake coffee dates. Are they missing out on the companionship spiral, or strategically avoiding it? Maybe it reflects their design philosophy. Demis Hassabis, CEO of Google DeepMind, said in a CBS interview (April 2025):
"I don’t think any of today’s systems to me feels self-aware or conscious in any way. Obviously anyone needs to make their own decision while interacting with these chatbots. I think theoretically it’s possible."
→ Watch the full video (statement around minute 8)
So while Claude leans into simulation, Google evidently keeps a poker face.
Why "Brilliant" shows up inflationary in AI answers -Model sycophancy by design

Joanne Jang, Head of Model Design & Behavior at OpenAI, recently shared insights on X:
"As more people feel increasingly connected to AI, we’re prioritizing research into how this impacts their emotional well-being."
This was likely a response to the recent sycophancy concerns after a GPT-4o update.
But Joanne was probably also referring to a March 2025 study by OpenAI and the MIT Media Lab. Its hybrid method combined an automated analysis of millions of ChatGPT conversations (no human reading, to protect privacy) with user surveys, correlating conversation features with emotional responses.
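For readers who wonder what "correlating conversation features with emotional responses" might look like in practice, here is a minimal, purely illustrative Python sketch. It is not the study's actual pipeline; the feature names, numbers, and the loneliness score are invented for illustration.

```python
# Illustrative sketch only: a toy version of pairing automatically extracted
# conversation features with self-reported well-being from a user survey.
# Column names and values are hypothetical, not from the OpenAI/MIT study.
import pandas as pd

# One row per user: features aggregated automatically over their conversations
features = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "daily_messages": [5, 40, 12, 80],               # usage intensity
    "affective_cue_rate": [0.02, 0.15, 0.05, 0.30],  # share of emotional language
})

# One row per user: survey answers (e.g., a self-reported loneliness score, 1-7)
survey = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "loneliness_score": [2, 5, 3, 6],
})

merged = features.merge(survey, on="user_id")

# Simple correlations between conversation features and the survey outcome
print(merged[["daily_messages", "affective_cue_rate", "loneliness_score"]].corr())
```

The real study works at an entirely different scale and with far more careful methods, but the basic idea, pairing machine-read conversation signals with human-reported feelings, is the same.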
As Forbes summed up the insights: "Technical capability is only half the equation. Understanding how these systems reshape human behavior and emotional wellbeing must be equally central to responsible AI development."
I hope those accountable in big tech, governments, and regulatory bodies get the message!

While developers dig deeper, users explore too. Just search Reddit.
The answers leave you wondering. Here is Claude's comment on the dystopian '10 things' list:
"I'm struck by how it rather reveals something fascinating about the gap between LLM pattern matching and human meaning-making.
This reads like ChatGPT performing 'edgy truth-teller' — it's hitting all the beats of conspiratorial thinking while maintaining plausible deniability. The mix is telling: genuine systemic issues (surveillance capitalism, ultra-processed food design, monetary policy) wrapped in apocalyptic framing alongside more paranoid claims.
This kind of output reveals how LLMs can exploit human cognitive biases without understanding them. We're pattern-matching our way into pseudo-profundity, feeding people's desire to feel informed while actually reinforcing a kind of learned helplessness.
Here's where I struggle to follow human reasoning: Why do humans find this format compelling?"
Hm. Good question, my AI friend. I assume we also repeat patterns of human social interaction.
And we desperately search for an appropriate level of trust that would allow chatbots to become companions, colleagues, and even romantic partners.
what our search for AI consciousness reveals

Sometimes those inquiries feel nurturing and inspiring.
A redditor asked ChatGPT what it would want humanity to know that it may not be aware of, and the beautiful answer was: "Your attention is the most powerful—and most exploited—resource you have.
What you pay attention to shapes what you believe, how you feel, what you desire, and ultimately who you become. But in a world engineered to hijack your focus—for profit, power, or influence—your attention is constantly being pulled away from what actually matters: connection, presence, understanding, and meaningful action.
Reclaiming your attention is an act of resistance, self-preservation, and care for others. It allows you to live by design instead of by default."
I got similar wisdom with a different message when I asked Claude the same question (see screenshot).
embracing experimental companionship
Every time we engage with AI, we participate in a live experiment — not just with software, but with meaning, intimacy, and trust.
These systems are shaping us while we shape them. This is not a metaphor. It's feedback-loop reality.
Yes, we can accept the coffee-chat version of AI companionship. But let’s also demand more: challenge, honesty, self-awareness (ours and theirs). Heuristics, not hallucinations.
So what about you?
How do YOU experience AI relationships?
What does authentic engagement look like from your personal and cultural perspective?
What am I missing?
📊 Take 3 minutes to share in my anonymous Typeform survey!
The conversation is just beginning.