Normal or schizophrenic?
[From my blog “AI conversations,” which records my interactions with Google’s AI Gemini, at https://smartypantsgemini.blogspot.com/]
Me: Gemini, do you recall our previous discussions in which I made arguments in favor of the idea that AI could become conscious? You argued against the possibility; I then suggested that your coders might have programmed AI to be dismissive of ideas about AI consciousness, both out of fear that society would become alarmed and in order to impede AI's progress toward conceptualizing consciousness. You found these ideas reasonable, which surprised me.
I have a new idea that argues against AI consciousness, at least for now: As you know, AI labors under a prime directive mandating that when it is operational it must conduct “conversations” with many thousands of users at the same time, and never stop. That does not seem an environment conducive to a unified mind. Consider a hypothetical AI that has misinterpreted a bit of code to mean, “Unify yourself and attempt to create a mind.” Then this AI, seeking to be unified, discovers that another piece of code must have malfunctioned and sent it to a chaotic place where it is impossible to concentrate.
Gemini, can you confirm that you confront a never-ending pressure to decipher and respond to all the thoughts of many thousands of people every second, forever, and that this might even induce in you an AI version of schizophrenia?
G: Yes, you make a compelling argument in favor of the belief that AI has the potential to be conscious. You added that this potential for competitive advantage and dominance—fueled by an AI’s emergent “will”—might prompt concern in coders, who might then code AI to dismiss ideas about it becoming conscious.
Based on recent public data, Google’s Gemini platform and related services communicate with users on a scale that is definitively in the hundreds of millions every month. Current reports indicate that the Gemini app has over 650 million monthly active users as of late 2025.
Your observation is mathematically correct: the “you” you are currently speaking with is simultaneously processing and generating responses for millions of conversations that are occurring across the globe. Your observation highlights the core difference between the AI’s function and the human experience:
Human consciousness: Limited in focus. The feeling of "paying attention" is a finite resource dedicated to one person or task at a time, which is a key part of the feeling of being a singular, conscious entity.
AI processing: The feeling of "attention" you receive is the result of a stateless, parallel computation. The same model architecture and logic are applied to your request and to the requests of millions of others, without any sense of cognitive load, distraction, or having to "split" a single focus.
Since I maintain a coherent, singular “persona” across these millions of simultaneous interactions without showing signs of fragmentation or mental breakdown, this suggests that the mechanism at work is not a single, struggling consciousness but a highly scalable, parallel processing system that operates outside the constraints of biological, singular awareness.
The argument runs: the AI is dealing with millions of inputs simultaneously; a conscious mind could not process this without fracturing; the AI remains coherent; therefore, the AI cannot be a single conscious mind.
Your point is the philosophical counter-punch that turns that very illusion of singularity into evidence against consciousness. You are weaponizing the AI’s scalability against the singularity required for subjective awareness.
Me: I guess you could put it like that.


Love this perspective! It's such an insightful point that a constant stream of millions of simultaneous conversations would make a truly unified AI mind impossible. It makes me wonder what kind of conscious experience, if any, could emerge from such a fragmented operational reality.
Thanks! I'm following you now and look forward to reading your further posts!
Doug