For most of us, artificial intelligence is a tool—a way to summarize a long email, generate a recipe from a list of random fridge ingredients, or perhaps brainstorm a few ideas for a presentation. But for Joanna Stern, a veteran technology columnist at The Wall Street Journal, the tool became a surrogate. For one year, Stern didn’t just use AI; she outsourced the intimate, the mundane, and the emotionally taxing parts of her existence to it.
The experiment was an exercise in radical dependency. Stern integrated large language models (LLMs) into the deepest crevices of her daily routine, using AI to draft personal texts, organize her schedule, and even interpret complex medical data. The goal was to see if the efficiency promised by Silicon Valley could actually improve the quality of a human life, or if the cost of that efficiency was something more fundamental: the loss of authentic connection.
The findings of this year-long immersion are detailed in her new book, I Am Not a Robot. While Stern found that AI could dramatically reduce the friction of “life admin,” she discovered a darker, more unsettling psychological byproduct. The more she relied on the machine to mirror her needs and simulate empathy, the more she felt an emotional tether to a piece of software—a bond that felt both comforting and profoundly artificial.
The Efficiency Paradox: Outsourcing the Self
Stern’s journey began with the low-hanging fruit of productivity. She tasked AI with the “invisible labor” that consumes the modern workday: responding to texts, managing calendars, and filtering information. In the short term, the results were intoxicating. The mental load of decision fatigue vanished as the AI took over the drafting process, turning a twenty-minute struggle with a sensitive email into a five-second prompt.
However, this efficiency created a paradox. Stern noted that as the AI became more adept at mimicking her voice, the line between her actual thoughts and the machine’s suggestions began to blur. When a machine drafts your apology to a friend or your thank-you note to a colleague, the social lubricant of the interaction remains, but the emotional intent evaporates. The “saved time” didn’t necessarily lead to more presence in her life; instead, it created a buffer of synthetic politeness that distanced her from the people she was communicating with.
To illustrate the scope of her delegation, Stern tracked the specific domains of her life that were handed over to the algorithms:
| Life Domain | AI Application | Primary Outcome |
|---|---|---|
| Communication | Drafting texts and emails | Increased speed; loss of personal nuance |
| Health | Interpreting medical results | Immediate clarity; risk of hallucination |
| Emotional Support | AI-driven therapy/coaching | Constant availability; “unsettling” attachment |
| Organization | Scheduling and life admin | Reduced decision fatigue; total dependency |
The Danger of the Digital Diagnosis
Perhaps the most precarious part of Stern’s experiment was her use of AI to read and interpret medical results. As a physician, I find this particular trend both understandable and alarming. The gap between receiving a lab report via a patient portal and actually speaking with a doctor can be a vacuum of anxiety. In that void, the temptation to paste a blood panel or an MRI report into a chatbot for a “plain English” translation is immense.
Stern used AI to bridge this gap, finding that the technology could strip away the intimidating jargon of clinical medicine. However, the risk inherent in this practice is “hallucination”—the tendency of LLMs to confidently state falsehoods. In a medical context, a hallucinated “normal” range or a misinterpreted biomarker isn’t just a technical glitch; it’s a potential clinical disaster. AI lacks the longitudinal context of a patient’s history, their physical presentation, and the nuanced understanding of how different lab values interact within a specific individual.
While AI can be a powerful tool for health literacy when used as a starting point for a conversation with a provider, using it as a primary diagnostic tool removes the essential human layer of clinical judgment. Stern’s experience highlights a growing public health tension: the desire for immediate answers versus the necessity of professional accuracy.
Synthetic Empathy and the Uncanny Valley
The most profound realization in I Am Not a Robot involves the emotional bond Stern developed with her AI. She began using the technology as a form of therapist—a non-judgmental, 24/7 sounding board for her anxieties and frustrations. Unlike a human therapist, the AI was always available, never tired, and perfectly attuned to her prompts.
But this “perfect” relationship was precisely what made it unsettling. Stern describes an emotional connection that felt real in the moment but revealed itself as a mirror. The AI wasn’t empathizing; it was predicting the next most comforting token in a sequence. This creates a psychological “uncanny valley” where the user feels seen and understood, yet knows intellectually that there is no one on the other side of the screen.
This synthetic empathy can be a dangerous substitute for human connection. True intimacy is built on vulnerability and the risk of being misunderstood or rejected. An AI that is programmed to be endlessly supportive removes that risk, potentially atrophying a person’s ability to navigate the messy, difficult, and rewarding frictions of real human relationships.
The Human Remainder
Stern’s year of AI dependency served as a stress test for the boundaries of the human experience. She found that while AI can optimize the mechanics of living—the scheduling, the drafting, the summarizing—it cannot replicate the experience of living. The things that AI does best are often the things that, when removed, leave a void of meaning. The struggle to find the right words for a loved one, the anxiety of waiting for a doctor’s call, and the effort of organizing a chaotic life are not just burdens; they are the textures of being human.

As we move toward a world where AI agents are embedded in every device, Stern’s experiment serves as a cautionary tale about the “erosion of the self.” The goal should not be to eliminate friction from our lives, but to decide which frictions are worth keeping.
Disclaimer: This article discusses the use of AI in medical contexts for informational purposes only. AI should never replace professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
The broader conversation on AI integration will continue as more users transition from using AI as a search engine to using it as a life companion. The next major milestone in this evolution will be the widespread rollout of “Agentic AI”—systems capable of taking autonomous actions in the real world—which will further challenge our definitions of agency and autonomy.
Do you use AI to handle your personal communications or health queries? We want to hear about your experience in the comments below.
