The instinct for many when facing a complex professional or academic challenge is to open a chatbot and ask for the answer immediately. However, new research suggests that the timing of that interaction may be the deciding factor in whether the technology enhances your intellect or replaces it. Using AI too early in a problem-solving process may actually erode the very critical thinking skills the user is trying to employ.
Findings presented April 14 at the 2026 CHI Conference on Human Factors in Computing Systems in Barcelona indicate a significant performance gap based on when users engage with AI. Participants who spent time working through a problem independently before consulting a chatbot performed better on critical thinking tasks than those who relied on the AI from the outset. This suggests that whether AI is bad for critical thinking is not a binary question, but rather one of timing and cognitive engagement.
The study, led by computer scientist Mina Lee of the University of Chicago, explored the tension between efficiency and independent reasoning. While AI can provide a rapid boost in output, especially under tight deadlines, that speed often comes at the cost of a deeper, more nuanced understanding of the material.
## The Cognitive Cost of the ‘Quick Answer’
To test these dynamics, Lee and her colleagues assigned 393 participants to a simulation where they acted as city council members. Their task was to review seven distinct documents and write an essay deciding whether to accept or reject a company’s proposal to mitigate a water contamination problem. The researchers manipulated two primary variables: the amount of time allowed (either 30 minutes or 10 minutes) and the timing of access to OpenAI’s GPT-4o chatbot (early, continuous, late, or no access).

The results revealed that the highest essay scores—measured by the number of valid arguments and textual references—came from participants who had sufficient time and waited until the later stages of the process to use the AI. Conversely, those who used the chatbot from the start saw a decrease in the quality of their independent reasoning.
This phenomenon is rooted in the distinction between two primary modes of learning. Barbara Oakley, a systems engineer and education expert at Oakland University in Rochester Hills, Mich., notes that the results align with the difference between slow, effortful reasoning and swift, automatic thinking. Slow learning requires the brain to carefully construct an understanding of a problem and weigh various options. When users jump straight to an AI, they bypass this deliberate phase, relying instead on the “fast” thinking of the machine.
## How AI Access Impacted Performance
| User Group | Primary Outcome | Cognitive Effect |
|---|---|---|
| Sufficient Time + Late AI Access | Highest Essay Scores | Balanced deliberate reasoning with AI refinement. |
| Sufficient Time + No AI Access | Best Information Retention | Maximum engagement with source documents. |
| Insufficient Time + Early AI Access | Higher Speed/Output | Risked adopting AI’s framing over independent analysis. |
| Insufficient Time + Late AI Access | Lower Performance | Struggled with time constraints and delayed support. |
## The Trade-off Between Speed and Depth
The research highlights a precarious trade-off: AI is an exceptional tool for speed, but a potential liability for depth. In the “insufficient time” group (those given only 10 minutes), participants with early access to the chatbot actually scored higher on their essays than those without. On the surface, this looks like a win for AI integration. However, Lee warns that this “boost” is deceptive.

When users are under extreme time pressure, they are more likely to adopt the AI’s framing of the problem. This reduces the variety of arguments they produce and diminishes their actual engagement with the provided evidence. The AI does not help them think better; it simply provides a pre-packaged thought process that the user adopts to meet a deadline.
This “framing risk” is particularly evident in the study’s measurement of myside bias—the tendency to seek out information that confirms existing beliefs. The group that performed best in overcoming this bias was the one that had sufficient time and used the chatbot only toward the end. By forming their own initial hypotheses and analyzing the documents first, they were better equipped to use the AI as a tool for refinement rather than a substitute for thought.
## Developing AI Literacy for the Modern Era
As generative AI becomes embedded in classrooms and boardrooms, the focus is shifting from whether to use these tools to how to use them strategically. The University of Chicago study suggests that the “optimal” workflow involves a period of cognitive struggle—a phase where the human brain does the heavy lifting of synthesis and analysis before the AI is brought in to polish, challenge, or expand the work.
This approach requires a high degree of AI literacy. Users must be aware of their own thinking patterns and recognize when they are sliding into “fast thinking” mode. According to Lee, the goal is for individuals to weigh the risks and benefits of chatbot use based on the specific scenario and the point in the problem-solving timeline.
For those looking to maintain their critical thinking edge, the practical takeaway is to “slow the roll.” By treating AI as a final reviewer or a sounding board rather than a primary architect, users can preserve the cognitive benefits of effortful reasoning while still leveraging the efficiency of large language models.
The research team intends to further explore how different types of time constraints and various AI prompting styles affect the quality of human reasoning. The next step in this line of inquiry will likely involve testing these theories across different professional domains to see if the “late-access” advantage holds true for technical tasks as well as argumentative ones.
We want to hear from you. Do you use AI to brainstorm the start of a project, or do you save it for the final polish? Share your experience in the comments below.
