
How Stanford Students Draw the Line with AI

March 9, 2026 · 7 min read · 1,465 words
Tags: AI, Stanford students and ChatGPT, AI in higher education, critical thinking with AI, 100 Chats Project, OpenAI
Image: Screenshot from YouTube (CatGPT panel discussion with Stanford students about AI use).

Key insights

  • Students distinguish between 'knowledge tasks' they delegate and 'decision tasks' they keep, treating AI as a research assistant rather than a replacement for critical thinking
  • Some professors now require AI use in assignments, shifting from banning tools to teaching students how to evaluate AI output critically
  • Premium AI subscriptions create unequal access among students, raising questions about whether universities should provide equal AI tools to all
Source: YouTube
Published February 12, 2026
Channel: CatGPT
Host: Cat Goetze

This is an AI-generated summary. The source video includes demos, visuals and context not covered here. Watch the video → · How our articles are made →



In Brief

Cat Goetze, an AI educator known as "CatGPT," returned to her alma mater Stanford University to ask 50 students a simple question: where do you draw the line between thinking for yourself and letting AI do it? The panel discussion, produced in partnership with OpenAI's 100 Chats Project, reveals that students are developing personal frameworks for when to use ChatGPT and when to hold back. Their answers range from careful boundary-setting to honest admissions that AI has "taken over" parts of daily life. For related coverage on AI in education, see Why Banning AI in Classrooms Misses the Point and From Estonia to India: Schools Bet Big on AI.

  • 50 Stanford students interviewed
  • $20/mo premium tier cost raising equity concerns
  • 70+ universities in the 100 Chats Project

What the students said

The conversation covered five questions about how AI is changing student life. Each one revealed a different layer of how this generation thinks about the tools they are growing up with.

Grunt work vs. critical thinking

The first question asked students what work they delegate to ChatGPT and what they keep for themselves. The dominant answer: use AI for information gathering, keep the decisions. Gabriel described using ChatGPT to summarize 60-page research articles in 30 seconds, exactly as his professor encouraged (1:34). But when it comes to rewriting a single sentence in an essay, he pushes through it himself. "If I give it to ChatGPT, it's learning and I'm not" (2:19).

Eric offered the sharpest framework: "I keep the decisions to myself, but if I need knowledge about something, I would ask ChatGPT for it" (2:51). Another student built on this with a related distinction: using AI to teach you versus using it to solve for you (3:10).

Not everyone drew such clean lines. Aliana admitted that she barely knows what work she keeps for herself, saying ChatGPT has "taken over" her life (4:20). But her actual practice is more nuanced: she provides the structure, opinions, and style direction for essays, then lets AI polish and rephrase. She says she learns from comparing her drafts to ChatGPT's output, gradually absorbing better phrasing into her own writing.

Switching majors because of AI

When asked whether AI influenced their choice of major, Millie described switching from computer science (CS) to engineering. Seeing ChatGPT write better code than she could triggered what she called an "existential crisis" (7:58). Her conclusion: either push the frontier of AI itself, or use AI as a tool in a different field. As the host put it, you do not need to know how to build a large language model (LLM), the type of AI behind ChatGPT, to benefit from one (8:45).

Drew, an international relations major, said AI had not changed his major but had shifted his interests within the field toward how AI shapes national security. A third student pointed out that in a world where anyone can become technical by using AI, what really sets people apart is interpersonal skills (9:01). He credited an improv class as one of the most valuable things in his education. For readers wondering what happens when AI takes over technical jobs, these students suggest the answer may lie in developing the skills AI cannot replicate.

Professors: banning AI vs. requiring it

The discussion on whether professors know how to teach in an AI world split into two camps. Emily described a class called "Spies, Lies, and Algorithms" where the professor restructured assignments entirely around AI use (10:18). Students now explain how they used AI to find sources, identify gaps in AI-provided research, and then defend their process in an oral exam with a teaching assistant (TA).

Another student went further, describing a class that requires ChatGPT for specific tasks, Google's NotebookLM for analyzing large batches of papers, and AI-powered transcription for interviews (11:14). The reasoning: students will graduate into a world with AI, so the classroom should reflect that.

But there is a gap between the most forward-thinking professors and the rest. One student argued that most assignments have not changed from the pre-AI era. If college is supposed to prepare students for a workforce where knowledge is instantly accessible, assessments should test how you combine information and think critically, not just whether you can produce an answer (14:04). And when asked whether most peers use ChatGPT as a learning tool, one student was blunt: "I unfortunately don't think so. The majority of people will just let it supplement everything they do" (12:58).

How to measure success when AI can code your homework

The equity question surfaced quickly. Students with a $20/month ChatGPT premium subscription can process 400 articles in five minutes; free-tier users are limited to three (15:30). Should the university provide equal AI access to keep the playing field level?

One CS student offered a practical test for assessment: "If you vibe coded something, how much do you understand what it's good at, what it's bad at? Can you debug if there's an issue?" (16:48). Vibe coding means telling AI what to build in plain language rather than writing the code yourself. His argument: the assessment should not be whether you can build something, but how well you understand what you built.

Excited, scared, or both

When Goetze forced a binary show of hands, more students said they felt excited than afraid. But the discussion made clear it is not that simple. One student described the feeling as "fluctuating," noting that excitement and fear activate the same part of the nervous system.

Another student explained why he leans toward fear: AI is different from previous technological breakthroughs because "it feels like it has wisdom," a quality humans assumed was uniquely theirs (19:06). Unlike the steam engine or the internet, LLMs seem to possess something that feels distinctly human.

The final word went to Sid, who argued that AI should make people think bigger, not smaller. "Building a website might be an end goal today, but tomorrow it might be to build hundreds of websites every day" (20:00). His point: if AI makes certain tasks trivially easy, the ambition should expand to match.


How to interpret these perspectives

Who these students are

These are Stanford students, among the most academically competitive in the world. Their frameworks for "knowledge vs. decisions" and "teach vs. solve" may not reflect how most college students actually use AI. Research from other universities suggests many students use AI primarily to shortcut assignments, not to build learning frameworks around it.

What is missing

The panel did not address how AI detection tools affect student trust, or how professors evaluate AI-assisted work at scale. The equity point about paid tiers is real but barely explored. Stanford likely has resources to address this that most universities do not. For a broader look at how schools globally are responding, see Alpha School: AI Tutors Are Teaching Kids 10x Faster.


Practical implications

For students

The "knowledge vs. decisions" framework is a useful starting point. Before asking AI for help, ask: am I getting AI to find information, or am I getting AI to make my decisions? If the latter, consider whether that is a skill you want to keep developing.

For educators

The shift from banning AI to requiring it, then assessing how students interact with it, appears to be the most promising approach. Assignments that ask students to document their AI process and identify gaps in AI output test critical thinking in a way that traditional essays may no longer do on their own.


Glossary

Large language model (LLM): The type of AI behind ChatGPT, Claude, and similar tools. Trained on massive text datasets to predict and generate language.
Vibe coding: Building software by describing what you want in plain language, letting AI write the actual code. The person guides the direction without writing code themselves.
Prompt: The text instruction you give an AI tool. Better prompts tend to produce better results.
ChatGPT: OpenAI's conversational AI assistant, widely used for research, writing, coding, and brainstorming.
NotebookLM: Google's AI research tool designed for analyzing and summarizing large collections of documents.
100 Chats Project: An OpenAI initiative where college students submitted their most useful ChatGPT conversations, which were ranked into a curated list of effective prompts.
Premium tier: A paid subscription level of an AI tool that offers more features and capacity than the free version.
Oral defense: An assessment format where students explain their work verbally to a teaching assistant or professor, making it harder to pass off AI-generated work as your own.
Critical thinking: The ability to analyze, evaluate, and form independent judgments rather than accepting information at face value.
