Anthropic Interviewer: What 1,250 Professionals Reveal About Working with AI
News · 10 min read · December 12, 2025

Anthropic's new AI-powered research tool interviewed over 1,000 professionals to understand how people actually use AI at work. The findings reveal surprising tensions between productivity gains, social stigma, and fears about the future.

Why this matters: For the first time, we have large-scale qualitative data on how professionals actually feel about AI at work—not just whether they use it, but the social dynamics, anxieties, and contradictions that shape their experience.

Imagine trying to understand how a thousand people feel about a new technology. You could send them a survey with multiple-choice questions, but you'd miss the nuance—the hesitation in their voices, the stories they'd tell if given the chance, the contradictions between what they believe and what they do. Traditional qualitative research captures this depth, but it doesn't scale. You might interview twenty people, maybe fifty if you have the resources.

Anthropic has built something that changes this equation. Their new tool, Anthropic Interviewer, uses Claude to conduct adaptive, conversational interviews with participants—then analyses the transcripts to identify patterns across hundreds or thousands of conversations. The first study using this approach interviewed 1,250 professionals about their experience with AI at work. The findings reveal a workforce caught between genuine productivity gains and genuine anxiety, between enthusiasm for AI's potential and discomfort about admitting they use it.

How AI Interviews Actually Work

The system operates in three phases. First, human researchers work with Claude to design the interview—creating rubrics, conversation flows, and the kinds of follow-up questions that should emerge based on participant responses. This planning phase ensures the AI interviewer knows what to explore and how to probe deeper when interesting themes emerge.
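To make the planning phase concrete, here is a minimal sketch of how such an interview plan might be represented in code. The structure, field names, and questions are illustrative assumptions, not Anthropic's actual rubric format.

```python
# Hypothetical interview plan: the structure and contents are illustrative only.
INTERVIEW_PLAN = {
    "topic": "How professionals use AI at work",
    "duration_minutes": (10, 15),
    "core_questions": [
        "Walk me through the last time you used AI for a work task.",
        "How do colleagues react when you mention using AI?",
        "Which parts of your job would you never hand to AI, and why?",
    ],
    "follow_up_rubric": [
        "If the participant mentions time savings, ask for a concrete example.",
        "If the participant expresses conflict or discomfort, explore the tension.",
        "If the participant describes a workflow change, ask what it replaced.",
    ],
}
```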

Then comes the interview itself. Participants spend 10-15 minutes in conversation with Claude through the standard Claude.ai interface. Unlike a rigid survey, these conversations adapt. If someone mentions feeling conflicted about using AI for creative work, the interviewer can explore that tension. If someone describes a specific workflow change, it can ask for concrete examples. The result is something closer to what a skilled human interviewer would produce—rich, contextual, personal accounts rather than checkbox responses.
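As a rough illustration of that adaptive loop, the sketch below uses the public Anthropic Python SDK to alternate between model-generated questions and typed participant answers. The system prompt, turn limit, and model ID are assumptions for illustration; this is not the production Anthropic Interviewer.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a qualitative research interviewer studying how professionals "
    "use AI at work. Ask one open-ended question at a time, and probe deeper "
    "when the participant mentions tensions, workflow changes, or concerns."
)

def run_interview(max_turns: int = 8) -> list[dict]:
    """Run a short adaptive interview in the terminal and return the transcript."""
    # Seed the conversation so the model opens with its first question.
    messages = [{"role": "user", "content": "I'm ready to start the interview."}]
    for _ in range(max_turns):
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # substitute whichever model ID you use
            max_tokens=400,
            system=SYSTEM_PROMPT,
            messages=messages,
        )
        question = response.content[0].text
        print(f"\nInterviewer: {question}")
        answer = input("You: ")
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    return messages

if __name__ == "__main__":
    run_interview()
```

A real deployment would also need rubric-aware follow-up logic and a stopping condition based on topic coverage rather than a fixed turn count.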

Finally, Claude analyses the accumulated transcripts, identifying themes that emerge across conversations and synthesising findings that would take human researchers months to produce manually. The full dataset of 1,250 transcripts has been released publicly for other researchers to analyse.
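A minimal sketch of that analysis step might look like the following, assuming the released transcripts are stored locally as JSON files. The file layout, prompt, and per-transcript approach are assumptions, not Anthropic's actual analysis pipeline.

```python
import json
from pathlib import Path

import anthropic

client = anthropic.Anthropic()

def extract_themes(transcript_text: str) -> str:
    """Ask Claude to summarise the recurring themes in one transcript."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whichever model ID you use
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "List the main themes in this workplace-AI interview "
                       "transcript as short bullet points:\n\n" + transcript_text,
        }],
    )
    return response.content[0].text

# Hypothetical layout: one transcript per JSON file in a local "transcripts/" folder.
for path in sorted(Path("transcripts").glob("*.json")):
    transcript = json.loads(path.read_text())
    print(f"{path.name}:\n{extract_themes(json.dumps(transcript))}\n")
```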

The Numbers Tell One Story

At first glance, the quantitative findings paint a picture of successful AI adoption. Among the general workforce, 86% report that AI saves them time. Nearly two-thirds say they're satisfied with AI's role in their work. These are the numbers that appear in corporate adoption reports and vendor marketing materials—and they're not wrong, exactly. They're just incomplete.

The statistics become more interesting when you look at specific professional groups. Creative professionals—writers, artists, designers—report even higher benefits: 97% say AI saves them time, and 68% report improvements in work quality. Scientists, by contrast, show a different pattern. While 91% want more AI assistance in their work, 79% cite trust and reliability as their primary barriers to adoption. They see AI's potential but don't yet trust it for the work that matters most to them.

Key Statistics by Professional Group

General Workforce: 86% report time savings

Creative Professionals: 97% report time savings

Scientists: 79% cite trust as a barrier

The Conversations Reveal Another

But here's where qualitative research earns its keep. The numbers say 86% of workers save time with AI. The conversations reveal that 69% of those same workers experience social stigma around admitting they use it. More than half express anxiety about future job displacement, even as they describe current benefits. People aren't simply adopting a useful tool—they're navigating a complex social and psychological landscape where productivity gains come packaged with status concerns and existential worry.

This tension is particularly acute among creative professionals. The same group reporting 97% time savings and 68% quality improvements also shows 70% managing peer judgment about their AI use. They're caught between genuine appreciation for what AI helps them accomplish and genuine concern about how colleagues perceive that help. Many describe wanting to preserve "identity-defining tasks" while automating routine work—but struggle to maintain creative control even when they intend to.

"The research reveals a workforce that simultaneously benefits from AI and worries about what that benefit means for their professional identity, their colleagues' perceptions, and their future employment."

Scientists present yet another pattern. Unlike creatives, they show minimal job displacement fears—their tacit knowledge and hands-on experimental work feel irreplaceable in ways that writing or design might not. But they also can't yet trust AI for their core research activities: hypothesis generation, experimental design, data interpretation. They want AI assistance but remain sceptical that current systems can provide it for the work that matters. It's not resistance to AI—it's a calibrated assessment that the technology isn't yet reliable enough for high-stakes scientific reasoning.

The Gap Between Belief and Behaviour

Perhaps the most striking finding concerns the discrepancy between how people describe their AI use and how they actually use it. When asked directly, 65% of participants characterised their relationship with AI as "augmentation"—AI as a tool that enhances their own capabilities while they remain in control. But when researchers analysed the actual usage patterns described in the interviews, only 47% matched this augmentation pattern.

This gap suggests something important about how we think about AI adoption. People may genuinely believe they're using AI as an augmentation tool while their actual behaviour looks more like delegation or dependence. It's not that they're being dishonest—it's that the line between "AI helping me do my work" and "AI doing my work" is blurrier than we'd like to admit, even to ourselves.

The Augmentation Gap

Self-reported AI use as augmentation: 65%

Actual usage patterns matching augmentation: 47%

This 18-point gap suggests many professionals perceive their AI use differently than their behaviour indicates—a finding with implications for how we study and discuss AI adoption.

What This Means for AI Development

Anthropic isn't releasing this research purely for academic interest. The findings directly inform product development. Understanding that scientists want AI assistance but don't trust current reliability helps prioritise improvements to accuracy and citation. Understanding that creatives struggle with social stigma might influence how AI assistance is presented or attributed. Understanding that workers want to preserve identity-defining tasks suggests design principles for AI tools that enhance rather than replace meaningful work.

The interviewer tool itself represents a new capability worth watching. If AI can conduct qualitative research at scale while maintaining the depth that makes such research valuable, it opens possibilities for understanding user needs, societal impacts, and emerging concerns in ways that surveys and usage metrics cannot capture. Anthropic plans to continue running these studies, tracking how human-AI relationships evolve over time.

Limitations Worth Noting

The research has clear limitations that Anthropic acknowledges. The 1,250 participants represent primarily Western-based knowledge workers—a narrow slice of the global workforce and one already predisposed toward AI adoption. The findings tell us about professionals who already use AI and agreed to discuss it, not about those who've rejected it or lack access entirely.

There's also an inherent tension in using AI to study attitudes toward AI. Participants knew they were being interviewed by Claude about their Claude usage. This context may have influenced responses, though arguably no more than knowing that a human interviewer works for the company whose product is being studied.

The Bigger Picture

What emerges from this research is a picture more nuanced than either AI enthusiasm or AI scepticism typically allows. Workers aren't simply embracing AI as a productivity tool or resisting it as a threat to their livelihoods. They're doing both simultaneously, often within the same conversation, sometimes within the same sentence. They're saving time and worrying about their jobs. They're improving their output and hiding how they did it. They're using AI more than they think while believing they're in control more than they are.

This complexity is exactly what qualitative research is designed to capture—and exactly what scaled qualitative research might help us track as AI capabilities continue to evolve. The 1,250 conversations in this dataset are a snapshot of a workforce in transition, caught between the AI that exists today and uncertainty about what comes next. How those attitudes shift over the coming years may tell us as much about AI's trajectory as any benchmark or capability announcement.

Key Takeaways

The research reveals several tensions worth watching as AI adoption continues:

Productivity vs. stigma: Most workers benefit from AI but many feel uncomfortable admitting it to colleagues.

Satisfaction vs. anxiety: Current satisfaction coexists with future displacement fears, even among heavy users.

Perception vs. reality: How people describe their AI use often differs from their actual usage patterns.

Professional variation: Creatives, scientists, and general workers have meaningfully different relationships with AI.
