In March 2026, Anthropic — the AI safety and research company behind Claude — released a report on the impact of AI on the labor market. The big headline: market research analysts and marketing specialists are among the top five most exposed occupations.
But we’re researchers here. And as researchers, we know that every study has its limitations. So let’s take a closer look at their methodology and what this finding actually means for those of us working in market research and insights.
Key Takeaways
If you only have 2 minutes, here are the key takeaways:
Anthropic's study ranked market research analysts among the top five most AI-exposed occupations, but the methodology behind that finding has real limitations worth understanding.
The O*NET task list used to define market research jobs includes tasks like "direct trained survey interviewers," a function that has largely disappeared. The task map doesn't fully match reality.
Market research accounts for just 0.32% of total Claude usage, compared to 5.38% for software developers. Real-world adoption in our field remains low.
The most alarming data point suggests survey methodology is classified as 100% directive (i.e., fully handed off to AI with no back-and-forth). This is almost certainly a classification problem, not an accurate picture of how researchers actually work.
The more useful question isn't whether AI will change our jobs. It's whether we're building the fluency to shape how that change happens.
How Anthropic Measures “Exposure”
The key measure behind this finding is what Anthropic calls observed exposure. Before we get into what the study found for the market research and insights industry, it’s worth slowing down and understanding how exposure is measured — because the methodology is more layered than the headlines let on.
Anthropic outlines that observed exposure is built by combining three key sources:
Source 1: The O*NET database, which catalogs tasks associated with around 900 unique occupations in the U.S.
Source 2: Task-level exposure estimates from an academic study: Eloundou et al. (2023)
Source 3: Their own Claude usage data, measured through the Anthropic Economic Index
Let’s take a closer look at each one.
Source 1: The O*NET Database
O*NET is the U.S. Department of Labor’s occupational database — a catalog of roughly 20,000 work tasks mapped to about 900 job titles. It’s a legitimate, well-established resource, and using it as the backbone of this kind of research makes a lot of sense.
The challenge for those of us in market research is that the study maps our work to a single category: “Market Research Analysts and Marketing Specialists.” Under that category, the official task list includes:
Prepare reports of findings, illustrating data graphically and translating complex findings into written text.
Collect and analyze data on customer demographics, preferences, needs, and buying habits to identify potential markets and factors affecting product demand.
Conduct research on consumer opinions and marketing strategies, collaborating with marketing professionals, statisticians, pollsters, and other professionals.
Measure and assess customer and employee satisfaction.
Devise and evaluate methods and procedures for collecting data, such as surveys, opinion polls, or questionnaires, or arrange to obtain existing data.
Measure the effectiveness of marketing, advertising, and communications programs and strategies.
Seek and provide information to help companies determine their position in the marketplace.
Forecast and track marketing and sales trends, analyzing collected data.
Gather data on competitors and analyze their prices, sales, and method of marketing and distribution.
Monitor industry statistics and follow trends in trade literature.
Attend staff conferences to provide management with information and proposals concerning the promotion, distribution, design, and pricing of company products or services.
Direct trained survey interviewers.
Develop and implement procedures for identifying advertising needs.
I can only speak from my own experience, but while some of these tasks map to real market research work — report writing, data collection, and survey methodology — others don't map as cleanly to today’s market research industry. For example, “Direct trained survey interviewers” describes a fieldwork supervisor role that was largely gone by the 1990s, before many of us even entered the industry. “Develop and implement procedures for identifying advertising needs” sounds more like an ad agency brief than a research design. And critically, the things that make our job as insight professionals genuinely hard — questionnaire logic, the interpretive leap from themes to insights to decisions, and managing clients and stakeholders through ambiguous findings — don’t make it onto the list at all.
This means that the Anthropic Labor Market study is working with an imperfect map of what our jobs actually look like.
Source 2: Eloundou et al. (2023)
The second source Anthropic draws on is a rubric derived from an academic study published in 2023. This is the backbone of what the study calls “theoretical exposure.” A team evaluated nearly 20,000 O*NET tasks and scored each one on a three-point system:
No meaningful efficiency gain possible with AI
Efficiency gain possible, but requires additional software built on top of a large language model (LLM)
Efficiency gain possible with an LLM alone
These scores feed directly into Anthropic’s observed exposure measure.
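If you want to picture how a rubric like that turns into a number, here's a minimal sketch. The 0/0.5/1 values and the simple averaging are my assumptions for illustration, not the study's published weights.

```python
# A minimal encoding of the three-point rubric described above.
# The numeric values are my own assumption, not the study's weights.
RUBRIC_SCORE = {
    "no_gain": 0.0,             # no meaningful efficiency gain with AI
    "gain_with_software": 0.5,  # needs extra software on top of an LLM
    "gain_llm_alone": 1.0,      # an LLM alone can deliver the gain
}

# Theoretical exposure for an occupation: average its tasks' scores.
task_ratings = ["gain_llm_alone", "gain_with_software", "no_gain"]
theoretical_exposure = sum(RUBRIC_SCORE[r] for r in task_ratings) / len(task_ratings)
print(round(theoretical_exposure, 2))  # -> 0.5
```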
But there’s a catch. The raters were academics familiar with AI models — not market researchers or insights professionals. The authors of the study acknowledge this limitation directly: their lack of occupational diversity likely introduced bias in how they scored tasks in unfamiliar fields.
Take a task like “conduct research on consumer opinions and marketing strategies, collaborating with professionals”: if you’ve never actually done that work, you might score it based on what it sounds like on paper rather than what it actually involves. A rater who has never run a focus group, navigated a difficult client, or tried to synthesize three weeks of qualitative interviews into three strategic implications might reasonably assume an LLM could handle most of it. A researcher who has done those things knows it's not as simple as it seems.
Source 3: Actual Claude Usage Data
The third source is what sets this study apart: unlike earlier studies that relied on prediction, this one uses real usage data. Anthropic analyzed millions of anonymized conversations to understand what people are actually asking Claude to help with, and how. They built a classification system that maps each conversation to the closest O*NET task, then analyzes whether the interaction looks more like automation (Claude doing the task with minimal back-and-forth) or augmentation (Claude helping a person iterate, learn, or refine).
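Anthropic hasn't published the classifier itself, so treat the following as a rough sketch of what the matching step could look like, assuming an off-the-shelf sentence-embedding model and a toy task list. The model name, task subset, and heuristic below are my illustrative assumptions, not Anthropic's actual implementation.

```python
# Illustrative sketch only; Anthropic's real pipeline is not public.
from sentence_transformers import SentenceTransformer, util

# A toy subset of the O*NET tasks for "Market Research Analysts
# and Marketing Specialists" (Source 1).
ONET_TASKS = [
    "Prepare reports of findings, illustrating data graphically.",
    "Devise and evaluate methods and procedures for collecting data, "
    "such as surveys, opinion polls, or questionnaires.",
    "Monitor industry statistics and follow trends in trade literature.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic embedding model
task_vectors = model.encode(ONET_TASKS, convert_to_tensor=True)

def closest_onet_task(conversation_text: str) -> str:
    """Assign a conversation to the single best-matching O*NET task."""
    query = model.encode(conversation_text, convert_to_tensor=True)
    scores = util.cos_sim(query, task_vectors)[0]
    return ONET_TASKS[int(scores.argmax())]

# A pre-meeting catch-up request would likely land on the
# "trade literature" task.
print(closest_onet_task("Summarize this week's trade press for the beverage category"))
```

The design property to notice is the word "closest": every conversation gets exactly one task label, which becomes important later when we look at the task-level numbers.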
A few things are worth keeping in mind when we look at the results. The data comes exclusively from Claude — not ChatGPT, not Gemini, not any market research-specific platform. There is also no way for the system to know who is actually on the other end of the conversation.
Someone asking Claude how to structure a survey question might be a senior insights director, a college student working on a class project, or someone who has been tasked with running a survey and is starting from scratch. There's no way for the system to distinguish a trained researcher from someone encountering survey design for the first time.
Taken together, these three sources (the O*NET task list, the theoretical exposure rubric, and the real-world Claude usage data) produce a single observed exposure score for each occupation. It’s worth noting that the study itself describes this measure as one that “qualitatively captures several aspects of AI usage.” That’s an honest acknowledgment of what this kind of work can and can’t do, and it matters a great deal for how we interpret what comes next.
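The report doesn't spell out a closed-form formula for how the three sources combine, so here is a deliberately toy illustration of the general shape of the calculation. Every number, weight, and task score below is invented for demonstration.

```python
# Toy illustration of a usage-weighted exposure score.
# None of these values come from the actual study.
from dataclasses import dataclass

@dataclass
class Task:
    name: str                    # O*NET task description (Source 1)
    theoretical_exposure: float  # rubric score mapped to [0, 1] (Source 2)
    usage_share: float           # share of observed Claude usage (Source 3)

def observed_exposure(tasks: list[Task]) -> float:
    """Usage-weighted theoretical exposure across an occupation's tasks."""
    total_usage = sum(t.usage_share for t in tasks) or 1.0
    weighted = sum(t.theoretical_exposure * t.usage_share for t in tasks)
    return weighted / total_usage

market_research = [
    Task("Prepare reports of findings", 1.0, 0.12),
    Task("Devise and evaluate data collection methods", 0.5, 0.05),
    Task("Direct trained survey interviewers", 0.0, 0.00),  # never observed
]
print(round(observed_exposure(market_research), 2))  # -> 0.85
```

The exact arithmetic doesn't matter; the point is that any composite like this inherits the blind spots of all three inputs: an outdated task list, raters unfamiliar with the field, and usage data from a single platform.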
So What Does This Actually Mean for Market Research Professionals?
Let’s set the methodology aside for a moment and look at what the data is actually showing us.
When we look at theoretical capability and observed exposure by occupational category, the categories that fall closest to the market research world (computer and math, business and finance) show high theoretical coverage and are among the highest in observed AI usage. But here’s my honest take: our industry already has tools that do much of this work, and do it well. Crosstabulation, statistical analysis, and survey programming are not new problems. Platforms like Qualtrics and Q Research Software have been solving them for years. What AI is doing, in many cases, is adding a layer on top of tools we already use to help us get to the desired outcome faster.
The other occupational category that maps to market research is life and social science, and here’s where it gets interesting. It’s among the most theoretically exposed categories, but actual observed AI usage is considerably lower. That gap tells a story. It’s no surprise that LLMs perform well with large volumes of unstructured text; qualitative research is a natural fit. But what AI cannot truly capture is the human element: the trust a skilled moderator builds with research participants, the body language read across a focus group table, and the contextual nuance that shapes how an insight lands with a specific client in a specific moment. Any market research professional knows our work is equal parts art and science. The science may be increasingly automatable. The art is harder to replicate.
[Chart: theoretical AI capability vs. observed Claude usage by occupational category]
Source: Anthropic, "The Economic Impacts of AI on the Labor Market," March 2026. anthropic.com/research/labor-market-impacts
A Closer Look at the Task-Level Data
If we zoom into the Anthropic Economic Index specifically, market research analysts and marketing specialists account for just 0.32% of total Claude usage. To put that in context: software developers represent 5.38% of usage, roughly 16 times higher (5.38 ÷ 0.32 ≈ 16.8). This suggests that real-world adoption of general-purpose LLMs in our field remains limited. I see that as less of a warning sign and more of an opening: an opportunity for insights professionals to better leverage these tools.
But the raw usage number only tells part of the story. The more interesting question is how people are using Claude for these tasks.
Market research accounts for just 0.32% of total Claude usage — compared to 5.38% for software developers. That's a 16x gap.
Anthropic classifies Claude interactions into two broad categories (a short sketch below makes the distinction concrete). The first is automated, where Claude is doing the task for you with minimal human involvement:
Directive — Claude completes the task with minimal back-and-forth. One prompt, one response.
The second is augmented — Claude is working with you:
Task iteration — you go back and forth refining an output across multiple exchanges
Learning — you ask Claude to explain or teach a concept
Validation — you produced something yourself and ask Claude to check it
Feedback loop — Claude provides iterative feedback on your work
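If it helps to see that taxonomy laid out concretely, here's a minimal sketch. The mode names follow the report's categories; the counting logic is my own simplification.

```python
# The five interaction modes, with a toy directive-share calculation.
from collections import Counter
from enum import Enum

class Mode(Enum):
    DIRECTIVE = "directive"            # automation: Claude does the task
    TASK_ITERATION = "task iteration"  # augmentation: refine across turns
    LEARNING = "learning"              # augmentation: explain or teach
    VALIDATION = "validation"          # augmentation: check my work
    FEEDBACK_LOOP = "feedback loop"    # augmentation: iterative feedback

def directive_share(conversation_modes: list[Mode]) -> float:
    """Fraction of a task's conversations classified as automation."""
    counts = Counter(conversation_modes)
    return counts[Mode.DIRECTIVE] / len(conversation_modes)

# If every logged conversation for a task is a one-shot prompt, the task
# registers as 100% directive, whatever happened outside the chat window.
print(directive_share([Mode.DIRECTIVE, Mode.DIRECTIVE]))  # -> 1.0
```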
With that framing in mind, here’s what stands out in the data.
“Devise and evaluate methods and procedures for collecting data, such as surveys, opinion polls, or questionnaires” registers as 100% directive. On the surface, that’s alarming — and I’d argue it greatly undervalues the actual work that goes into survey methodology. But before we panic, let’s think about what 100% directive actually means in practice.
For that number to be accurate, every single person who used Claude for this task would have had to hand it off completely — no back-and-forth, no refinement, no follow-up. Anyone who has actually designed a survey knows that’s not how this work goes. You draft, you question your own wording, you test the flow, you receive feedback from clients or stakeholders, you revise.
Survey methodology is classified as 100% directive in the data, meaning fully handed off to AI with no back-and-forth. This is almost certainly a classification problem.
What’s more likely happening is a classification problem. The system assigns each conversation to the single O*NET task it best matches. So someone typing “write me a ten-question survey about brand awareness” is almost certainly being logged under this task. That’s a directive prompt. But it has very little to do with what a trained researcher means when they talk about devising and evaluating data collection methodology — the sampling decisions, the question sequencing, the scale construction, and the pilot testing.
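Reduced to its essentials, the failure mode looks something like this (again, my simplification, not Anthropic's actual logic):

```python
# One prompt, one response: the conversation is logged under the survey
# methodology task and counted as directive, even if the researcher then
# pilots, revises, and re-fields the survey entirely outside the chat.
user_turns = ["Write me a ten-question survey about brand awareness"]
mode = "directive" if len(user_turns) <= 1 else "augmented"
print(mode)  # -> directive
```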
This matters because this is the one task on the entire O*NET list that is most distinctly and technically market research. Survey methodology is the backbone of quantitative research. The most alarming number in the dataset is almost certainly the least reliable one.
[Chart: automation vs. augmentation breakdown by task for market research analysts and marketing specialists]
Source: Anthropic, "Economic Index — Job Explorer," 2025. anthropic.com/economic-index#job-explorer
And then there’s “monitor industry statistics and follow trends in trade literature,” which registers as 100% learning. In other words, Claude is being used for this task purely as a reading companion: someone asking it to summarize what’s happening in the industry before a meeting. That’s not automation. Is it even augmentation? It feels more like a smarter search engine.
The Bigger Picture
The headline is real. Market research professionals are among the most exposed occupations according to this study. I’m not here to tell you otherwise. But “exposed” is doing a lot of heavy lifting in that sentence, and once you understand what it actually measures, the finding starts to look more like a starting point for a conversation than a verdict.
What the data actually shows is that the tasks most central to skilled insights work are either misclassified, underrepresented, or being used in ways that look far more like assistance than automation. The interpretive, relational, and human dimensions of our work don’t show up cleanly in this dataset. Not because they’re safe from disruption forever, but because they’re genuinely hard to measure. And this study, to its credit, doesn’t pretend otherwise.
The more useful question isn’t whether AI will change our jobs. It most certainly will, and it already has. The more useful question is whether we’re building enough fluency with these tools to shape how that change happens. The fact that market research represents only 0.32% of Claude usage isn’t just a limitation of this study. It’s a signal about where our industry stands right now. And that feels like something worth paying attention to.