How We Analyze AI Job Impact

Our methodology breaks each job into its component tasks, scores every task against current AI research, and gives you a clear, honest picture of what is changing.

Why task-level analysis matters

Most "will AI replace my job?" tools give you a single number and call it a day. That is not useful. AI does not replace entire jobs at once — it replaces specific tasks within jobs, often the most routine ones first.

A radiologist does not disappear overnight. But AI can already read certain scan types faster and more accurately than humans in controlled studies. That changes the proportions of the job — less time on routine reads, more time on complex diagnoses. Understanding that shift is far more useful than a yes-or-no prediction.

Our analysis works the way labor economists and automation researchers model job impact: at the task level, with honest framing about timelines and uncertainty.

How jobs break down into tasks

Every job is a collection of recurring tasks. An accountant does data entry, prepares tax returns, performs financial analysis, advises clients, and makes regulatory compliance judgments. Each of these tasks has a different exposure to AI automation.

We decompose each job into its component tasks — typically 8 to 20 per role — using occupational databases, job description analysis, and expert review. Each task is then scored independently against current AI capabilities.

The job-level score is a weighted aggregation of all task scores, where weights reflect the approximate proportion of time spent on each task. This means a job where 80% of time is spent on high-risk tasks scores very differently from one where 80% of time is spent on low-risk tasks — even if both contain some automatable work.
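The weighted aggregation described above can be sketched in a few lines. The task names, weights, and scores below are illustrative placeholders, not actual data from our database:

```python
# Weighted aggregation of task-level scores into a job-level score.
# Weights approximate the share of working time spent on each task.
# All names and numbers here are ILLUSTRATIVE, not real assessments.
tasks = [
    # (task, time_weight, ai_score on the 0-100 scale)
    ("routine data processing", 0.5, 90),
    ("client communication",    0.3, 20),
    ("complex judgment calls",  0.2, 25),
]

def job_score(tasks):
    """Time-weighted mean of task scores (weights need not sum to 1)."""
    total_weight = sum(w for _, w, _ in tasks)
    return sum(w * s for _, w, s in tasks) / total_weight

print(round(job_score(tasks), 1))  # 0.5*90 + 0.3*20 + 0.2*25 = 56.0
```

Note how the single high-risk task (score 90) is diluted by the two mostly-human tasks: the job as a whole lands mid-scale, which is exactly the effect described above.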

The 7-point likelihood scale

Each task and overall job receives a score from 0 to 100, mapped to a 7-point scale that describes how likely AI is to significantly change that work within the next 5 to 10 years.

Safe (0 – 14)

AI has negligible impact on this work. Requires physical presence, complex human judgment, or deep interpersonal interaction that current and near-term AI cannot replicate.

Low (15 – 28)

AI may assist with minor aspects but the core work remains firmly human. Productivity tools may speed up small parts without changing the role meaningfully.

Moderate (29 – 42)

AI tools are beginning to handle some parts of this work. Professionals who adopt these tools gain efficiency, but the role itself is not at risk of significant change yet.

Significant (43 – 57)

AI is actively changing how this work is done. Some tasks are being automated or substantially accelerated. The role is shifting — fewer people may be needed for the same output.

High (58 – 71)

AI can already handle a meaningful portion of this work. Professionals in this range should expect noticeable changes to their role within the next few years.

Very High (72 – 85)

The majority of routine tasks in this role are automatable with current or near-term AI. The role will likely look very different within 5 years. Upskilling toward the remaining human-judgment tasks is important.

Critical (86 – 100)

Almost all tasks in this role are highly automatable. Significant displacement is already underway or expected soon. Career transition planning is strongly recommended.
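The mapping from a 0–100 score to these seven bands is a simple range lookup. A minimal sketch, using the band boundaries listed above:

```python
# Map a 0-100 score to its 7-point likelihood band.
# Upper bounds are inclusive and mirror the scale described above.
BANDS = [
    (14, "Safe"),
    (28, "Low"),
    (42, "Moderate"),
    (57, "Significant"),
    (71, "High"),
    (85, "Very High"),
    (100, "Critical"),
]

def band(score):
    for upper, label in BANDS:
        if score <= upper:
            return label
    raise ValueError("score must be between 0 and 100")

print(band(52))  # Significant
print(band(92))  # Critical
```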

Cognitive AI vs. physical robotics

There is an important distinction between cognitive/digital AI and physical automation (robotics). Current AI — large language models, vision models, code generation tools — excels at information processing: reading, writing, analyzing data, generating content, and pattern recognition. These are the capabilities that are advancing fastest.

Physical tasks — construction, plumbing, nursing care, electrical work, cooking — require dexterous manipulation, spatial reasoning in unstructured environments, and real-time physical adaptation. Robotics capable of performing these tasks reliably and affordably at scale is a decade or more away in most fields.

Our analysis reflects this honestly. A data entry clerk (mostly cognitive tasks) scores very differently from an electrician (mostly physical tasks), even though both are skilled occupations. We do not inflate risk for physical roles just because AI is in the headlines. When robotics does become relevant to a role, we update the scores.

How we stay current

AI capabilities are changing rapidly. An analysis from 12 months ago may already be outdated. Our research pipeline runs monthly and incorporates three types of data:

  • Published AI capability benchmarks — we track major model releases and what they can demonstrably do, filtering hype from real-world performance.
  • Peer-reviewed automation exposure research — academic studies from labor economists that quantify which tasks and occupations are affected by specific technology changes.
  • Deployed AI systems — tracking which AI tools are actually being used in workplaces, not just demonstrated in labs.

When a significant development happens between scheduled updates — a major new model release that changes the ceiling for specific tasks, or a large-scale deployment in a particular industry — we bring that update forward rather than waiting for the next cycle.

Every job analysis page shows when that job was last reviewed, so you can see how recent the data is.

Data sources and attribution

Our analysis draws on several authoritative data sources, including but not limited to those listed below. Our research agents continuously review academic papers, industry reports, government publications, and credible news sources to keep assessments current. We are committed to proper attribution and compliance with each source’s licensing terms.

O*NET Web Services

We use O*NET task definitions as the foundation for job decomposition. Each occupation’s official task list, importance scores, and relevance scores come directly from the O*NET database maintained by the U.S. Department of Labor.

This site incorporates information from O*NET Web Services by the U.S. Department of Labor, Employment and Training Administration (USDOL/ETA). O*NET® is a trademark of USDOL/ETA.

Anthropic Economic Index

The Anthropic Economic Index (January 2025) provides AI task-exposure metrics based on observed Claude usage mapped to O*NET tasks. We use this as a primary calibration reference for cognitive task automation levels.

ILO AI Exposure Index

The International Labour Organization’s AI Exposure Index offers a global perspective on which occupations and tasks face the highest exposure to generative AI, complementing the U.S.-centric O*NET data.

Stanford AI Index

The Stanford AI Index annual report provides benchmark data on AI capability milestones, adoption trends, and economic impact that we use to validate and calibrate our monthly updates.

Example: how an accountant breaks down

Here is a simplified example of how our task decomposition works for an Accountant role. Each task receives its own score, and the overall job score is a weighted aggregate.

Example role

Accountant

  • Data entry and bookkeeping: 92 (Critical)
  • Tax return preparation: 76 (Very High)
  • Financial analysis and forecasting: 52 (Significant)
  • Client advisory and relationship management: 18 (Low)
  • Regulatory compliance judgment: 22 (Low)

See this methodology in action: Art Directors, Accountants, Registered Nurses

About This Methodology

This methodology was developed by the team at AI Job Checker (operated by Peritus slf.) to provide evidence-based assessments of AI's impact on careers. Our analyses are grounded in O*NET occupational data from the U.S. Department of Labor, supplemented by ongoing AI capability research.

Learn more about the team behind AI Job Checker

See the breakdown for your job

Free analysis — no account needed. Search your job title and get a task-level AI impact assessment based on current research.

Check your job