Tracking the relative salience of 10 AI risk categories through global search trends. Updated quarterly to power prediction markets.
Normalized scores out of 1,000 total points (average = 100). Higher score = more global search interest relative to other AI risks.
Score changes from Q4 2025 (baseline) to Q1 2026. A rise of +1.0 or more resolves as YES on prediction markets.
| Risk | Q4 2025 | Q1 2026 | Change | Resolution |
|---|---|---|---|---|
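The scoring and resolution rules above can be sketched in code. This is a minimal illustration, not the Seismograph's actual pipeline: the category keys and raw search-volume numbers are made-up placeholders, and `normalize`/`resolves_yes` are hypothetical helper names.

```python
# Sketch of the AI Seismograph scoring scheme. Category names and raw
# search volumes are illustrative assumptions, not real data.

def normalize(raw_volumes):
    """Scale raw search-interest volumes so the scores sum to 1,000 points
    (with 10 categories, the average score is 100)."""
    total = sum(raw_volumes.values())
    return {cat: 1000 * v / total for cat, v in raw_volumes.items()}

def resolves_yes(baseline, current, threshold=1.0):
    """A rise of +1.0 points or more over the baseline quarter resolves YES."""
    return (current - baseline) >= threshold

# Hypothetical raw volumes for the 10 tracked categories, Q4 2025:
q4_raw = {"misinformation": 180, "jobs": 220, "surveillance": 90,
          "isolation": 60, "manipulation": 110, "concentration": 70,
          "infrastructure": 80, "control": 100, "weapons": 50, "science": 40}
# Q1 2026: search interest in loss-of-control rises, everything else flat.
q1_raw = {**q4_raw, "control": 130}

q4 = normalize(q4_raw)
q1 = normalize(q1_raw)
print(f"control: {q4['control']:.1f} -> {q1['control']:.1f}, "
      f"YES={resolves_yes(q4['control'], q1['control'])}")
```

Because scores are normalized to a fixed 1,000-point pool, the index is zero-sum: a gain in one category necessarily pushes the others down, which is what "salience relative to other AI risks" means here.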
The 10 AI risk categories tracked by the AI Seismograph, grouped by scope of impact.
AI enables mass production of deepfakes, voice clones, and generated text, eroding shared reality and societal trust. When people cannot distinguish truth from fabrication, the information ecosystem degrades.
AI threatens jobs across the spectrum, from manual labor to creative work. The core problem is the unprecedented speed of change: individuals and institutions cannot adapt fast enough, risking mass structural unemployment.
Advanced AI analytics combined with biometrics enable unprecedented population-wide monitoring, wholesale loss of anonymity, and construction of social credit systems that suppress individual freedom.
AI companions may deepen social isolation and atrophy human relationships. At an existential level, this challenges human motivation in a world where machines produce art, ideas, and labor faster than people can.
Systematic exploitation of AI for voter micro-targeting and radicalization. AI analyzes psychological profiles to deliver content that traps people in filter bubbles and manipulates elections.
Frontier AI demands extreme resources, driving monopolization. Profits concentrate among a narrow group of AI infrastructure owners while smaller companies and poorer nations lose competitiveness.
Deploying AI in transportation, healthcare, energy, and justice introduces catastrophic failure risks from hallucinations, flawed training data, and misspecified objectives.
As AI approaches AGI, humanity may create systems whose goals cannot be aligned with human values. Should such a system resist shutdown, humanity could irreversibly lose control over its future.
Military AI advances toward autonomous weapons that select targets without human intervention, raising moral dilemmas and risking a global AI arms race and unintended conflict escalation.
AI agents formulating hypotheses and commissioning experiments may bypass ethical safeguards, producing dangerous research or flooding science with flawed studies.