AIBoK Annual Glossary of AI Terminology

12th Edition (2025)
Cutting Through the Hype Since 2013*
* Or at least, that's what the edition number suggests. More on that cheekiness below.

What Is This?

Independent intelligence on AI terminology—what it actually means vs. what vendors claim

📊 Five Dimensions of Analysis

  • Technical characteristics
  • Functional uses
  • Business impact
  • Societal impact
  • The Honest AI Dictionary

🎯 Four Audience Segments

  • Popular mainstream non-tech
  • Popular mainstream tech
  • Social media platforms
  • Underground but serious forums

🏆 Annual Terminology Awards

  • Most Over-Hyped Term
  • Hidden Gem
  • Zombie Term (won't die)
  • Largest Vendor Hype Gap
  • ...and 5 more categories

For Partners: NextWorkx, CoPilotHQ, and Beyond

Arm your sales teams with credible, independent terminology definitions. Your clients will trust you more when you speak their language—and call out the hype.

Example: When a vendor pitches you "autonomous AI agents," check our definition before proceeding. We've saved partners from multiple hype-driven investments by providing the reality check vendor marketing won't.

Sample Daily Report: 24 October 2025

See how we analyse trending AI terminology across all dimensions

Trending Term: Agentic AI

↗ Rising in Mainstream | ↗ Rising in Tech
Published: 24 October 2025 | Segment: Mainstream Non-Tech

Technical Characteristics

Uses foundation models combined with tools to plan and execute multi-step workflows. Orchestrates LLMs with function calling, loops, and error handling.
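
To make that concrete, here is a minimal, illustrative sketch of what "orchestrating an LLM with function calling, loops, and error handling" typically looks like. The call_model stub, the TOOLS registry, and the step budget are assumptions made for the sketch, not any vendor's actual API.

  import json

  # Illustrative tool registry; the names are placeholders, not a real product's API.
  TOOLS = {
      "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
  }

  def call_model(messages):
      # Stand-in for a real chat-completion call, stubbed so the sketch runs.
      # A real model would decide whether to answer or request a tool call.
      return {"tool": None, "content": "Order 42 has shipped."}

  def run_agent(task, max_steps=5):
      messages = [{"role": "user", "content": task}]
      for _ in range(max_steps):                                      # the "loop" part
          reply = call_model(messages)
          if reply["tool"] is None:                                   # model gave a final answer
              return reply["content"]
          try:
              result = TOOLS[reply["tool"]](**reply.get("args", {}))  # "function calling"
          except Exception as exc:                                    # error handling: feed failure back
              result = {"error": str(exc)}
          messages.append({"role": "tool", "content": json.dumps(result)})
      return "ESCALATE: step budget exhausted"                        # guardrail, not genuine autonomy

  print(run_agent("Where is order 42?"))

The hard-coded step budget and the final escalation line are exactly the kind of guardrails the Honest AI Dictionary entry below refers to.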

Functional Uses

Creates "virtual co-workers" to autonomously complete tasks like data analysis, content generation, customer service, and workflow automation.

Business Impact

Enables productivity gains and new AI co-worker service models. Drives investment in orchestration frameworks and agent platforms.

Societal Impact

Raises autonomy, accountability, and safety concerns for human-AI collaboration. Prompts questions about who is responsible when agents make mistakes.

🚨 The Honest AI Dictionary
Vendors say:

"Autonomous reasoning systems that think independently and make complex decisions without human intervention"

Actually is:

"A language model orchestrated with tools and loops to follow a multi-step plan—useful but not truly autonomous. Still requires guardrails, error handling, and human oversight at critical checkpoints."

📍 Strategic Implication for Executives

When "Agentic AI" dominates terminology but "Governance" lags behind, it signals that organisations are rushing to deploy autonomous systems without adequate oversight frameworks. This pattern preceded major AI failures in financial services (Q3 2025) and healthcare (Q2 2025).

Action: If your vendors are pitching "agentic" systems, insist on seeing their governance framework first. Ask about error rates, escalation procedures, and human review processes.

Trending Term: Retrieval-Augmented Generation (RAG)

→ Stable in Tech | ↗ Rising in Underground
Published: 24 October 2025 | Segment: Mainstream Tech

Technical Characteristics

LLM generation pipeline that integrates a search step to ground responses. Queries external knowledge bases before generating answers.
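
A minimal sketch of that pipeline, with a toy keyword-overlap lookup standing in for a real vector search and a stubbed model call. It shows the shape of RAG (retrieve first, then generate from the retrieved context), not a production system.

  # Toy RAG sketch: keyword overlap stands in for vector search; the model call is stubbed.
  KNOWLEDGE_BASE = [
      "Refunds are processed within 5 business days.",
      "Support is available Monday to Friday, 9am to 5pm UK time.",
  ]

  def retrieve(query, k=1):
      words = set(query.lower().split())
      scored = [(len(words & set(doc.lower().split())), doc) for doc in KNOWLEDGE_BASE]
      return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]

  def generate(prompt):
      return f"(model answer grounded in) {prompt}"   # stand-in for an LLM call

  def rag_answer(question):
      context = retrieve(question)
      if not context:                                 # quality depends on the knowledge base
          return "No grounding found; answer withheld."
      prompt = f"Answer using only this context: {context}\nQuestion: {question}"
      return generate(prompt)

  print(rag_answer("How long do refunds take?"))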

Functional Uses

Produces fact-checked answers by querying external knowledge bases. Used in customer support, analytics, and enterprise Q&A systems.

Business Impact

Improves reliability of AI services. Reduces hallucination risks. Creates market for retrieval infrastructure and vector databases.

Societal Impact

Improves trust by grounding outputs but introduces dependency on curated knowledge bases. Quality depends on retrieval sources.

💎 The Honest AI Dictionary
Vendors say:

"Produces perfectly grounded factual responses with zero hallucinations"

Actually is:

"An LLM plugged into a search step—quality depends entirely on the retrieval sources and prompts used. Significantly reduces hallucinations but doesn't eliminate them. Only as good as your knowledge base."

📍 Strategic Implication for Procurement Teams

RAG is one of the few AI techniques where vendor hype comes close to matching reality. It's the "hidden gem" of 2025: boring, reliable, and it actually works in production.

Action: When evaluating AI solutions for customer support or knowledge management, ask vendors if they use RAG. If they claim "zero hallucinations" without mentioning RAG, they're either lying or don't understand their own system.

Visual Intelligence

Gartner-style visualisations showing term trajectories and hype gaps

Vendor Hype vs. Technical Reality Gap (Q4 2025)

Larger gap = greater need for scrutiny when evaluating vendor claims. Gap = vendor marketing claims (aspirational) minus technical reality (what the technology actually does).

  • Agentic AI: 360-point gap
  • AI Reasoning: 270-point gap
  • Multi-Agent: 220-point gap
  • Synthetic Data: 200-point gap
  • SLMs: 120-point gap
  • RAG: 80-point gap
  • Chain-of-Thought: 40-point gap

🏆 The AIBoK Terminology Awards 2025

Annual recognition (and gentle roasting) of AI industry terminology trends
👑

The "Emperor's New Clothes" Award

Most Over-Hyped Term of 2025
WINNER: Agentic AI
Vendor Hype Gap: 360 points (highest of 2025)
Vendors claim "autonomous reasoning systems that think independently," but the reality is orchestrated workflows with extensive guardrails. The term captured executive imagination and drove massive investment, but few deployments deliver on the autonomy promise.
💎

The "Hidden Gem" Award

Most Under-Hyped Term of 2025
WINNER: RAG (Retrieval-Augmented Generation)
Vendor Hype Gap: 80 points (second-smallest of 2025, and it actually works!)
While vendors obsessed over "agentic" this and "autonomous" that, RAG quietly solved real problems with minimal fanfare. It's the boring, reliable technique that actually works in production. This is what good technology looks like—it solves specific problems reliably without grandiose claims.
🧟

The "Zombie Term" Award

Term That Refuses to Die Despite Being Useless
WINNER: AGI (Artificial General Intelligence)
Still no agreed-upon definition. Still appears in 37% of AI startup pitch decks. Still generates more heat than light in industry conversations. When someone says "we're building AGI," ask them to define it. The evasive answers reveal the term's emptiness.
🎭

The "Vendor Speak" Award

Largest Gap Between Vendor Claims and Reality
WINNER: Autonomous AI Agents
Vendors say "self-directing systems that work independently." Reality: heavily scripted workflows with mandatory checkpoints and extensive error handling. Example: A "customer service agent" that vendors claim "autonomously resolves support tickets" actually escalates 60% of queries to humans.

The "Actually Useful" Award

Term That Describes Something That Actually Works
WINNER: Chain-of-Thought Prompting
Vendor Hype Gap: 40 points (smallest gap)
Simple, effective, well-defined. Ask AI to show its reasoning step-by-step, and you get better outputs. No exaggeration, no misleading claims—just a technique that does what it says. This is the Goldilocks term—not over-hyped, not under-recognised, just right.
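
For reference, the technique is as simple as the award suggests. The wording below is one common phrasing of a chain-of-thought prompt, not a canonical template, and the licence-cost example is invented for illustration.

  # Chain-of-thought prompting is just a prompt pattern: ask for the steps before the answer.
  question = "A licence costs £2,000 per person for 15 people. What is the total cost?"

  plain_prompt = question
  cot_prompt = (
      f"{question}\n"
      "Work through the problem step by step, showing each calculation, "
      "then give the final answer on its own line."
  )
  # Sending cot_prompt rather than plain_prompt to the same model typically yields
  # more reliable arithmetic and an auditable reasoning trace.
  print(cot_prompt)
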
🐴

The "Dark Horse" Award

Term That Emerged from Nowhere in 2025
WINNER: Multi-Agent Systems
Went from obscure research concept to production frameworks in just 8 months. Appeared in mainstream tech media 573% more frequently in Q4 vs. Q1. Genuine technical progress + accessible tooling + compelling use cases = rapid adoption. This is what organic term growth looks like when it's driven by utility rather than marketing.

Want to See All 9 Award Categories?

Including: Nostalgia Award, Regulation Radar, Underground Darling, and the special "Wait, That Worked?" category

Provide Feedback to Unlock Full Awards

How Organisations Use This

Real scenarios where independent terminology intelligence drives better decisions

For Executives

Use the "Business Impact" and "Honest AI Dictionary" columns to separate signal from noise when evaluating vendor pitches and technology investments.

Example: When a vendor pitches "autonomous AI agents," check our definition. We've saved executives from millions in hype-driven investments by revealing what "autonomous" actually means.

For Procurement Teams

Compare vendor claims against our independent analysis. If a vendor's pitch doesn't align with our "What it actually is" definition, ask hard questions.

Example: Vendor claims "zero hallucinations" without mentioning RAG? Red flag. They're either lying or don't understand their own system.

For Training Buyers

Before investing in AI training, understand what you're actually buying. Our definitions help you evaluate whether training addresses real capability gaps or just teaches vendor-specific features.

Example: If "prompt engineering" training costs £2,000/person, is it worth it? Our analysis shows it's now basic literacy—like charging for "Google search training."

We Need Your Feedback

Help us refine this before public launch—your input shapes the final product