2nd Edition • Beta Preview

AIBoK Annual Glossary of AI Terminology for Cutting Through the Hype

A daily AI terminology tracker with a built-in BS detector

* Now with strategic implications, honest translations, and annual awards

What Vendors Say vs. What AI Actually Is

Most AI glossaries tell you what vendors want you to believe. We tell you what the technology actually does—then explain what it means for your business, your sector, and your strategic decisions.

📊

Daily Tracking

We monitor trending AI terms across mainstream media, tech forums, social platforms, and specialist communities—updated daily.

🎯

Five-Dimensional Analysis

Every term analysed across technical characteristics, functional uses, business impact, societal implications, and honest vendor translation.

🏆

Annual Awards

We celebrate the best, critique the worst, and track the terms that actually deliver on their promises—plus the ones that spectacularly don't.

🧭

Strategic Context

Sector-specific implications for executives, policymakers, educators, and practitioners who need to make informed decisions.

How We Track Terminology

Each term gets the full treatment—from technical definition to honest vendor translation. Here are two examples from this week's tracker.

Agentic AI

🔥 Trending Mainstream + Tech
⚠️ Hype Gap: 360 points
Technical
Large language models (LLMs) configured with tool access, memory persistence, and conditional logic loops that allow multi-step task execution.
Functional
Systems that can plan actions, execute tools (search, code, APIs), evaluate results, and iterate—without requiring step-by-step human instruction for each action.
Business Impact
Potential to automate complex workflows (research, reporting, coding) but requires careful orchestration, validation guardrails, and clear scope boundaries to avoid failures.
Societal Impact
Amplifies concerns about AI accountability, job displacement narratives, and regulatory gaps—especially when agents operate autonomously in consequential domains.
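The loop described above (plan, execute a tool, evaluate, iterate) can be sketched in a few lines. This is a minimal, runnable illustration only: `call_llm` is a hypothetical stand-in for a real model API (here a deterministic stub), the `search` tool is invented, and the step budget shows the kind of scope boundary the Business Impact entry recommends.

```python
def call_llm(state):
    # Stub "planner": a real system would call a model API here.
    # It decides the next action from the current state.
    if "result" not in state:
        return {"action": "search", "query": state["task"]}
    return {"action": "finish", "answer": state["result"]}

TOOLS = {
    "search": lambda query: f"top hit for: {query}",  # invented stand-in tool
}

def run_agent(task, max_steps=5):
    state = {"task": task}
    for _ in range(max_steps):          # scope boundary: bounded iterations
        step = call_llm(state)
        if step["action"] == "finish":
            return step["answer"]       # a validation guardrail would go here
        tool = TOOLS[step["action"]]    # tool execution
        state["result"] = tool(step["query"])
    return None                         # task not completed within budget

print(run_agent("latest EU AI Act guidance"))
```

Note that the "autonomy" lives entirely in the loop and the tool list you define, which is the point of the honest translation below: the model fills in steps, but you wrote the workflow.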
🔍

The Honest AI Dictionary

Vendors Say
"Autonomous reasoning systems that can independently solve complex problems, learn from experience, and make strategic decisions without human oversight."
Actually Is
A language model with access to tools (search, calculators, APIs) and the ability to run in loops. It can follow multi-step plans you define, but it's not "thinking" or "reasoning" in any meaningful sense—it's pattern-matching with structured outputs. Autonomy is limited by the tools you give it and the prompts you write.
Reality Check
The hype gap here is massive (360 points). These systems work well for defined workflows with clear success criteria, but fail spectacularly when tasks are ambiguous or require genuine contextual judgment. They're "autonomous" the way a dishwasher is autonomous—useful, but you still need to load it correctly and check the results.

Strategic Implications by Sector

Executives
Don't buy "autonomous AI" promises. Ask vendors: What tasks does it handle? What validation do we need? What happens when it fails? Budget for orchestration and governance, not just licensing.
Government
Current regulations assume human-in-the-loop. Agentic systems blur this boundary. Prepare policy frameworks for accountability when multi-step AI workflows operate without continuous human oversight.
Education
These tools will reshape research workflows (literature reviews, data analysis). Teach students when to use agents vs. when human judgment is non-negotiable. Critical evaluation > blind delegation.
Practitioners
Start with narrow, repeatable workflows (e.g., "scan these docs, extract key dates, format as table"). Build trust through validation. Expand scope gradually. Don't over-promise capability to stakeholders.

Retrieval-Augmented Generation (RAG)

📈 Rising Tech + Underground
Hype Gap: 80 points
Technical
An architecture that combines information retrieval (search) with generative AI. The system searches a knowledge base, retrieves relevant documents, then uses an LLM to synthesise an answer grounded in those sources.
Functional
Reduces hallucination risk by anchoring LLM responses in specific documents. Works well for Q&A systems, chatbots over documentation, and internal knowledge bases where accuracy matters.
Business Impact
One of the most practical GenAI architectures—deployable today with measurable ROI. Common use cases: customer support, internal documentation search, compliance Q&A, research synthesis.
Societal Impact
Lowers barriers to information access (good for education, public services) but quality depends entirely on the underlying knowledge base. "Garbage in, garbage out" still applies.
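The retrieve-then-generate pattern described above can be sketched end to end. This is an illustrative toy, not an implementation: the two documents and the word-overlap relevance score are invented (production systems typically use embeddings), and the prompt is handed off to whatever LLM you use rather than called here.

```python
DOCS = {
    "policy.md": "Refunds are processed within 14 days of a return.",
    "faq.md": "Support hours are 9am to 5pm, Monday to Friday.",
}

def retrieve(query, k=1):
    # Toy relevance score: count of shared words with the query.
    words = set(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query):
    # Ground the model by instructing it to answer only from retrieved text.
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```

Even in this toy, the failure modes from the Reality Check below are visible: if `DOCS` is outdated or the retrieval step surfaces the wrong passage, the grounded prompt grounds the model in the wrong thing.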
💎

The Honest AI Dictionary

Vendors Say
"Perfectly grounded AI responses that eliminate hallucinations and provide authoritative answers backed by your organisation's knowledge."
Actually Is
An LLM plugged into search results from your documents. It's better than pure generation (it reliably reduces hallucination risk, though it doesn't eliminate it) but quality varies wildly based on: (1) how good your search is, (2) how well-structured your docs are, and (3) how you prompt the LLM to use the retrieved content.
Reality Check
This is one of the rare terms with a small hype gap (80 points). RAG actually works when implemented properly. The main risks aren't architectural—they're operational: outdated documents, poor indexing, or users who bypass the system when answers aren't perfect. Treat it like search infrastructure, not magic.

Strategic Implications by Sector

Executives
This is your "quick win" AI project. ROI is measurable (time saved searching docs, reduced support tickets). Budget for document cleanup and ongoing maintenance, not just initial deployment.
Government
RAG systems for public services (benefits info, regulation Q&A) improve accessibility. But: who's liable when the system misinterprets policy? Document version control becomes governance-critical.
Education
Excellent for research literature reviews and course material Q&A. Teach students to verify citations and check retrieved sources—RAG makes accurate answers more likely, not guaranteed.
Practitioners
Start by auditing your knowledge base. RAG is only as good as your docs. Clean metadata, consistent formatting, and regular updates matter more than fancy embedding models. Prototype fast, measure accuracy, iterate.

Celebrating the Best, Critiquing the Worst

Every year we recognise the terms that shaped the AI conversation—for better or worse. Here are our nine award categories (plus a special People's Choice award).

🏆

Most Over-Hyped Term

Awarded to the term with the largest gap between vendor promises and actual capabilities. Maximum marketing, minimum delivery.

2025 Nominee:
Agentic AI
💎

Hidden Gem

For the term that quietly delivers real value without excessive hype. Smallest gap between promise and reality.

2025 Nominee:
RAG (Retrieval-Augmented Generation)
🎭

Most Dramatic Rebranding

For the old technology given a new AI-branded name to sound cutting-edge. Style over substance.

2025 Nominee:
"AI-Powered Search" (it's just... search)
🚀

Fastest Riser

Went from obscurity to boardroom buzzword in under three months. Buckle up.

2025 Nominee:
Agentic Workflows
💀

Fastest Faller

Dominated headlines, then vanished. Turns out the emperor had no clothes.

2025 Nominee:
"Web3 AI Integration"
⚖️

Most Likely to Trigger Regulation

The term that keeps policymakers awake at night. Expect frameworks and guardrails soon.

2025 Nominee:
AI Hallucinations
🌍

Most Geographically Confused

Means completely different things in different regions. Lost in translation.

2025 Nominee:
"AI Safety" (US vs EU definitions)
🏅

Actually Useful

Does exactly what it says on the tin. Boring name, solid delivery, measurable ROI.

2025 Nominee:
Fine-Tuning
🎪

Most Creative Euphemism

For the term that works overtime to avoid saying what it actually means. Points for creativity.

2025 Nominee:
"Stochastic Parrots" (just say the model can't reason)

Full awards ceremony and voting details published January 2026

How We Track the Hype Gap

Every term gets scored on a hype-to-reality scale. Here's how we visualise the gap between vendor promises and actual capabilities.

Hype Gap Analysis Framework

Terms plotted by maturity (x-axis) vs. hype intensity (y-axis)

[Chart: 2×2 quadrant plot. X-axis: Technical Maturity; Y-axis: Marketing Hype. Quadrants: Hype Zone (high hype, low maturity), Proven Value (high maturity, balanced hype), Emerging (low hype, early stage), Hidden Gems (mature, under-hyped). Plotted terms: Agentic AI, RAG, Fine-Tuning, Synthetic Data, AI Safety.]

How to read this: Terms in the top-left (Hype Zone) have massive marketing but limited real-world capability. Bottom-right (Hidden Gems) deliver solid value without excessive promotion. The goal: help you identify which terms deserve investment, and which deserve scepticism.
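The quadrant logic reads directly off the two axes. A minimal sketch, with a midpoint threshold and example scores that are invented for illustration (the tracker's actual scoring scale is not specified here):

```python
def quadrant(maturity, hype, midpoint=50):
    # Classify a term by which side of the midpoint each score falls on.
    if hype >= midpoint and maturity < midpoint:
        return "Hype Zone"
    if hype >= midpoint and maturity >= midpoint:
        return "Proven Value"
    if hype < midpoint and maturity < midpoint:
        return "Emerging"
    return "Hidden Gem"

# Invented (maturity, hype) scores for demonstration only.
terms = {"Agentic AI": (30, 90), "RAG": (70, 45), "Fine-Tuning": (80, 40)}
for name, (maturity, hype) in terms.items():
    print(f"{name}: {quadrant(maturity, hype)}")
```

With these made-up scores, Agentic AI lands in the Hype Zone and RAG and Fine-Tuning land in Hidden Gems, matching the placements described above.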

Ready to Cut Through the Hype?

Subscribe to our daily (or weekly) terminology tracker and get honest AI definitions delivered to your inbox.

Also available: Annual compendium PDF, corporate site licences, and partner co-branding options.

We Need Your Feedback

You're seeing the 2nd Edition concept in beta. Help us refine it before launch by answering a few quick questions.