AI Glossary A–Z

Key AI and machine learning concepts explained in plain language. A reference for business leaders, developers, and anyone navigating the AI landscape.

A

A/B Testing for AI

A/B testing for AI is the practice of comparing two or more variants of an AI system (different models, prompts, or configurations) by serving them to different user groups and measuring which performs better.

Agentic AI

Agentic AI refers to AI systems that can autonomously pursue goals by planning actions, using tools, making decisions, and adapting their approach based on results — moving beyond passive response generation to active task completion.

Agentic Loop

An agentic loop is the core execution cycle of an AI agent — observe the current state, reason about what to do next, take an action, and repeat until the goal is achieved or a stopping condition is met.
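The observe–reason–act cycle can be sketched in a few lines of Python. This is a minimal illustration, not a real agent: `plan_next_step` is a hypothetical stand-in for a language model call, and `TOOLS` holds one toy function.

```python
# Minimal agentic loop sketch: observe -> reason -> act -> repeat.
# `plan_next_step` and `TOOLS` are hypothetical placeholders for an
# LLM call and a real tool registry.

TOOLS = {
    "add": lambda a, b: a + b,
}

def plan_next_step(goal, history):
    # A real agent would call a language model here; this stub makes
    # one tool call and then stops, so the loop terminates.
    if not history:
        return {"action": "add", "args": (2, 3)}
    return {"action": "stop"}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):           # stopping condition: step budget
        step = plan_next_step(goal, history)
        if step["action"] == "stop":     # stopping condition: goal reached
            break
        result = TOOLS[step["action"]](*step["args"])
        history.append((step["action"], result))  # observe the outcome
    return history

print(run_agent("add two numbers"))  # [('add', 5)]
```

The `max_steps` budget is the safety net most agent frameworks add so a confused agent cannot loop forever.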

AI Agent

An AI agent is an autonomous system that uses a language model to reason about goals, plan actions, use tools, and execute multi-step tasks with minimal human intervention.

AI Alignment

AI alignment is the field of research and practice focused on ensuring that AI systems behave in accordance with human values, intentions, and goals — doing what we actually want rather than what we literally specify.

AI Audit

An AI audit is a systematic evaluation of an AI system's performance, fairness, safety, compliance, and governance, providing assurance that the system operates as intended and meets regulatory requirements.

AI Bias

AI bias refers to systematic errors in AI systems that produce unfair outcomes, typically arising from biased training data, flawed model design, or inappropriate application of AI to sensitive decisions.

AI Centre of Excellence (CoE)

An AI Centre of Excellence is a centralised team or organisational unit that provides AI expertise, best practices, governance, and support to enable consistent, effective AI adoption across an organisation.

AI Champion

An AI champion is an individual within an organisation who advocates for AI adoption, drives awareness and experimentation, and helps bridge the gap between AI potential and practical business application.

AI Fairness

AI fairness refers to the design, evaluation, and deployment of AI systems that treat all individuals and groups equitably, avoiding discrimination and ensuring that benefits and harms are distributed justly.

AI Literacy

AI literacy is the ability to understand, evaluate, and effectively interact with AI systems, encompassing knowledge of how AI works, its capabilities and limitations, and how to use it responsibly.

AI Maturity Model

An AI maturity model is a framework that assesses an organisation's current AI capabilities across multiple dimensions and defines progressive stages of AI adoption, from initial experimentation to enterprise-wide integration.

AI Observability

AI observability is the practice of monitoring, tracing, and understanding the behaviour of AI systems in production, providing visibility into performance, quality, costs, and potential issues.

AI Procurement

AI procurement is the process of evaluating, selecting, and purchasing AI solutions, requiring specialised assessment criteria beyond traditional software procurement.

AI Readiness

AI readiness is an organisation's preparedness to successfully adopt and benefit from AI, encompassing data infrastructure, technical capabilities, talent, leadership support, and governance frameworks.

AI Regulation

AI regulation refers to the laws, standards, and governance frameworks established by governments and international bodies to ensure that AI systems are developed and used safely, fairly, and transparently.

AI ROI

AI ROI measures the return on investment from AI initiatives, comparing the business value generated (cost savings, revenue growth, efficiency gains) against the total investment (technology, talent, data, change management).

AI Transparency

AI transparency is the practice of being open and clear about how AI systems work, what data they use, how decisions are made, and what limitations they have, building trust and enabling accountability.

AI Vendor Lock-in

AI vendor lock-in occurs when an organisation becomes so dependent on a specific AI provider's technology, APIs, or data formats that switching to an alternative becomes prohibitively costly or disruptive.

API Gateway

An API gateway is an infrastructure component that sits between clients and AI services, managing authentication, rate limiting, routing, load balancing, and monitoring for AI API traffic.

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI) refers to a hypothetical AI system capable of understanding, learning, and applying knowledge across any intellectual task at or above human level, rather than being limited to a single domain.

Artificial Narrow Intelligence (ANI)

Artificial Narrow Intelligence (ANI) describes AI systems designed and trained to perform a specific task or narrow set of tasks, such as language translation, image recognition, or recommendation engines, without general-purpose reasoning ability.

Attention Mechanism

The attention mechanism is a neural network technique that allows AI models to dynamically focus on the most relevant parts of their input when producing each output, enabling them to capture relationships across long sequences of text.
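The standard scaled dot-product form of attention can be written in a few lines of NumPy. This is a bare-bones sketch of the formula softmax(QKᵀ/√d)·V, with toy random inputs; real models add learned projections, multiple heads, and masking.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                # weighted mix of the values

Q = np.random.rand(3, 4)  # 3 query positions, dimension 4
K = np.random.rand(5, 4)  # 5 key/value positions
V = np.random.rand(5, 4)
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query position
```

Each output row is a weighted average of the value vectors, with the weights determined by how strongly that query "attends" to each position.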

Auto-scaling

Auto-scaling automatically adjusts the number of AI model instances or compute resources based on real-time demand, scaling up during peak traffic and down during quiet periods to optimise cost and performance.

C

Canary Deployment

Canary deployment is a release strategy that gradually rolls out changes to a small subset of users first, monitoring for issues before expanding to the full user base, significantly reducing the risk of AI system updates.

Chain-of-Thought Prompting

Chain-of-thought (CoT) prompting is a technique that instructs AI models to reason through problems step by step before providing a final answer, significantly improving accuracy on complex tasks.
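A simple contrast makes the technique concrete. The wording below is one common CoT phrasing, not a prescribed formula, and the question is invented for illustration.

```python
question = "A shop sells pens at 3 for £2. How much do 12 pens cost?"

# Standard prompt: asks for the answer directly.
standard_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompt: asks the model to show its reasoning first.
cot_prompt = (
    f"{question}\n"
    "Think through the problem step by step, showing your reasoning, "
    "then give the final answer on its own line."
)
print(cot_prompt)
```

On multi-step arithmetic like this (12 pens = 4 groups of 3 = 4 × £2 = £8), the explicit reasoning steps are what drive the accuracy gain.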

Chat Completion

Chat completion is the API pattern used to interact with language models in a conversational format, where messages are sent as a sequence of roles (system, user, assistant) and the model generates the next response.
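The message sequence looks like this in practice. The field names follow the widely used OpenAI-style convention; a given provider's API may differ in detail.

```python
# A conversation as a list of role-tagged messages. Each model reply
# comes back as an "assistant" message; appending it plus the next
# user turn continues the conversation.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is a context window?"},
    {"role": "assistant",
     "content": "It is the maximum text a model can process at once."},
    {"role": "user", "content": "How is it measured?"},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

Because the model is stateless, the full message history is resent on every call, which is why long conversations consume the context window.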

Chunking

Chunking is the process of splitting documents into smaller, meaningful segments for embedding and retrieval in AI systems, with the chunking strategy significantly affecting search quality and response accuracy.
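The simplest strategy, fixed-size chunks with overlap, can be sketched in one function. This is a minimal illustration; production systems usually split on sentence or section boundaries instead, which tends to retrieve better.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Fixed-size chunking: slide a window of `chunk_size` characters
    # forward by (chunk_size - overlap) each step, so consecutive
    # chunks share `overlap` characters of context.
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "x" * 500
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))  # 4 chunks, first one 200 chars
```

The overlap is what prevents a sentence that straddles a chunk boundary from being lost to retrieval.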

Citizen AI

Citizen AI refers to the practice of empowering non-technical business users to build and deploy AI solutions using low-code/no-code tools, under appropriate governance and support structures.

Cloud AI

Cloud AI refers to artificial intelligence services and infrastructure delivered through cloud computing platforms, enabling businesses to access AI capabilities without managing their own hardware or model training.

Code Generation

Code generation is the use of AI models to automatically write, complete, debug, and refactor computer code, significantly accelerating software development workflows.

Computer Vision

Computer vision is a field of AI that enables machines to interpret and understand visual information from images and video, powering applications from quality inspection and medical imaging to autonomous vehicles and document processing.

Constitutional AI

Constitutional AI (CAI) is an alignment approach developed by Anthropic where AI models are trained to follow a set of explicit principles (a 'constitution'), enabling the model to self-critique and revise its outputs for safety and helpfulness.

Containerised AI

Containerised AI packages AI models, their dependencies, and serving infrastructure into portable containers that run consistently across any environment, simplifying deployment and scaling.

Context Window

A context window is the maximum amount of text (measured in tokens) that a language model can process in a single interaction, encompassing both the input prompt and the generated response.
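In practice this is a simple budgeting exercise: input and output must fit within the limit together. The numbers below are illustrative and not tied to any specific model.

```python
# Budgeting a context window: prompt tokens + output tokens must
# both fit inside the model's limit.
context_window = 8192   # model's total token limit (illustrative)
prompt_tokens = 6500    # system prompt + history + retrieved documents

max_output = context_window - prompt_tokens
print(max_output)  # 1692 tokens left for the response
```

This is why long conversations or large retrieved documents can silently squeeze the space available for the model's answer.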

D

Data Labelling

Data labelling is the process of annotating raw data (text, images, audio, or video) with meaningful tags or categories that AI models use to learn patterns during supervised training.

Data Lakehouse

A data lakehouse is a data architecture that combines the flexibility and cost-effectiveness of data lakes with the reliability and performance of data warehouses, providing a unified platform for analytics and AI workloads.

Data Mesh

Data mesh is a decentralised data architecture that treats data as a product owned by domain teams, with a self-serve data platform and federated governance, enabling scalable data management for AI and analytics.

Data Strategy

A data strategy is the organisational plan for collecting, managing, governing, and leveraging data assets to support business objectives and AI initiatives, providing the essential foundation for effective AI adoption.

Deep Learning

Deep learning is a subset of machine learning that uses neural networks with many layers (hence 'deep') to learn complex patterns from large amounts of data, enabling capabilities such as image recognition, language understanding, and speech processing.

Dense Retrieval

Dense retrieval is an information retrieval approach that uses learned dense vector representations (embeddings) to find semantically relevant documents, as opposed to sparse methods that rely on exact keyword matching.
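The core operation is a nearest-neighbour search over vectors, typically using cosine similarity. The 3-dimensional "embeddings" below are toy values for illustration; real systems use a learned encoder producing hundreds of dimensions.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0 unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy document embeddings (hypothetical values).
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
}
# Embedding of the query "how do I get my money back" -- note it
# shares no keywords with "refund policy", only meaning.
query = np.array([0.8, 0.2, 0.1])

best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # refund policy
```

This is the contrast with sparse methods: a keyword search for "money back" would never match "refund policy", but the embeddings land close together.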

Diffusion Models

Diffusion models are a class of generative AI that create images, video, and other media by learning to gradually remove noise from random data, producing high-quality outputs through an iterative refinement process.

Direct Preference Optimisation (DPO)

DPO is an AI alignment technique that trains models directly on human preference data without needing a separate reward model, offering a simpler and more stable alternative to RLHF.

Document Parsing

Document parsing is the process of extracting structured text, tables, images, and metadata from documents in various formats (PDF, Word, HTML, scanned images), making content accessible for AI processing.

M

Machine Learning

Machine learning is a branch of artificial intelligence in which algorithms learn patterns from data and improve their performance over time without being explicitly programmed for each specific task.

Memory (AI)

Memory in AI refers to mechanisms that allow agents and models to retain and recall information across interactions, enabling personalisation, context awareness, and learning from past experiences.

Metadata Filtering

Metadata filtering is a retrieval technique that narrows search results by applying structured attribute filters (such as date, category, or source) alongside semantic search, improving precision and relevance.

Mixture of Experts (MoE)

Mixture of Experts is a neural network architecture that divides the model into specialised sub-networks (experts) and uses a routing mechanism to activate only the most relevant experts for each input, achieving high capability with lower computational cost.

ML Pipeline

An ML pipeline is an automated workflow that orchestrates the steps of machine learning — from data ingestion and processing through model training, evaluation, and deployment — ensuring reproducibility and operational reliability.

MLOps

MLOps (Machine Learning Operations) is a set of practices that combines machine learning, DevOps, and data engineering to reliably deploy, monitor, and maintain AI models in production environments.

Model Card

A model card is a standardised document that describes an AI model's intended uses, performance characteristics, limitations, ethical considerations, and training data, providing transparency for stakeholders.

Model Registry

A model registry is a centralised repository that stores, versions, and tracks AI models along with their metadata, enabling teams to manage the full lifecycle of models from development to production.

Model Serving

Model serving is the process of deploying trained AI models to production infrastructure where they can receive requests and return predictions in real time, handling concerns like scaling, latency, and reliability.

Multi-Agent Systems

Multi-agent systems are architectures where multiple specialised AI agents collaborate, communicate, and coordinate to solve complex tasks that would be difficult or impossible for a single agent to handle alone.

Multi-modal AI

Multi-modal AI refers to artificial intelligence systems that can process, understand, and generate multiple types of data — such as text, images, audio, and video — within a single model.

R

Rate Limiting

Rate limiting controls the number of requests that clients can make to an AI service within a given time period, preventing abuse, managing costs, and ensuring fair access for all users.
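One common implementation is the token bucket: tokens refill at a steady rate, bursts are allowed up to the bucket's capacity, and requests beyond that are rejected. The sketch below is a minimal single-threaded version of that pattern.

```python
import time

class TokenBucket:
    # Token-bucket rate limiter: refills `rate` tokens per second,
    # allows bursts up to `capacity`. Minimal, not thread-safe.
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)      # 2 req/s, bursts of 5
results = [bucket.allow() for _ in range(6)]  # 6 back-to-back requests
print(results)  # first 5 allowed, 6th rejected
```

AI services typically apply limits per API key, and often on tokens processed per minute as well as on request counts.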

Re-ranking

Re-ranking is a retrieval technique that uses a more powerful model to reorder an initial set of search results by relevance, significantly improving the quality of the final results presented to the user or AI model.

Red Teaming (AI)

Red teaming in AI is the practice of systematically probing AI systems for vulnerabilities, failure modes, and harmful outputs by simulating adversarial or edge-case scenarios.

Reflection (AI)

Reflection in AI is a pattern where a model evaluates its own output, identifies errors or improvements, and revises its response, leading to higher-quality results through iterative self-critique.

Reinforcement Learning

Reinforcement learning (RL) is a machine learning paradigm where an agent learns optimal behaviour through trial and error, receiving rewards or penalties for its actions and improving its strategy over time.

Reinforcement Learning from Human Feedback (RLHF)

RLHF is a training technique that uses human judgments to teach AI models which outputs are preferred, aligning model behaviour with human values and expectations for helpfulness, safety, and accuracy.

Responsible AI

Responsible AI is the practice of developing, deploying, and governing AI systems in ways that are ethical, fair, transparent, safe, and accountable, considering the impact on individuals and society.

Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is a technique that enhances large language model responses by retrieving relevant information from external knowledge sources before generating an answer, reducing hallucinations and grounding outputs in factual, current, or proprietary data.
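The retrieve-then-generate flow can be sketched end to end. Everything here is a hypothetical stand-in: `retrieve` replaces a real vector search with a keyword match, `KNOWLEDGE` is a toy corpus, and the final prompt would be sent to a language model rather than printed.

```python
# RAG sketch: retrieve relevant text, then include it in the prompt.
KNOWLEDGE = {
    "returns": "Items can be returned within 30 days of delivery.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question):
    # Real systems rank chunks by embedding similarity; this stub
    # matches on a keyword purely for illustration.
    for key, passage in KNOWLEDGE.items():
        if key in question.lower():
            return passage
    return ""

def answer(question):
    context = retrieve(question)
    prompt = (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    return prompt  # a real system would return llm(prompt)

print(answer("What is your returns policy?"))
```

The "using only the context below" instruction is the grounding step: it pushes the model to answer from the retrieved passage rather than from memorised training data.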

T

Taxonomy

A taxonomy is a hierarchical classification system that organises concepts, content, or data into categories and subcategories, providing structure for navigation, search, and AI-powered classification.

Temperature

Temperature is a parameter that controls the randomness of an AI model's outputs — lower values produce more focused, deterministic responses while higher values increase creativity and variability.
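Under the hood, temperature divides the model's raw scores (logits) before the softmax that turns them into probabilities. The sketch below shows the effect on a toy distribution of three tokens.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution toward the top
    # token; higher temperature flattens it toward uniform.
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # peaked: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: more variety
```

At temperature 0 (greedy decoding) the top token is always chosen, which is why low temperatures are preferred for tasks needing consistency, such as data extraction.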

TensorRT

TensorRT is NVIDIA's high-performance deep learning inference optimiser and runtime that maximises AI model speed on NVIDIA GPUs through precision calibration, layer fusion, and kernel auto-tuning.

Text-to-Speech (TTS)

Text-to-speech is an AI technology that converts written text into spoken audio, producing natural-sounding voice output for applications like virtual assistants, accessibility tools, and content narration.

Tokenisation

Tokenisation is the process of breaking text into smaller units called tokens (words, sub-words, or characters) that AI models can process, forming the fundamental input and output unit for language models.

Tool Calling

Tool calling is the mechanism by which language models generate structured requests to invoke external functions, APIs, or services, enabling them to take actions and access real-time information.
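A concrete round trip: the application declares a tool with a JSON schema, the model emits a structured call rather than prose, and the application executes it. The names and schema shape below follow the common JSON-based convention but are illustrative, not a specific provider's API; `get_weather` is a stub.

```python
import json

# Tool declaration the application sends to the model.
tool_schema = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# The model responds with a structured call instead of prose...
model_output = '{"name": "get_weather", "arguments": {"city": "London"}}'
call = json.loads(model_output)

# ...which the application executes and feeds back to the model.
def get_weather(city):
    return f"18C and cloudy in {city}"  # stub for a real weather API

result = get_weather(**call["arguments"])
print(result)  # 18C and cloudy in London
```

The model never runs the function itself; it only names the tool and its arguments, and the application stays in control of execution.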

Tool Use

Tool use is the ability of language models to invoke external functions, APIs, or services during a conversation, extending their capabilities beyond text generation to interact with real-world systems.

Top-k Sampling

Top-k sampling is a text generation strategy that restricts the language model's next-token selection to the k most probable tokens, balancing creativity and coherence by filtering out unlikely choices.
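A minimal sketch of the mechanism over toy logits: keep the k highest-scoring tokens, renormalise their probabilities, and sample from just those.

```python
import numpy as np

def top_k_sample(logits, k, rng):
    # Keep only the k highest-scoring tokens, renormalise, sample.
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]   # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

rng = np.random.default_rng(0)
logits = [4.0, 1.0, 3.5, 0.2, 2.0]
token = top_k_sample(logits, k=2, rng=rng)
print(token)  # always 0 or 2 -- the two highest-logit tokens
```

With k=2 the low-probability tokens (indices 1, 3, 4) can never be chosen, which is exactly how top-k filters out incoherent continuations while still allowing variety among strong candidates.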

Total Cost of Ownership (AI)

Total cost of ownership for AI encompasses all direct and indirect costs of building, deploying, and maintaining an AI system over its lifecycle, including often-underestimated costs like data, talent, and ongoing operations.

Transfer Learning

Transfer learning is a machine learning technique where knowledge gained from training on one task is applied to a different but related task, dramatically reducing the data and compute needed to build effective AI models.

Transformer

The transformer is a neural network architecture based on self-attention mechanisms that has become the foundation for virtually all modern large language models, enabling them to process and generate text with remarkable capability.

Transformer Architecture

The transformer architecture is a neural network design based on self-attention mechanisms that processes input data in parallel, enabling the training of large, powerful models for language, vision, and other tasks.
