GroveAI
Glossary

Foundation Model

A foundation model is a large AI model trained on broad data that can be adapted to a wide range of downstream tasks through fine-tuning or prompting, serving as the base layer for many AI applications.

What is a Foundation Model?

A foundation model is a large-scale AI model that has been pre-trained on vast, diverse datasets and can be adapted to many different tasks without being retrained from scratch. The term, coined by Stanford researchers in 2021, reflects the idea that these models serve as a foundation upon which specific applications are built.

Examples include large language models like GPT-4 and Claude (trained on text), vision models like CLIP (trained on images and text), and multi-modal models that handle text, images, audio, and video. What distinguishes foundation models from earlier AI models is their generality: a single foundation model can be applied to translation, summarisation, question-answering, coding, and many other tasks.

Foundation models are created through a two-phase process. First, pre-training on massive datasets gives the model broad knowledge and capabilities. Then adaptation tailors the model for specific use cases, whether through fine-tuning, instruction-tuning, or prompt engineering. This paradigm dramatically reduces the cost and time needed to deploy AI for new tasks.
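The prompting form of adaptation can be sketched in a few lines. This is a minimal illustration, not a real integration: `call_model` is a hypothetical stand-in for any hosted foundation-model API, and the task templates are invented for the example. The point is that one base model serves several tasks with no retraining, only different prompts.

```python
# Sketch: one foundation model, many tasks, adapted purely via prompting.

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for a real API call to a hosted model;
    # here it just echoes the start of the prompt for illustration.
    return f"<model response to: {prompt[:40]}...>"

# Task-specific prompt templates adapt the same base model to new tasks.
TEMPLATES = {
    "translate": "Translate the following text into French:\n{text}",
    "summarise": "Summarise the following text in one sentence:\n{text}",
    "qa": "Answer the question using the text.\nText: {text}\nQuestion: {question}",
}

def run_task(task: str, **fields: str) -> str:
    # Fill the chosen template, then send the prompt to the shared model.
    prompt = TEMPLATES[task].format(**fields)
    return call_model(prompt)

print(run_task("summarise", text="Foundation models are pre-trained on broad data."))
```

In production the templates would be more carefully engineered, but the structure is the same: the application layer owns the prompts, and the foundation model stays untouched.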

Why Foundation Models Matter for Business

Foundation models have fundamentally changed the economics of AI adoption. Previously, each AI application required its own purpose-built model, training data, and development effort. Now, organisations can build multiple applications on top of a single foundation model, dramatically reducing time-to-value.

The strategic implications are significant. Businesses must decide whether to build on proprietary models (from providers like OpenAI or Anthropic), open-source models (like LLaMA or Mistral), or a combination. Each choice involves trade-offs in capability, cost, control, and data privacy.

Foundation models also shift competitive advantage in AI from model development to innovation in data and the application layer. The companies that benefit most are those that can effectively combine foundation model capabilities with their own unique data and domain expertise, rather than those that build models from scratch.

FAQ


Is a foundation model the same as an LLM?

All LLMs are foundation models, but not all foundation models are LLMs. Foundation model is the broader category that includes language models, vision models, audio models, and multi-modal models. LLM refers specifically to language-focused foundation models.

Should we build our own foundation model?

Almost all organisations should use existing foundation models rather than building their own. Training a foundation model requires hundreds of millions of pounds in compute and data. Instead, focus on fine-tuning, RAG, and prompt engineering to adapt existing models to your needs.
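Of the adaptation options above, retrieval-augmented generation (RAG) is often the quickest to prototype: retrieve the documents most relevant to a question and include them in the prompt. The sketch below is a deliberately naive illustration; `call_model` is a hypothetical stand-in for a real foundation-model API, the documents are invented, and retrieval uses simple keyword overlap where production systems would use vector embeddings and a vector database.

```python
# Minimal RAG sketch: retrieve relevant text, then ground the prompt in it.

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for a real foundation-model API call.
    return f"<answer based on: {prompt[:50]}...>"

# Example knowledge base (invented for illustration).
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm GMT.",
    "Premium plans include priority support and a dedicated account manager.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by the number of words they share with the query.
    words = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    # Prepend the retrieved context so the model answers from your data.
    context = "\n".join(retrieve(question))
    prompt = f"Use the context to answer.\nContext:\n{context}\nQuestion: {question}"
    return call_model(prompt)

print(answer("What is the refund policy?"))
```

The key design point is that the foundation model itself never changes; the organisation's proprietary data enters only at query time, which keeps it current and avoids any retraining.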

How do open-source models compare with proprietary ones?

The gap is narrowing. Open-source models like LLaMA and Mistral are highly capable for many tasks. Proprietary models still tend to lead on the most challenging benchmarks, but for many business applications, open-source models offer an excellent balance of capability, cost, and control.
