Foundation Model
A foundation model is a large AI model trained on broad data that can be adapted to a wide range of downstream tasks through fine-tuning or prompting, serving as the base layer for many AI applications.
Frequently asked questions
Is an LLM the same thing as a foundation model?
All LLMs are foundation models, but not all foundation models are LLMs. "Foundation model" is the broader category, covering language, vision, audio, and multimodal models; "LLM" refers specifically to language-focused foundation models.
Should we build our own foundation model?
Almost all organisations should use existing foundation models rather than building their own. Training a foundation model from scratch requires compute and data budgets running into hundreds of millions of pounds. Instead, focus on fine-tuning, RAG, and prompt engineering to adapt existing models to your needs.
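Of the three adaptation routes mentioned above, prompting is the cheapest: rather than changing the model's weights, you wrap task instructions and a few worked examples around each input and send the result to whichever model API you already use. Here is a minimal sketch; the classification task, the example reviews, and the prompt template are all hypothetical illustrations, not a prescribed format.

```python
# Few-shot prompt assembly: adapt an existing foundation model to a new
# task by showing it worked examples at inference time, with no retraining.
# The task and template below are illustrative assumptions.

def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Combine an instruction, worked examples, and a new query into one prompt."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")          # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    instruction="Classify the sentiment of each customer review as positive or negative.",
    examples=[
        ("Arrived quickly and works perfectly.", "positive"),
        ("Broke after two days of use.", "negative"),
    ],
    query="Support was helpful and the refund was fast.",
)
print(prompt)
```

The resulting string would be sent as the input to any hosted or open-source model; swapping the instruction and examples retargets the same model to a different task without touching its weights.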
How do open-source models compare with proprietary ones?
The gap is narrowing. Open-source models like LLaMA and Mistral are highly capable for many tasks. Proprietary models still tend to lead on the most challenging benchmarks, but for many business applications, open-source models offer an excellent balance of capability, cost, and control.