AI Fundamentals
The foundational concepts, architectures, and advancements in artificial intelligence that underpin everything covered in this wiki. This domain spans machine learning, deep learning, large language models, AI agents, training and inference infrastructure, and the research frontier.
AI is the connective tissue across Atopia Labs’ three verticals — it’s reshaping how software is built, how IT services are delivered, and how security is practiced.
Start Here
New to the AI landscape? These foundational pages provide the essential background:
- Transformer Architecture — The neural network architecture that powers virtually all modern AI. Understanding self-attention and how Transformers process sequences is a prerequisite for everything else here.
- Scaling Laws — Why bigger models perform better, and the compute-optimal tradeoffs that determine how to allocate training budgets. The Chinchilla paper reshaped how the industry thinks about model size vs. training data.
- In-Context Learning — How LLMs like GPT-3 perform new tasks from prompt examples alone, without any weight updates. The capability that makes modern AI assistants possible.
- Reinforcement Learning from Human Feedback — The alignment technique that turned raw language models into useful assistants. How InstructGPT showed that a small aligned model can be preferred over a much larger unaligned one.
- Chain-of-Thought Prompting — Eliciting step-by-step reasoning from LLMs, dramatically improving performance on complex tasks.
- AI Agents — Autonomous systems that combine LLMs with tools and planning to execute multi-step tasks. The current frontier of applied AI, spanning research-driven coding to evolutionary optimization.
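The self-attention mechanism mentioned in the Transformer Architecture entry can be sketched in a few lines. This is a minimal single-head illustration, not production code; the matrix names and shapes are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) pairwise similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # each output is a weighted mix of all values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

The key property to notice: every output vector depends on every input token, with the attention weights deciding how much, which is what lets Transformers model long-range dependencies in a sequence.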
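The compute-optimal tradeoff from the Scaling Laws entry can also be made concrete. The sketch below uses two widely quoted Chinchilla-era heuristics — training cost C ≈ 6·N·D FLOPs and roughly 20 training tokens per parameter — as stated assumptions, not exact results from the paper:

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Rough compute-optimal parameter/token split for a fixed FLOP budget.

    Assumes the common heuristics C ≈ 6 * N * D and D ≈ 20 * N,
    where N is parameter count and D is training tokens.
    """
    # C = 6 * N * (tokens_per_param * N)  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# A ~5.76e23 FLOP budget lands near 70B parameters and 1.4T tokens,
# which is in the ballpark of the Chinchilla model itself.
n, d = chinchilla_optimal(5.76e23)
print(f"params ≈ {n:.2e}, tokens ≈ {d:.2e}")  # params ≈ 6.93e+10, tokens ≈ 1.39e+12
```

The takeaway is the shape of the rule, not the constants: under a fixed compute budget, model size and data should grow together (both roughly as the square root of compute), rather than pouring all additional compute into a bigger model.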