Who are the key players in AI?
The AI industry is a layered ecosystem of hardware makers, cloud providers, model developers, and application builders, each dependent on the others.
Who actually builds and runs AI?
The AI industry isn't one company or even one sector. It's a stack of layers, from chips up to consumer apps, where each layer depends on the ones below it.
At each layer, a few key players dominate, and the relationships between layers create the dynamics that shape the industry.
The Hardware Layer: NVIDIA's Dominance
AI training and inference require specialized chips. NVIDIA's GPUs (particularly the H100, H200, and the newer Blackwell B200 architecture) are the industry standard. Their market share in AI training is estimated at over 80%.
Why such dominance? Two reasons:
- CUDA: NVIDIA's programming framework, refined over 15+ years, has become the default for AI research. Switching costs are enormous.
- First-mover advantage: When deep learning took off around 2012, NVIDIA GPUs were already the best available hardware. They've maintained the lead since.
Competitors exist (AMD's MI300X, Intel's Gaudi 3, Google's TPUs, Amazon's Trainium2), but none has displaced NVIDIA's central position. The Blackwell architecture (2024-2025) brought significant efficiency gains, making it even harder for competitors to catch up.
The Cloud Layer: Where Compute Lives
Most AI runs in data centers owned by a few hyperscalers:
- Microsoft Azure: OpenAI's primary cloud partner and largest investor. Runs much of ChatGPT's infrastructure.
- Amazon AWS: Hosts many AI workloads. Offers its own chips (Trainium/Inferentia) alongside NVIDIA.
- Google Cloud: Powers Gemini. Has custom TPUs developed in-house.
These cloud providers aren't just renting compute. They're investors, partners, and sometimes competitors to the model labs.
The Model Layer: Who Builds the Brains
A handful of organizations train frontier models:
- OpenAI: Creator of ChatGPT and the GPT series. GPT-5.1 (November 2025) introduced adaptive reasoning with "Instant" and "Thinking" modes. Backed by Microsoft, and the most recognized name in AI.
- Anthropic: Created the Claude 4.5 series: Sonnet 4.5 (September 2025, positioned as its best coding model), Haiku 4.5 (October 2025, fastest and cheapest), and Opus 4.5 (November 2025, most capable). Founded by former OpenAI researchers with a focus on AI safety. Backed by Google and Amazon.
- Google/DeepMind: Created the Gemini models and AlphaFold. Vast resources and deep integration with Google products; strong in agentic AI and multimodal understanding.
- Meta: Created Llama 3.3. Notably open-weights, enabling the open-source ecosystem and local AI deployment.
- Mistral, Cohere, xAI: Competitive players with strong models. xAI's Grok models are integrated with X (formerly Twitter).
The Application Layer: What You Actually Use
Most people don't interact with models directly. They use applications built on top:
- ChatGPT/Claude.ai: First-party chat interfaces from the model providers
- Cursor, Warp: AI-powered development tools that use multiple underlying models
- Microsoft Copilot: OpenAI models integrated into Office, Windows, GitHub
- Perplexity, You.com: AI-powered search alternatives
- Thousands of startups: Building specialized tools on top of model APIs
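Most of those application-layer tools share the same basic mechanics: they wrap a model provider's API, adding their own prompts, data, and interface on top. A minimal sketch of what that wrapping looks like is below; the endpoint, model name, and payload shape are illustrative placeholders, not any specific provider's real API.

```python
import json

# Placeholder endpoint -- real applications would use a specific
# provider's API (OpenAI, Anthropic, Google, etc.).
API_URL = "https://api.example-model-provider.com/v1/chat"

def build_request(user_message: str,
                  system_prompt: str = "You are a support-ticket summarizer.") -> dict:
    """Assemble a chat request in the shape most model APIs expect:
    a model identifier plus an ordered list of role-tagged messages.
    The application's "secret sauce" is usually in the system prompt
    and any retrieved context it injects here."""
    return {
        "model": "frontier-model-v1",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 512,
    }

request = build_request("Summarize this support ticket in one sentence.")
print(json.dumps(request, indent=2))
```

The point is how thin the layer can be: the hard capability lives in the model behind the API, which is why so many startups struggle to differentiate on the application layer alone.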
Where is value captured?
The economics are still shaking out, but some patterns emerge:
- Hardware: Extremely profitable. NVIDIA's data-center revenue has grown severalfold since the generative-AI boom began, at gross margins few hardware businesses ever reach.
- Cloud: Profitable but competitive. AWS, Azure, and GCP fight for AI workloads. Each invests billions in GPU clusters.
- Models: Increasingly competitive. OpenAI's revenue has grown substantially with enterprise adoption. Anthropic has found traction with developers. The race for capability remains expensive, but revenue is growing.
- Applications: Mixed. AI-native tools like Cursor have found strong product-market fit. Many others struggle to differentiate from ChatGPT and Claude's first-party interfaces.