Founder risk
AI wrapper startup graveyard
A thin AI wrapper is an app whose core value comes from calling a frontier model with a prompt and presenting the result in a nicer interface.
Wrappers can grow fast while model interfaces are immature. They can also be erased just as fast when the model provider ships the same workflow natively, improves context handling, bundles tools, or enters the distribution channel where users already work.
The wrapper failure pattern
1. The feature becomes default
A startup sells summarization, file Q&A, code help, image editing, or workflow automation. Then the general assistant ships the same behavior to millions of users.
2. Distribution moves upstream
The workflow leaves the standalone app and enters ChatGPT, Claude, Gemini, Copilot, Google Search, the IDE, the browser, or the cloud platform.
3. The product has no second moat
Without proprietary data, domain depth, compliance, integrations, or customer relationships, the wrapper competes against the platform that supplies its intelligence.
What survives
Survivors usually own a workflow, not just a prompt. They integrate deeply with customer data, embed themselves in daily operations, specialize in a regulated or complex domain, or build switching costs through collaboration features and accumulated records.
The lesson is not "never build on models." The lesson is to make the model only one layer in a product that has its own reason to exist.
Use this checklist
- If OpenAI or Anthropic shipped this exact flow tomorrow, what would still be valuable?
- Does the product own data the model provider cannot see?
- Does the product complete a business process, or only generate an output?
- Can the product survive lower model prices and native platform tools?
- Does the customer describe the product as software they run, or as a nicer prompt box?
Related anti-ai.app timelines
- GPT-4 makes thin wrappers fragile
- o3 and o4-mini combine reasoning with tools
- GPT-5.5 pushes from assistant to real-work agent
- OpenAI brings Codex and managed agents to AWS