Buy off-the-shelf AI for individual productivity (ChatGPT, Claude) and stand-alone use cases (sales-call analysis, marketing copy). Build custom AI integration when the AI must read from or write to your operational systems — your ERP, CRM, internal databases. Most mid-market businesses default to "buy" when "build" is right (and vice versa). The decision rules are simpler than the vendor pitches make them sound.
The decision is simpler than vendor pitches make it sound
Every AI vendor wants to sell you an AI platform. Every consultant wants to sell you a custom build. Both are right sometimes and wrong sometimes. The actual decision rule is narrow:
“Buy when AI is the product. Build when AI is a feature inside your operation. That is 80% of the decision.”
The four categories
Most AI use cases in a mid-market business fall into one of four buckets. The right answer is different for each.
| Category | Right answer | Examples |
|---|---|---|
| Individual productivity | BUY (subscriptions) | Drafting, research, code assistance — ChatGPT, Claude, Copilot |
| Stand-alone vertical SaaS | BUY (vertical AI tools) | Gong (sales calls), Otter (meetings), Jasper (marketing copy) |
| AI integrated into your operational systems | BUILD (custom integration) | AI inside your ERP, CRM, helpdesk, document workflow |
| AI as a competitive product feature | BUILD (custom) | AI capability that differentiates your product to customers |
Category 1: Individual productivity — buy
Every employee should have access to a frontier model (ChatGPT Plus, Claude Pro, Gemini Advanced, or whatever your security team approves). Cost: $20–25/user/month. Returns are individual but compound across the org.
Skip custom for this category. There is nothing to build that beats what OpenAI, Anthropic, and Google ship. Custom productivity tools were a 2023 idea; in 2026 they look quaint.
Trade-off: data security
Use enterprise plans (ChatGPT Team / Enterprise, Claude for Work) so prompts are not used for training. For sensitive data, route model calls through Azure OpenAI or AWS Bedrock so traffic stays inside your cloud compliance boundary — but the productivity layer itself is still off-the-shelf.
Category 2: Stand-alone vertical SaaS — buy
For specific functions where a vendor has already built the AI workflow end-to-end, buy their product. Examples:
- Gong / Chorus — sales call analysis
- Otter / Fireflies — meeting transcription
- Jasper / Copy.ai — marketing copy at scale
- Notion AI — knowledge base summarization
- GitHub Copilot — coding assistance
These tools cost $20–100/user/month. Building equivalents costs ₹40L+ and would not match what they offer because the vendor is iterating on it full-time.
When to skip vertical SaaS
Skip when the SaaS is so generic it does not match your operation, or when integration into your existing tools (your CRM, your ERP) is poor. In those cases, build custom on top of an LLM API.
Category 3: AI integrated into your operational systems — build
This is where mid-market businesses overspend on the wrong choice. The use cases:
- AI that reads invoices and writes structured data into your accounting system
- AI that triages customer tickets and updates your helpdesk
- AI that searches across your internal documents (RAG over your operational data)
- AI that monitors transactions in your ERP and flags anomalies
- AI that takes a customer email and creates a draft response inside your CRM
Off-the-shelf tools cannot do these because they cannot integrate with your specific operational systems. Enterprise AI platforms try to, but force vendor lock-in. The cost-effective path is custom integration on top of LLM APIs.
| Approach | First-year cost (Indian mid-market) | 3-year TCO |
|---|---|---|
| Custom integration on top of LLM API | ₹8L – ₹25L per use case | Lowest — own the integration, minimal recurring beyond API costs |
| Enterprise AI platform (Salesforce Einstein, etc.) | ₹15L – ₹50L | Higher — license fees compound, lock-in increases switching cost |
| Off-the-shelf SaaS (does not actually integrate deeply) | ₹2L – ₹10L | Lowest cost but does not solve the integration use case — wasted spend |
Custom integration is the right answer when the AI use case requires reading from or writing to your operational systems. Off-the-shelf tools simply cannot reach into those systems.
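To make "custom integration on top of an LLM API" concrete, here is a minimal sketch of the first use case above (invoice extraction into an accounting system). The function names, field names, and the simulated model reply are illustrative assumptions, not a real ERP or vendor API; the point is that the integration layer is mostly prompt construction plus strict validation before anything is written back.

```python
import json

# Fields the accounting system requires before a write-back is allowed.
# Illustrative only; your ERP's schema will differ.
REQUIRED_FIELDS = {"vendor", "invoice_number", "date", "total_amount"}

def build_extraction_prompt(invoice_text: str) -> str:
    """Prompt sent to whichever LLM API you use (OpenAI, Anthropic, Bedrock)."""
    return (
        "Extract the following fields from the invoice below and reply "
        "with JSON only: vendor, invoice_number, date (ISO 8601), "
        f"total_amount (number), currency.\n\nInvoice:\n{invoice_text}"
    )

def parse_and_validate(raw_response: str) -> dict:
    """Parse the model's reply and refuse anything incomplete.

    The validation step is the point: never write model output into an
    operational system without checking it first.
    """
    data = json.loads(raw_response)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"LLM response missing fields: {sorted(missing)}")
    if not isinstance(data["total_amount"], (int, float)):
        raise ValueError("total_amount must be numeric")
    return data

# In production: raw = call_llm(build_extraction_prompt(invoice_text))
# Here a simulated reply stands in for the model, to show the validation path.
sample_reply = json.dumps({
    "vendor": "Acme Traders",
    "invoice_number": "INV-1042",
    "date": "2026-01-15",
    "total_amount": 118000.0,
    "currency": "INR",
})
record = parse_and_validate(sample_reply)
print(record["invoice_number"])
```

Everything downstream of `parse_and_validate` — posting the record into the accounting system — is ordinary integration code against your own systems, which is exactly why no off-the-shelf tool can ship it for you.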
Category 4: AI as a competitive product feature — build
If you are a product company and AI is part of your product (not your operations), you build. Renting a third-party platform for the capability that differentiates your product is a losing position: your differentiator is then available to every competitor with the same subscription.
This category is rarer for mid-market service businesses but applies if you have a SaaS product, an internal platform sold to clients, or a customer-facing AI feature.
The pattern most businesses get wrong
Two common mistakes:
- Subscribing to enterprise AI platforms hoping they will solve operational integration. They rarely do unless you are already deeply embedded in that vendor's ecosystem. The integration depth is shallow; the cost is high.
- Building custom for productivity use cases. Building "our own ChatGPT for the team" wastes 6 months and ₹15L on something Anthropic ships better.
The sequencing that works
Month 1–6: Buy first, observe
Subscribe everyone to a frontier model. Subscribe to vertical SaaS where it fits. Watch which prompts and patterns your team relies on. The patterns are the spec for what to build next.
Month 6–12: Identify the integration gap
You will start hearing things like: "I wish ChatGPT could pull data from our ERP," "I wish it could update tickets directly," "I wish we could search all our internal docs." These are the build candidates.
Month 12+: Build the integrations
Custom integration of one identified high-value use case at a time. ₹8L–₹25L per use case, 8–16 weeks. The investment pays back faster than first-time AI implementations because you have already validated demand internally.
How to evaluate a build proposal
If a vendor proposes a custom AI build, the proposal should answer:
- What specific operational system does the AI integrate with?
- What data flows in, what gets written back?
- What is the success metric, and what is the baseline?
- What ongoing maintenance is required (model updates, prompt iteration, data drift)?
- What is the path to ownership — does the client own the code, prompts, and integration logic?
Vague answers ("we will use the latest LLM") are a red flag.
Quick decision flowchart
| Question | If yes | If no |
|---|---|---|
| Is the use case individual productivity (drafting, research)? | BUY ChatGPT/Claude/Gemini | Continue |
| Is the use case standardized enough for vertical SaaS to solve? | BUY (Gong, Otter, etc.) | Continue |
| Does the AI need to integrate with your operational systems (ERP, CRM, etc.)? | BUILD custom integration | Continue |
| Is the AI a customer-facing product feature? | BUILD custom | Reassess — might not need AI at all |
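The flowchart above can be encoded as a small function, which is handy when triaging a backlog of proposed AI use cases. The category labels mirror the table; the question order matters, since the first "yes" wins.

```python
def build_or_buy(
    individual_productivity: bool,
    vertical_saas_fits: bool,
    needs_operational_integration: bool,
    customer_facing_feature: bool,
) -> str:
    """Apply the four flowchart questions in order; first 'yes' decides."""
    if individual_productivity:
        return "BUY: frontier-model subscription (ChatGPT/Claude/Gemini)"
    if vertical_saas_fits:
        return "BUY: vertical SaaS (Gong, Otter, etc.)"
    if needs_operational_integration:
        return "BUILD: custom integration on an LLM API"
    if customer_facing_feature:
        return "BUILD: custom product feature"
    return "REASSESS: may not need AI at all"

# Example: an invoice-to-ERP use case is not individual productivity,
# is too specific for generic vertical SaaS, and needs ERP integration.
print(build_or_buy(False, False, True, False))
```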
Where to go next
Once you have a build-vs-buy answer for your use case, see the AI readiness audit to confirm you are ready, or the full AI adoption playbook for the rollout sequence.
Need help with the build-vs-buy decision?
Tell us the specific AI use case you're considering and we'll give you a straight read in 30 minutes — including which off-the-shelf tools to evaluate first, and whether custom integration actually pays back for your scale.