My AI Engineering Philosophy: Why I Never Get Locked In
How I Learned This Lesson the Hard Way
When OpenAI first launched ChatGPT in November 2022, I was amazed. Here was a GPT-3.5-powered chatbot that could actually hold conversations. When GPT-4 launched in March 2023, I happily signed up for the $20/month Plus plan. It was a bargain — cutting-edge AI at my fingertips.
But this wasn't my first encounter with AI. I'd been working mostly with PyTorch and TensorFlow for some time, training my own models for medical metrics and research. That was the deep learning era — open-source frameworks, transparent architectures, and a truly open way of doing things. You trained your own models, you owned your own code, you controlled your own destiny.
Then ChatGPT happened, and everything shifted. The center moved from open research to vendor-trained models. The term "AI" exploded in popularity, but it meant something different now. Instead of building and training models, we were calling APIs. Instead of understanding architectures, we were optimizing prompts. Instead of owning our models, we were renting them.
Like everyone else, I followed the "standard" instructions: use OpenAI's API, optimize prompts for their model, ship fast.
But then I made what I now consider a rookie mistake: I invested deeply in LangChain, building flows tied tightly to its abstractions. Then model version updates broke things. Same story with Guidance, DSPy, and AutoGen — all great tools that I still keep in my toolkit, but each with its own quirks, dependencies, and upgrade pains.
It was a wake-up call: the deeper you go into one stack without guardrails, the harder it is to adapt when something changes — and in AI, everything changes fast.
The Trap Most Developers Fall Into
Every developer I know has been there. You start with a tool that feels perfect — maybe it's OpenAI, AWS Bedrock, or Google Vertex AI. It works beautifully, you build fast, and you think you've found the platform.
Then one day you realize:
- Your prompts only work with one model.
- Your entire pipeline is bound to one API.
- Your deployment lives and dies on one vendor's infrastructure.
That's vendor lock-in, and it's the silent killer of AI agility.
My Philosophy: Vendor-Neutral, Model-Agnostic Development
I live by one rule:
Never build anything that can't run on any model, any platform, any time.
Why? Because I've been burned — and I know how expensive "re-platforming" can be.
1. Models Change Faster Than You Think
New models — Claude, LLaMA, Mistral, and more — launch every month. There are YouTube channels and major online publications devoted entirely to covering the latest releases. If you're locked to one model, you're already obsolete.
2. The Wood and Paper Analogy
Think of it this way: the lowest level of AI capability is like a group of 5-year-olds gathering wood pieces — it doesn't take a PhD to do that task. But if you need that wood chipped, pulped, and pressed into paper, you need real machinery. In AI terms, that's when you need an advanced agent and its processing power.
Simple tasks don't require advanced AI. But complex processes — like converting raw data into structured insights, or orchestrating multi-step workflows — absolutely need the processing power of models like Kimi K2 or DeepSeek-R1.
The problem is that you're paying premium prices for access to these advanced capabilities, but you don't own them. You're renting processing power that could be taken away or changed at any moment.
3. Vendors Change Their Terms
Prices go up. APIs get deprecated. Features disappear. If you're locked in, you're powerless.
How This Philosophy Plays Out in My Work
Abstraction Layers Everywhere
I never call a model API directly. I route through an abstraction that can swap models on the fly.
LiteLLM goes a long way toward truly unified, model-agnostic calls. I've also forked token.js to do the same thing on the JavaScript side. Rust has some interesting possibilities in this space for performance-heavy pipelines — I'm keeping an eye on it and experimenting where it makes sense.
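LiteLLM handles this in production; as a minimal, self-contained sketch of the underlying pattern — stub backends and made-up names, not any real SDK — a swap-on-the-fly router might look like this:

```python
from typing import Callable, Dict, List, Sequence

# A backend is anything that takes chat messages and returns text.
# In practice each one would wrap a real SDK (OpenAI, Anthropic, a local
# llama.cpp server, ...); here they are stubs so the sketch runs anywhere.
Backend = Callable[[List[dict]], str]

class ModelRouter:
    """Route all chat calls through one interface so models can swap freely."""

    def __init__(self) -> None:
        self._backends: Dict[str, Backend] = {}

    def register(self, name: str, backend: Backend) -> None:
        self._backends[name] = backend

    def complete(self, messages: List[dict], model: str,
                 fallbacks: Sequence[str] = ()) -> str:
        # Try the preferred model first, then each fallback in order.
        for candidate in [model, *fallbacks]:
            backend = self._backends.get(candidate)
            if backend is None:
                continue
            try:
                return backend(messages)
            except Exception:
                continue  # swap to the next provider on failure
        raise RuntimeError("no registered backend succeeded")

router = ModelRouter()
router.register("local-llama", lambda msgs: f"[local] {msgs[-1]['content']}")
router.register("gpt-4o", lambda msgs: f"[openai] {msgs[-1]['content']}")

# Application code never names a vendor SDK — only the router.
reply = router.complete([{"role": "user", "content": "hi"}],
                        model="gpt-4o", fallbacks=["local-llama"])
```

The point of the design is that migration becomes a one-line `register` call instead of a rewrite: the calling code never imports a vendor SDK directly.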
Prompt Engineering That Travels
- No model-specific hacks
- Structured, portable formats
- Fallback logic for weaker models
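One way to make those three bullets concrete — the class and field names below are my own illustration, not a standard — is to keep the prompt as a plain data structure that renders to generic chat messages, spelling out worked examples only for weaker models:

```python
from dataclasses import dataclass, field

@dataclass
class PortablePrompt:
    """A model-agnostic prompt: plain instructions plus examples, no vendor syntax."""
    task: str
    examples: list = field(default_factory=list)  # (input, expected output) pairs

    def render(self, strong_model: bool = True) -> list:
        # Every provider accepts a list of role/content messages, so we
        # target that shape rather than any one vendor's template language.
        messages = [{"role": "system", "content": self.task}]
        if not strong_model:
            # Fallback path: weaker models get few-shot examples spelled out.
            for question, answer in self.examples:
                messages.append({"role": "user", "content": question})
                messages.append({"role": "assistant", "content": answer})
        return messages

prompt = PortablePrompt(
    task="Extract the city name from the sentence. Reply with the city only.",
    examples=[("I flew to Paris last week.", "Paris")],
)
terse = prompt.render(strong_model=True)    # 1 message: instruction only
fewshot = prompt.render(strong_model=False) # 3 messages: instruction + example
```

Because the prompt is data, not a vendor-specific string, the same object travels unchanged across providers.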
Universal Data Formats
- JSON schemas anyone can consume
- Embeddings from multiple providers
- Vector formats compatible with any DB
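As a sketch of what a provider-neutral record can look like — the field names here are my own convention, not a standard — an embedding plus its metadata can be stored as plain JSON that any vector DB can ingest:

```python
import json

def make_record(doc_id, text, vector, provider, model):
    """Build a portable embedding record: plain floats, explicit provenance."""
    return {
        "id": doc_id,
        "text": text,
        # Plain Python floats, not numpy types, so json.dumps always works.
        "vector": [float(x) for x in vector],
        "dim": len(vector),
        # Recording which provider/model produced the vector lets you
        # re-embed with a different provider later without guessing.
        "meta": {"provider": provider, "model": model},
    }

record = make_record("doc-1", "hello world", [0.1, 0.2, 0.3],
                     provider="local", model="all-MiniLM-L6-v2")
payload = json.dumps(record)      # portable across any store or queue
restored = json.loads(payload)    # round-trips losslessly
```

Nothing here is clever — and that's the point: any database, any language, and any future provider can consume it.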
Why Medium-Sized Businesses Must Train In-House
If you're running a medium-sized business, building in-house AI capability isn't a luxury — it's survival insurance.
- You own the models, not just the API keys.
- You keep your IP private.
- You avoid scaling costs that explode as usage grows.
Small businesses might get by with vendor tools — quick to deploy, easy to use — but they're locked into someone else's feature set. That's fine for early-stage speed, but it's a trap if you grow.
The Economics That Changed My Mind
At one point, my API and SaaS bills (across OpenAI, Claude, and others) were over $400/month. I run a lot of experiments, and the cost added up fast.
Now?
I dedicate half my local storage to open-source models — some fine-tuned on synthetic data. My total monthly AI infrastructure cost (all APIs, all SaaS, all cloud) is under $150 — with more control, more flexibility, and no lock-in.
That's the power of open source. That's the power of local models. That's why Chinese open-source LLMs are exploding in popularity — no gatekeepers, no monthly ransom, as long as you have the hardware to run them. There's now a race between hardware manufacturers and API services renting high-capacity GPUs to host open-source models.
The Real Cost of Vendor Lock-In
I've watched teams waste months migrating to new APIs. I've seen products collapse because a vendor killed a feature. I've seen developers sidelined because they couldn't adapt.
The cost isn't just technical. It's existential.
My Development Principles
- Always build abstraction layers — APIs are replaceable
- Test with multiple models — at least three
- Standardize your formats — avoid vendor-only data shapes
- Plan your escape routes — migration is inevitable
- Document dependencies — know exactly where you're locked
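For that last principle — knowing exactly where you're locked — a small audit script helps. This is a sketch (the vendor-module list is illustrative, not exhaustive) that flags direct vendor-SDK imports in Python source, the ones that should be going through an abstraction layer instead:

```python
import ast

# Illustrative list of SDKs that indicate a direct vendor dependency.
VENDOR_SDKS = {"openai", "anthropic", "boto3"}

def vendor_imports(source: str) -> set:
    """Return vendor modules imported directly in the given Python source."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            # Match on the top-level package so "openai.types" still counts.
            if name.split(".")[0] in VENDOR_SDKS:
                found.add(name.split(".")[0])
    return found

code = "import openai\nfrom anthropic import Anthropic\nimport json\n"
locked = vendor_imports(code)
```

Run over a whole codebase, the output is a literal map of your lock-in — every file that would need touching in a migration.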
The Bottom Line
Vendor lock-in is death by a thousand paper cuts. It starts with one "quick" API call, and ends with you rewriting your stack to survive.
The antidote? Build for freedom from day one.
- Freedom to experiment
- Freedom to negotiate
- Freedom to pivot
- Freedom to scale
Because the best AI systems aren't the ones that just work today — they're the ones that will still work tomorrow, next year, and in the next wave of change.