The Probabilistic Tax
- manuelnunes8
- Sep 17
- 2 min read
We've been building an AI-powered product for the past year. And when you build, you debug. We've learned something counterintuitive: our biggest performance killer isn't bad AI models or poor data quality.
It's using AI where you don't need it.
Building with AI means that shortcuts present themselves in the form of LLM interactions. Each of these shortcuts can seem appealing, since they're fast to implement, yet there's a hidden cost: every LLM interaction introduces a probabilistic point of failure.
Perhaps worse: probabilistic steps are complexity multipliers when troubleshooting in production.
The Debugging Tax
Consider a simple workflow that processes user requests. With deterministic rules, failure analysis is straightforward: check the input validation, verify the processing logic, confirm the output format. Each step either worked or it didn't.
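To make that concrete, here is a minimal sketch (in Python, with a made-up request shape and rule set, not our actual code) of what a fully deterministic version of that workflow looks like: each step either succeeds or raises a specific error, so a failure always points at exactly one place.

```python
import json

ALLOWED_ACTIONS = {"create", "update", "delete"}  # hypothetical rule set

def handle_request(raw: str) -> str:
    # Step 1: input validation -- fails the same way for the same input, every time.
    try:
        request = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"invalid JSON: {exc}") from exc
    if not isinstance(request, dict):
        raise ValueError("request must be a JSON object")
    if request.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {request.get('action')!r}")

    # Step 2: processing logic -- a pure function of the validated input.
    result = {"action": request["action"], "status": "accepted"}

    # Step 3: output format -- always the same shape.
    return json.dumps(result)
```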
Now introduce AI components at each stage. When something breaks, you face a combinatorial explosion of possibilities:
Did the LLM misinterpret the user's intent?
Was the reasoning correct but the context insufficient?
Did the model hallucinate a plausible-sounding but incorrect result?
Is this a rare edge case that worked yesterday but fails today?
Are multiple AI components interfering with each other in subtle ways?
And now the cherry on top: these failures often aren't reproducible. The same input might work fine on retry, making it nearly impossible to identify root causes or implement reliable fixes.
This is what we call the probabilistic tax – the hidden cost of replacing deterministic logic with AI when you don't actually need to.
The Minimum Viable LLM Principle in Practice
Our building process follows one simple rule: use deterministic logic wherever possible, LLMs only where necessary.
Here's our decision framework:
Deterministic Zone (no LLMs needed) - some examples:
Data Structure Operations: Standard JSON parsing, XML processing, and data transformations. Why would you ask an LLM to parse JSON when every programming language has battle-tested libraries for this?
Template Generation: Pre-defined code templates and component libraries. If you know the pattern, code the pattern.
Mathematical Calculations: Business case computations, ROI calculations, and performance metrics. Math doesn't need interpretation.
Validation Rules: Schema validation, format checking, and constraint verification. Rules are rules.
Workflow State Management: Step sequencing and process transitions. These should be predictable, not creative.
Notice how all these things could easily be thrown at GPT. You know what we mean.
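For illustration, here is roughly what staying in the deterministic zone looks like for two of the examples above, parsing and validation. The field names and rules are invented for the sketch, not taken from our product.

```python
import json
import re
from datetime import date

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simplified format check

def parse_and_validate(payload: str) -> dict:
    """Parse a JSON payload and enforce a fixed schema -- no model involved."""
    record = json.loads(payload)  # battle-tested parser, not a prompt
    if not isinstance(record, dict):
        raise ValueError("payload must be a JSON object")

    errors = []
    if not isinstance(record.get("amount"), (int, float)) or record["amount"] < 0:
        errors.append("amount must be a non-negative number")
    if not EMAIL_RE.match(str(record.get("email", ""))):
        errors.append("email has an invalid format")
    try:
        date.fromisoformat(str(record.get("invoice_date", "")))
    except ValueError:
        errors.append("invoice_date must be YYYY-MM-DD")

    if errors:
        raise ValueError("; ".join(errors))  # same input, same failure, every time
    return record
```

Boring, predictable, and trivially debuggable, which is exactly the point.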
LLM Zone (cognitive tasks only) - some examples:
Document Classification: Recognizing document types and categorizing content when formats vary significantly across business domains (see the sketch after this list)
Business Language Translation: Converting business process descriptions into technical specifications that machines can execute
Architectural Decision-Making: Applying design patterns and making trade-off decisions based on contextual understanding of requirements
Semantic Matching: Finding functional equivalence between required capabilities and existing code libraries
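And for contrast, a rough sketch of the kind of call we do hand to an LLM: classifying documents whose formats vary too much for rules. The client, model name, and label set here are assumptions for illustration, not a description of our actual stack.

```python
from openai import OpenAI  # assumed SDK; any LLM client works the same way here

LABELS = ["invoice", "contract", "support_ticket", "other"]  # hypothetical label set

def classify_document(text: str) -> str:
    """Reserve the LLM for the genuinely cognitive step: recognizing what a document is."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; use whichever model you actually run
        messages=[
            {
                "role": "system",
                "content": "Classify the document into exactly one of: "
                           f"{', '.join(LABELS)}. Reply with the label only.",
            },
            {"role": "user", "content": text[:4000]},
        ],
    )
    label = (response.choices[0].message.content or "").strip().lower()
    # Deterministic guardrail around the probabilistic step.
    return label if label in LABELS else "other"
```

Even here, the deterministic guardrail around the model's answer keeps the failure mode bounded to a known label set.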
In the deterministic zone, you're replacing human labor. In the LLM zone, you're replacing human judgment.