Combining Rule Engines and Machine Learning
Summary
A concise argument against the common ML-engineer instinct that “machine learning should replace rule engines.” The core insight:
ML and rules solve different problems, and the best systems combine both. Trying to replace working rule engines with ML models shifts focus to replicating what already works (rules) instead of using ML to improve the outcomes of the entire system. The right question is not “can ML replace this rule?” but “what can ML do that rules cannot?”
This maps cleanly onto agentic system design. In the SOUL.md operating model, the agent (Claude as COO) operates within a framework of explicit rules (decision authority boundaries, escalation triggers, communication protocols) while using intelligence for the parts that benefit from judgment, context, and pattern recognition. Rules handle the predictable; ML/LLMs handle the ambiguous.
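The rules-for-the-predictable, intelligence-for-the-ambiguous split can be sketched as a rules-first pipeline. Everything here is illustrative: the transaction fields, the rule set, and the `classify_with_model` stub are hypothetical stand-ins, not part of the SOUL.md model.

```python
# Rules-first routing: deterministic rules decide the predictable cases,
# and only unmatched inputs fall through to a model. All rule names,
# thresholds, and the model stub below are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]
    decision: str

RULES = [
    Rule("auto_approve_small", lambda tx: tx["amount"] < 50, "approve"),
    Rule("block_sanctioned", lambda tx: tx.get("country") in {"XX"}, "reject"),
]

def classify_with_model(tx: dict) -> str:
    # Placeholder for the ML/LLM call that handles the ambiguous middle.
    return "review"

def decide(tx: dict) -> tuple[str, str]:
    # Deterministic rules fire first; the model is the fallback, not the default.
    for rule in RULES:
        if rule.predicate(tx):
            return rule.decision, f"rule:{rule.name}"
    return classify_with_model(tx), "model"
```

Returning the provenance (`rule:…` vs `model`) alongside the decision is the useful design choice: it makes the rule/intelligence boundary observable in production.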
The mental model also applies to Cortex AI consulting (01-projects/phdata/index): clients often want to “add AI” by replacing existing business logic, when the higher-value play is layering AI on top of the rules to handle edge cases, surface anomalies, or tune parameters that the rules otherwise set statically.
This is the same “hierarchy of needs” thinking from 06-reference/2026-03-31-block-hierarchy-to-intelligence — you need solid deterministic foundations (rules) before probabilistic layers (ML) add value.
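One concrete shape of “ML on top of rules” is a rule whose logic stays deterministic while a parameter it previously hard-coded is fit from data. This is a minimal sketch under assumed framing (anomaly scores, a mean-plus-k-sigma cutoff); the function names are hypothetical.

```python
# The rule layer stays deterministic; only its threshold, formerly a
# static constant, is learned from historical data. The mean + k*sigma
# fit is an illustrative choice, not a prescribed method.
import statistics

def fit_threshold(baseline_scores: list[float], k: float = 3.0) -> float:
    # "Learning" step: derive the cutoff from observed normal behavior.
    mu = statistics.mean(baseline_scores)
    sigma = statistics.stdev(baseline_scores)
    return mu + k * sigma

def flag_anomaly(score: float, threshold: float) -> bool:
    # The rule itself is unchanged, auditable, and fully deterministic.
    return score > threshold
```

The system keeps the testability of a rule engine while the data-driven layer quietly improves the parameter the rule runs on.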
Open Questions
- In the agent architecture, where is the current boundary between rules and intelligence? Are there places where we are using LLM calls for things that should be deterministic rules?
- For 01-projects/squarely-puzzles/index, puzzle generation likely benefits from a hybrid approach — rules for constraint satisfaction, ML/AI for difficulty calibration and novelty detection. Worth exploring.
- How do you test hybrid rule+ML systems? The testing surface is different for each component.
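The different testing surfaces in the last question can be made concrete with stand-in components: exact, boundary-focused assertions for the rule layer versus aggregate, tolerance-based checks for the learned layer. The discount rule, the noisy model, and the error bound here are all hypothetical.

```python
# Two testing styles for a hybrid system, on toy components.
import random

def rule_discount(order_total: float) -> float:
    # Deterministic rule: test with exact equality at the boundaries.
    return 10.0 if order_total >= 100.0 else 0.0

def model_predict(x: float) -> float:
    # Stand-in for a learned model: noisy output, so pointwise
    # equality tests would be flaky by construction.
    return 2.0 * x + random.gauss(0, 0.1)

# Rule layer: exhaustive at the decision boundary.
assert rule_discount(99.99) == 0.0
assert rule_discount(100.0) == 10.0

# Model layer: statistical bound on aggregate error, with a fixed seed
# so the test is reproducible.
random.seed(0)
errors = [abs(model_predict(x) - 2.0 * x) for x in range(100)]
assert sum(errors) / len(errors) < 0.5
```

The practical consequence: hybrid systems need both suites, and a CI failure in one layer should not be diagnosed with the other layer's tools.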