Research Brief: Context Engineering Is Just Data Modeling With Better PR
Issue #4 in the content calendar. The contrarian take that the hot new AI term is just what data modelers have been doing for years.
The Thesis
“Context engineering” — the practice of designing what information gets fed to AI models, in what structure, with what meaning attached — is not a new discipline. It is dimensional modeling, semantic layers, and ontological design repackaged for the AI era. The frameworks are new. The work is not.
This is not a dismissal. It is a reframe. Data teams should recognize that they already have the skills the AI world is scrambling to invent — and they should claim that ground before someone else does.
Three Angles
Angle A: “The Vocabulary Is New. The Job Description Isn’t.” (Recommended)
The strongest version of the argument. Walk through the core claims of context engineering and map each one to an existing data modeling concept:
| Context Engineering Claim | Data Modeling Equivalent |
|---|---|
| “Structure information for optimal model consumption” | Dimensional modeling — structuring data for optimal analytical consumption (Kimball, 1996) |
| “Embed semantic meaning alongside raw data” | Semantic layers — BusinessObjects patented the Universe in 1991 |
| “Provide temporal context for decisions” | Slowly changing dimensions — tracking historical state changes |
| “Select relevant subgraphs for specific tasks” | Query design and materialized views — serving the right slice of a star schema to the right consumer |
| “Define relationships and constraints between entities” | Ontology / schema design — ERDs, conformed dimensions, bus matrices |
| “Ensure consistent definitions across consumers” | Conformed dimensions and metrics governance — “same labels mean same things across sources” |
The punchline: Kimball’s seven requirements for a DW/BI system read like a context engineering manifesto written 30 years early. “Make information easily accessible — intuitive and obvious to business users, not just developers.” Swap “business users” for “language models” and you have a modern context engineering talk.
Why this angle works: It is the most concrete and teachable. Readers can literally hold the two columns side by side and see the mapping. It respects their existing expertise rather than asking them to learn something “new.”
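The “ensure consistent definitions across consumers” row is easy to make concrete. Below is a minimal sketch, with all names (`METRICS`, `to_sql`, `to_context`) hypothetical and not drawn from any real tool, of one conformed metric definition rendered for two different consumers: a SQL engine and a language model.

```python
# Hypothetical sketch: one conformed metric definition, two consumers.
# A semantic layer and a context pipeline are doing the same job here:
# carrying a single governed definition to whoever needs it.

METRICS = {
    "net_revenue": {
        "expression": "SUM(amount) - SUM(refunds)",
        "grain": "order",
        "description": "Gross order amount minus refunds, in USD.",
    }
}

def to_sql(metric: str, table: str) -> str:
    """Render the metric for a BI consumer (a SQL engine)."""
    m = METRICS[metric]
    return f"SELECT {m['expression']} AS {metric} FROM {table}"

def to_context(metric: str) -> str:
    """Render the same definition for an AI consumer (a prompt)."""
    m = METRICS[metric]
    return (
        f"Metric `{metric}`: {m['description']} "
        f"Computed as {m['expression']} at the {m['grain']} grain."
    )

print(to_sql("net_revenue", "orders"))
print(to_context("net_revenue"))
```

Whether the output lands in a dashboard query or a context window, the definition is authored once, which is exactly the conformed-dimensions discipline.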
Angle B: “Convergent Evolution Strikes Again”
Frame through the DEDP convergent evolution lens: every generation reinvents the same patterns with new names. DWH became data lake became lakehouse became data mesh. ETL became ELT became reverse ETL. And now: data modeling has become context engineering.
The pattern is predictable. A new consumer type emerges (analysts, then dashboards, then self-service users, now AI models). The field rebrands the work of “structuring data for that consumer” with a new term. The underlying discipline — understanding what data means, how entities relate, what context is needed for good decisions — does not change.
This angle has the added benefit of the Lindy Effect argument: the older, battle-tested techniques (dimensional modeling, schema design, conformed dimensions) will persist long after “context engineering” either becomes foundational or gets replaced by the next rebrand.
Why this angle works: It positions the reader as someone who can see through hype cycles — which is the core Sanity Check brand identity.
Angle C: “What’s Actually New (And What Isn’t)”
The most balanced version. Acknowledge what context engineering adds that data modeling genuinely did not address:
- Unstructured data as first-class input. Traditional data modeling dealt in structured, tabular data. Context engineering must handle documents, images, conversation history, and tool outputs.
- Dynamic, per-request assembly. A star schema is built once and queried many ways. Context engineering assembles a unique context window for every single inference — more like a real-time materialized view than a static model.
- The reasoning layer. Context graphs and decision traces capture why something happened, not just what. Slowly changing dimensions tracked that a value changed; context engineering tries to capture the reasoning behind the change.
Then bring it back: even these “new” elements have precedent. Data virtualization was doing dynamic assembly. Event sourcing was capturing the why. The data community had the pieces — they just hadn’t assembled them for this consumer.
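The “dynamic, per-request assembly” point can be sketched in a few lines. This is a toy under stated assumptions: the keyword scoring stands in for real retrieval, and word counts stand in for a real tokenizer. The function names are illustrative.

```python
# Hypothetical sketch of per-request context assembly: for each request,
# score candidate facts for relevance and greedily pack the best ones
# into a token budget, like a materialized view built fresh per query.

def score(fact: str, query: str) -> int:
    """Crude relevance: count fact words that match longer query words."""
    keywords = {w for w in query.lower().split() if len(w) > 3}
    return sum(1 for w in fact.lower().split() if w.strip(".,:") in keywords)

def assemble_context(facts: list[str], query: str, budget: int) -> list[str]:
    """Greedy packing: most relevant facts first, until the budget is spent."""
    ranked = sorted(facts, key=lambda f: score(f, query), reverse=True)
    picked, used = [], 0
    for fact in ranked:
        cost = len(fact.split())  # stand-in for a real token count
        if score(fact, query) > 0 and used + cost <= budget:
            picked.append(fact)
            used += cost
    return picked

facts = [
    "Refund policy: refunds are issued within 14 days.",
    "Q3 revenue grew 12 percent quarter over quarter.",
    "The churn model was retrained last Tuesday.",
]
print(assemble_context(facts, "why did the customer request a refund", budget=20))
```

The star schema analogue is the `facts` list; the new part is that selection and packing happen per inference rather than once at build time.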
Why this angle works: It is the hardest to dismiss because it steelmans the opposition before delivering the contrarian reframe. But it risks diluting the punch of the headline.
Recommended Angle
Angle A as the primary structure, with a brief nod to Angle C at the close (a “what’s genuinely new” paragraph that gives the take nuance without undermining it). This matches the essay format from the content calendar and the “fundamentals over hype” editorial thread.
Content Mode
Essay. 800-1200 words. One idea explored thoroughly. This is a reframe piece — the value is in the mapping table and the historical grounding, not in breaking news.
Supporting Vault References
- 06-reference/2026-04-03-the-data-warehouse-toolkit — Kimball’s seven requirements and core dimensional concepts. The “argue about whose numbers are right” recurring theme maps directly to context engineering’s “ensure consistent definitions” goal.
- 06-reference/2026-04-04-dedp-convergent-evolution — The convergent evolution framework. “Navigate beyond the hype by focusing on underlying capabilities rather than marketing terminology.” This is the intellectual backbone of the entire argument.
- 06-reference/2026-04-04-dedp-semantic-layer-bi-olap-virtualization — The semantic layer’s 1991 origin story kills the “this is new” narrative. BusinessObjects Universe was context engineering before the term existed.
- 06-reference/2026-04-04-ontology-taxonomy-knowledge-graphs — Milan Mosny’s mapping of ontology, taxonomy, and knowledge graphs to context engineering. His definition: “Context engineering designs the pipeline selecting relevant subgraphs for specific AI decisions.” That is a query optimizer for LLMs.
- 06-reference/2026-04-04-context-graphs-trillion-dollar-opportunity — Foundation Capital’s context graphs thesis. The strongest case for what is genuinely new: decision traces as data. Use this for the Angle C nuance section.
- 06-reference/2026-04-04-building-the-event-clock — Kirk Marple’s state clock vs. event clock distinction. Slowly changing dimensions were a crude attempt at event clocks — they tracked that something changed but not why. This is the honest concession point.
- 06-reference/2026-04-04-claude-code-best-practices — Sankalp’s context engineering strategies (context rot, recitation, on-demand loading). Shows how the term is used in practice by AI practitioners — useful for grounding the argument in current usage.
- 01-projects/newsletter/sc-relaunch-essay — Already planted the seed: “There’s this new term floating around: ‘context engineering.’ I keep reading about it like it’s some brand-new discipline… didn’t we used to just call this data modeling?” Issue #4 delivers on that tease.
- 06-reference/concepts/analytics-as-craft — The craft framing. Knowing the history of your discipline is what separates practitioners from trend-followers. Context engineering is a craft opportunity for data teams, not a threat.
Draft Hooks
- “I’ve been seeing a term everywhere lately: context engineering. And I keep having the same reaction — didn’t we used to just call this data modeling?” Direct, conversational, picks up the thread from the relaunch essay. Sets up the contrarian reframe immediately.
- “In 1991, BusinessObjects patented something called the ‘Universe’ — a logical layer that translated raw data into business meaning. In 2025, we started calling that same idea ‘context engineering.’ The rename took 34 years.” Historical anchor. Specific enough to be surprising. Lets the reader do the math.
- “Every few years, the data industry discovers a new word for an old job. Data modeling became analytics engineering. Analytics engineering became semantic layer design. And now, semantic layer design has become context engineering. The resume keeps getting updated. The work hasn’t changed.” The convergent evolution angle compressed into a hook. Slightly more aggressive — good if the goal is LinkedIn shareability.
Sequencing Notes
This is issue #4 in a five-issue arc:
1. Relaunch (personal, trust-building) — done
2. The new customer (thesis statement)
3. Org chart problems (organizational lens)
4. Context engineering (contrarian reframe) — this issue
5. Fundamentals first (the manifesto)
By issue #4, the reader has context for why fundamentals matter (issues 2-3 established that agents are the new data consumer and organizational problems are about to get amplified). This issue delivers the payoff: your existing skills are the foundation. Issue #5 then closes the arc with the manifesto.
The relaunch essay already teed this up explicitly: “didn’t we used to just call this data modeling?” This issue pays off that line. Reference it directly in the opening.
Key Risk
Overclaiming. The argument breaks down if it implies context engineering is nothing but data modeling. It is more accurate to say: “the core discipline is the same; the surface area has expanded.” The Angle C nuance (unstructured data, per-request assembly, reasoning traces) is the safety valve. Include it, even briefly, or the piece will read as dismissive rather than insightful.
The tone should be: “Data teams, you already know how to do this. Here’s what’s actually new, and here’s why your existing skills are the foundation the AI world needs.” Empowering, not gatekeeping.