
Let’s be honest for a moment. If you were to feed your current system architecture into ChatGPT, what would you actually be uploading?
In most large enterprises, the answer is a “highly detailed” Visio diagram from 2019, where the database is represented by a cylinder that looks suspiciously like a tin of baked beans, and the “Cloud” is a fluffy cumulus shape that simply says “Azure”.
If you ask an LLM to analyse that, it will hallucinate. It will politely inform you that your legacy mainframe appears to be connected to a toaster.
We are entering the era of the AI Architect. We all want to ask our IDEs: “Where are the single points of failure in this checkout flow?” or “Does this new microservice violate our segregation of duty policies?”
But an LLM is only as smart as the context you give it. If you feed it pixels (diagrams), it guesses. If you feed it infrastructure as code, it gets lost in the weeds of subnets and instance types, missing the forest for the trees.
It needs a middle ground. It needs CALM: the Common Architecture Language Model.
The Rosetta Stone for Robots
CALM becomes your canonical source of truth. It turns the abstract scribbles on a whiteboard into structured, validated JSON.
When you describe your system in CALM, you aren’t drawing a picture; you are encoding a relationship.
Service A -> [Calls] -> Service B -> [Stores Data In] -> Database C.
To a human, that’s a diagram. To an LLM, that is pure, unadulterated logic.
Suddenly, your AI assistant isn’t guessing. It knows, precisely, that Service A depends on Database C. It can traverse that graph. It can reason about it. It can tell you, “Mate, if you deploy this update, you’re going to break the reporting system in Slough.”
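To make the “traverse that graph” point concrete, here is a minimal sketch in Python. The dictionary shape (nodes, relationships, source/destination fields) is a simplified illustration in the spirit of CALM’s JSON format, not the official schema:

```python
# Illustrative sketch: walking a CALM-style architecture graph.
# Field names here are simplified assumptions, not the official CALM schema.

architecture = {
    "nodes": [
        {"unique-id": "service-a", "node-type": "service"},
        {"unique-id": "service-b", "node-type": "service"},
        {"unique-id": "database-c", "node-type": "database"},
    ],
    "relationships": [
        {"source": "service-a", "destination": "service-b", "type": "calls"},
        {"source": "service-b", "destination": "database-c", "type": "stores-data-in"},
    ],
}

def downstream(arch, start):
    """Return every node reachable from `start` by following relationships."""
    edges = {}
    for rel in arch["relationships"]:
        edges.setdefault(rel["source"], []).append(rel["destination"])
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for dest in edges.get(node, []):
            if dest not in seen:
                seen.add(dest)
                stack.append(dest)
    return seen

# service-a transitively depends on service-b and database-c
print(downstream(architecture, "service-a"))
```

Nothing clever is happening here, and that is the point: once the relationships are structured data rather than pixels, impact analysis is an ordinary graph walk an AI assistant can perform deterministically.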
Grounding the AI in Reality (and the 2026 Roadmap)
We aren’t just theorising here. If you look at our CALM 2026 Roadmap (Issue #2075 for those following along on GitHub), you’ll see we are explicitly widening the net.
We’re moving beyond just GitHub Copilot. We’ve already added support for Claude Code (Issue #2038) and AWS Kiro (Issue #2003). Why? Because different models have different strengths, but they all share the same hunger for structured data.
The “Story” vs. The “Inventory”
Even more exciting is the shift towards Flow-First Modelling (Issue #1875).
Historically, enterprise architects have been obsessed with “boxes”: listing all the servers and apps (the inventory). But AI (and, frankly, the business) thinks in “flows.”
User clicks buy -> Order Created -> Inventory Checked -> Payment Taken
By prioritising these flows in the CALM specification, we are giving the AI the “story” of your system. It allows the LLM to understand intent, not just existence.
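A flow-first model can be sketched as an ordered list of business steps, each mapped to the node that performs it. The field names below (`transitions`, `step`, `node`) are illustrative assumptions, not the official CALM flow schema:

```python
# Hypothetical sketch of a flow-first model. Names are illustrative only.
checkout_flow = {
    "name": "checkout",
    "transitions": [
        {"step": "Order Created", "node": "order-service"},
        {"step": "Inventory Checked", "node": "inventory-service"},
        {"step": "Payment Taken", "node": "payment-service"},
    ],
}

def narrate(flow):
    """Render the flow as the 'story' an LLM would receive as context."""
    steps = [f'{t["step"]} ({t["node"]})' for t in flow["transitions"]]
    return " -> ".join(steps)

print(narrate(checkout_flow))
# Order Created (order-service) -> Inventory Checked (inventory-service) -> Payment Taken (payment-service)
```

The inventory tells the model what exists; the flow tells it why, and in what order, which is exactly the intent an LLM needs to reason about a change.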
Time Travel for Architects
We are also tackling Architecture Timelines (Issue #1762). Because systems aren’t static; they are living, breathing (and occasionally dying) beasts.
Imagine asking your AI: “Show me what the architecture looked like last Tuesday before everything went pear-shaped.” With CALM’s timeline support, the AI can diff the architecture state just like it diffs code. It understands evolution. It can see that you added a dependency on a deprecated API three days ago and flag it before you even merge the PR.
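Diffing architecture like code is mechanically simple once each snapshot is structured data. A minimal sketch, assuming each snapshot is reduced to a set of (source, destination) relationship pairs (not the official timeline schema):

```python
# Illustrative sketch: diffing two architecture snapshots as sets of edges.
# Snapshot shape is a simplified assumption, not the official CALM timeline schema.

def rel_set(arch):
    """Reduce a snapshot to a comparable set of dependency pairs."""
    return {(r["source"], r["destination"]) for r in arch["relationships"]}

last_tuesday = {"relationships": [
    {"source": "checkout", "destination": "payments-v2"},
]}
today = {"relationships": [
    {"source": "checkout", "destination": "payments-v2"},
    {"source": "checkout", "destination": "legacy-reporting-api"},  # new dependency
]}

added = rel_set(today) - rel_set(last_tuesday)
removed = rel_set(last_tuesday) - rel_set(today)
print("added:", added)      # the new dependency an AI reviewer could flag
print("removed:", removed)
```

Set difference on structured snapshots is all an assistant needs to say “this dependency appeared three days ago” with certainty rather than guesswork.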
Stop Feeding the Robot Scraps
If we want AI to be useful in Enterprise Architecture, we have to stop treating it like a magic 8-ball and start treating it like a junior engineer. You wouldn’t hand a junior engineer a napkin sketch and say “ensure this is compliant.” You’d give them the specs (well, maybe).
CALM is that spec.
It is the prompt engineering language for your entire infrastructure. It is the difference between an AI that hallucinates a security policy and an AI that enforces one.
So, put down the mouse, close the presentation software, and come help us build the standard. We’ve got issues open, PRs waiting for review, and a community that actually enjoys discussing metadata schema.
Get involved at calm.finos.org or find us on GitHub at finos/architecture-as-code.