The midmarket EA toolchain dilemma
There is a segment of the market — organisations with roughly 200 to 5,000 employees — where the need for structured enterprise architecture practice is real, but the tooling designed to support it does not fit.
These organisations have enough IT complexity to need a coherent approach to portfolio management, capability mapping, and architectural governance. They often have a single enterprise architect, or an IT director wearing the EA hat, trying to provide that coherence. And when they look for tooling to support the work, they encounter a familiar problem: the tools available either belong to a different era of software or are designed for organisations ten times their size.
This is the midmarket EA toolchain dilemma. It is not a niche problem. It affects a very large number of organisations, and it shapes the quality of IT decision-making in ways that have real consequences.
What overshoot looks like
Enterprise architecture platforms built for large enterprises have genuinely impressive capabilities. They can model complex multi-domain architectures, support hundreds of concurrent users across distributed teams, integrate with dozens of data sources, and produce detailed governance reports across thousands of components.
For a company with 400 employees and one enterprise architect, almost none of that is relevant. What is relevant is the ability to maintain a clear picture of the application landscape, connect it to business capabilities, support a small number of critical portfolio decisions each year, and communicate architectural choices to stakeholders who are not architects.
When a midmarket organisation adopts a platform designed for ten times its scale, several things typically happen:
The onboarding horizon extends. Large EA platforms often quote implementation timescales of six to eighteen months before the tool is genuinely producing value. The effort of configuration, data modelling, and stakeholder adoption does not shrink in proportion to organisation size. The EA practitioner spends most of their time managing the tool rather than doing EA.
The model becomes the enemy. Large-scale EA platforms tend to reward comprehensive modelling. Every application has dozens of attributes. Every relationship between components is typed and documented. For a team of one or two people, maintaining that level of detail is not sustainable. The model quickly falls behind reality, which destroys its credibility and utility.
The ROI calculation does not close. Licence costs that are appropriate for a hundred-user deployment are prohibitive for a single practitioner. The value that a large enterprise extracts from a comprehensive EA platform — cross-functional alignment, enterprise-wide decision support, compliance and audit capabilities — does not exist at the same scale in a midmarket organisation.
The spreadsheet trap
Faced with the overshoot problem, many midmarket organisations default to the opposite extreme: spreadsheets and slide decks. This is understandable. The tools are free, they are flexible, and every stakeholder can open them.
But spreadsheets are not EA tools. They lack version control, they do not preserve relationships between entities, they cannot surface insights across the portfolio, and they break down the moment you need to do anything other than list things. A spreadsheet-based portfolio may look like EA work, but it functions as a snapshot — static, quickly outdated, and disconnected from the decisions that actually matter.
The space between the enterprise platform and the spreadsheet is large, and it is largely unoccupied.
What right-sized looks like
A tool designed for the midmarket EA context would look different from both ends of this spectrum. It would be opinionated about scope rather than comprehensive by default, because comprehensiveness is a trap when you have limited capacity to maintain it.
The core of a right-sized EA tool is a structured application inventory — not a flat list, but a registry of applications with enough context to support portfolio decisions. That context includes which business capabilities each application supports, a basic health assessment, strategic alignment scoring, and cost data. Not a hundred attributes per application — a focused set that drives the decisions that actually need to be made.
Connected to the application inventory is a capability map — not a detailed process model, but a stable representation of the business capabilities that the strategy depends on. The purpose of the capability map is not exhaustive documentation. It is a shared vocabulary that allows IT and business stakeholders to discuss portfolio decisions without getting lost in technical detail.
The third component is a decision log — a place where architectural choices are recorded with enough rationale that future decision-makers can understand why things are the way they are. This is modest in ambition but significant in practice. Organisations that maintain a decision log find that it reduces the effort required to onboard new team members, reduces the frequency of revisiting settled decisions, and provides a credible basis for explaining IT choices to non-technical stakeholders.
These three components — structured application inventory, capability map, decision log — do not require a large platform. They require a tool that is designed around them, not a platform that accommodates them as one subset of a much larger feature set.
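To make the shape of these three components concrete, here is a minimal sketch in Python. The field names, scoring scales, and the example portfolio view are illustrative assumptions, not a prescribed schema or any particular tool's data model:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Capability:
    """One entry in the capability map: a shared vocabulary item, not a process model."""
    name: str  # e.g. "Order Management" -- illustrative

@dataclass
class Application:
    """One entry in the structured application inventory: a focused set of
    decision-driving attributes, not a hundred fields."""
    name: str
    capabilities: list[str]    # names of capabilities this application supports
    health: int                # 1 (poor) to 5 (healthy) -- assumed scale
    strategic_alignment: int   # 1 (low) to 5 (high) -- assumed scale
    annual_cost: float

@dataclass
class Decision:
    """One entry in the decision log: the choice plus enough rationale
    for future readers to understand why things are the way they are."""
    title: str
    rationale: str
    decided_on: date
    affected_apps: list[str] = field(default_factory=list)

def attention_list(apps: list[Application]) -> list[str]:
    """One simple portfolio view these structures enable: applications that
    are strategically important but in poor health."""
    return [a.name for a in apps
            if a.strategic_alignment >= 4 and a.health <= 2]
```

The point of the sketch is how little structure is actually needed: three small record types and the links between them are enough to support views like `attention_list`, which a flat spreadsheet cannot maintain reliably.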
The governance dimension
One area where midmarket organisations consistently underinvest is governance. Not governance in the sense of committees and formal approvals — that model does not scale down well and tends to produce friction without value. Governance in the sense of consistent decision-making: a shared understanding of how architectural choices get made, who is consulted, and what record is kept.
A right-sized tool supports governance by making the process lightweight enough to actually be followed. If submitting a project for architectural review requires filling in a fifteen-field form and waiting three weeks, the process will be bypassed. If it requires answering four questions and produces a decision within a week, it will be used.
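As a sketch of what a four-question intake could look like in practice — the question set below is invented for illustration, not drawn from any specific governance framework:

```python
# A minimal review-intake sketch: four questions, not a fifteen-field form.
# The questions themselves are illustrative; a real practice would choose its own.

REVIEW_QUESTIONS = [
    "Which business capability does this project affect?",
    "Which applications does it add, change, or retire?",
    "What decision do you need from the architecture review?",
    "By when do you need that decision?",
]

def missing_answers(answers: dict[str, str]) -> list[str]:
    """Return the questions still lacking a substantive answer.
    An empty list means the submission is complete and ready for review."""
    return [q for q in REVIEW_QUESTIONS if not answers.get(q, "").strip()]
```

The design choice is that completeness checking is the only gate: if all four answers are present, the submission enters review rather than waiting on further form-filling.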
The midmarket EA practitioner is almost always trying to create governance structures that are taken seriously by peers who do not report to them and who have their own pressures and priorities. The tool that supports this work needs to reduce friction, not add it.
The human factor
The midmarket EA context has a human dimension that is easy to overlook. In most organisations of this size, the EA function is one or two people. They are typically technically skilled, often experienced in larger organisations, and frequently frustrated by the gap between the rigour they can apply in their own work and the influence they can actually have on organisational decisions.
The tooling that serves this context needs to be something they can own and maintain without significant support. It needs to produce outputs — portfolio views, capability maps, strategic alignment analysis — that are legible to stakeholders who are not architects. And it needs to connect the EA practitioner's work to the strategic conversations that actually drive decisions, rather than positioning it as a technical service to be consulted after the fact.
The dilemma is real, but it is not permanent. The organisations that find a tooling approach that fits — lightweight enough to maintain, structured enough to produce genuine insight, connected enough to strategic context to influence decisions — find that the EA practice delivers value that is visible to the business. That visibility creates the conditions for the practice to grow.
Atlas is built for this context: structured enough to be a genuine EA tool, lightweight enough for a team of one or two to own and maintain. Try Atlas →