Why Legal Tech Keeps Disappointing
If you've led a law firm long enough, you've lived through the cycle. A new tool is evaluated. The demo is impressive. The feature set addresses a real pain point. The firm signs the contract, rolls it out — and somewhere between six and eighteen months later, the tool is either underused, quietly abandoned, or limping along delivering a fraction of what was promised. The firm blames the vendor, or blames adoption, or blames the timing. Then the cycle starts again with the next tool.
The conventional explanations — wrong vendor, immature technology, lawyers resistant to change — all contain some truth. But they don't explain the consistency of the pattern. When the same outcome repeats across different firms, different tools, and different practice areas, the problem isn't the individual decisions. It's something structural.
That structural problem is architectural. Most legal technology products are built on foundational assumptions about how firms operate, how data should be managed, and how systems should communicate — and those assumptions are where the failures originate. Understanding what's actually happening at the technical layer, beneath the polished demo, gives firm leaders a more precise vocabulary for evaluating what they're buying and why it keeps falling short.
The closed data problem
The most consequential architectural decision most legal tech vendors make is also the one firms are least likely to evaluate: how the product handles the firm's data.
Most legal tech products ingest the firm's information into proprietary data models — internal structures designed to serve the vendor's application, not the firm's broader operational needs. Documents, matter details, client records, and work product flow into the tool. Getting that data back out — in a structured, usable form that other systems can consume — ranges from difficult to effectively impossible, depending on the product.
This is not an accident. It's a business model. Closed data architectures create switching costs. The more of a firm's operational information lives inside a vendor's proprietary system, the harder it is to leave, and the more leverage the vendor has at renewal. From the vendor's perspective, this is rational. From the firm's perspective, it's a trap that compounds over time.
The operational consequences are direct. When a firm's data is locked inside a product that doesn't expose it in structured, accessible formats, every other system that needs that data has to work around the gap. The case management system can't read what the document automation tool knows. The billing platform can't access what the intake system captured. The AI tool that could reason across the full matter picture can't see half of it because the data is sealed inside applications that weren't designed to share.
This is one of the primary generators of the integration tax I described earlier in this series — and it starts not with the firm's decisions, but with how the products themselves are engineered. The firm didn't choose to fragment its data. It adopted tools that fragment data by design.
What to look for instead: open data models that store information in standard, accessible formats. Products that treat the firm's data as the firm's asset — exportable, queryable, and available to other systems without requiring the vendor's permission or a custom extraction project. The architecture of a product's data layer tells you more about how it will perform in your ecosystem than any feature comparison ever will.
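To make "exportable and queryable" concrete, here is a minimal sketch in Python of what an open data layer implies: records stored in a standard, structured format that any other system can filter or export without the vendor's involvement. The field names and helper functions are illustrative, not any particular product's schema.

```python
import json

# Illustrative matter records in a standard, structured format (JSON-compatible).
# The field names are hypothetical; the point is that any system can read them.
matters = [
    {"id": "M-1001", "client": "Acme Co.", "status": "open", "practice": "litigation"},
    {"id": "M-1002", "client": "Baker LLC", "status": "closed", "practice": "real estate"},
]

def query(records, **criteria):
    """Filter records on simple field equality -- no vendor permission required."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

def export_json(records):
    """Full export at any time, in a format every other system can consume."""
    return json.dumps(records, indent=2)

open_matters = query(matters, status="open")
print([m["id"] for m in open_matters])  # -> ['M-1001']
print(export_json(open_matters))
```

The contrast with a closed architecture is exactly this: in an open model, the query and the export are trivial one-liners the firm controls; in a closed model, each one is a negotiation with the vendor.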
Batch processing in a real-time world
In my first essay, I described a firm where the integration between an AI platform and the case management system synced only overnight — a new file uploaded in the morning wasn't available to the AI tool until the following day. That delay wasn't a bug or a misconfiguration. It was a direct reflection of the integration architecture: batch processing.
Batch processing means data moves between systems on a schedule — typically nightly, sometimes hourly — rather than in response to events as they occur. It is cheaper to build, simpler to maintain, and far easier for a vendor to implement than the alternative. It is also fundamentally misaligned with how legal work actually moves.
Cases don't progress on overnight schedules. A medical record arrives at two in the afternoon and a case manager needs to act on it now. A settlement offer comes in and the attorney needs the full, current picture of the matter to respond intelligently. A filing deadline triggers a cascade of tasks that need to begin immediately. In each of these scenarios, a system that updates overnight is a system that's functionally out of date for the entire working day.
The alternative is event-driven architecture — systems that communicate in real time, triggered by the events themselves. When a document is uploaded, the downstream systems know immediately. When a status changes, every dependent process updates. When new data enters the ecosystem, it's available everywhere it needs to be, not hours or a day later.
Event-driven architecture is harder to build and more demanding to maintain. It requires robust message handling, careful error management, and infrastructure that can process updates continuously rather than in scheduled batches. This is exactly why most legal tech vendors don't invest in it — batch processing passes the procurement evaluation just as easily, and the limitation only becomes apparent after the contract is signed, when the firm discovers that "integrated" means "syncs overnight."
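The two integration styles can be contrasted in a few lines of Python. This is a deliberately simplified sketch, and the class and event names (`EventBus`, `BatchSync`, `document.uploaded`) are hypothetical, not a real product's API.

```python
class EventBus:
    """Event-driven: downstream systems subscribe and are notified immediately."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Fires the moment the event occurs -- no schedule involved.
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

class BatchSync:
    """Batch: changes queue up and move only when the scheduled job runs."""
    def __init__(self):
        self.pending = []

    def record_change(self, change):
        self.pending.append(change)  # invisible downstream until the nightly run

    def run_nightly_job(self):
        moved, self.pending = self.pending, []
        return moved

# Event-driven: the upload itself triggers the notification.
seen_by_ai_tool = []
bus = EventBus()
bus.subscribe("document.uploaded", seen_by_ai_tool.append)
bus.publish("document.uploaded", {"matter": "M-1001", "file": "medical_record.pdf"})
print(seen_by_ai_tool)  # the downstream system already has the event

# Batch: the same change sits in a queue until the scheduled job moves it.
sync = BatchSync()
sync.record_change({"matter": "M-1001", "file": "medical_record.pdf"})
print(sync.pending)  # queued, not delivered -- the working-day gap lives here
```

The code difference looks small; the operational difference is the entire working day between "the moment it happened" and "the next scheduled run."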
The question to ask is straightforward: when data changes in one system, how quickly does every connected system reflect that change? If the answer involves the words "nightly," "scheduled," or "batch," the firm is buying an integration that will feel broken in practice — regardless of how smoothly it performed in the demo.
The API illusion
"We have an API" has become the legal tech industry's equivalent of "we have an integration." It's technically true, almost always incomplete, and tells the firm virtually nothing about what the product can actually do within a connected ecosystem.
The range of what "we have an API" means in practice is enormous. At one end of the spectrum: a fully documented REST API with comprehensive endpoints, webhook support for real-time event notifications, structured data responses, granular permissions, and active maintenance as the product evolves. At the other end: a handful of barely documented endpoints that return inconsistently formatted data, offer no event-driven capabilities, and break without warning when the vendor updates their platform.
Both of these get the same checkbox on the RFP.
The distinction matters because the API is the surface through which every other system in the firm's stack communicates with the product. Its quality, comprehensiveness, and reliability determine whether the product can participate in a connected operational environment or whether it's functionally a standalone island that requires human bridges to everything around it.
Several dimensions separate a serious API from a checkbox API, and firms — or their technical advisors — should be asking about each of them.
Coverage is the first. What percentage of the product's functionality and data is accessible through the API? Many legal tech APIs expose only a fraction of what's available through the user interface. If the firm's workflow requires automated access to a capability that only exists in the UI, the integration hits a wall and a human has to step in.
Event support is the second. Can the API notify external systems when something changes — a new document uploaded, a status updated, a task completed — or does the external system have to repeatedly poll the API to check for changes? Polling is the batch processing of API architecture: it works, it's delayed, and it means the firm's systems are always slightly behind reality.
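The distinction between push and poll can be sketched directly. Everything here is hypothetical simulation code, not a real vendor's interface: `register_webhook` and `poll_for_changes` stand in for the two capabilities a firm should ask about.

```python
class VendorAPI:
    """Simulated vendor system supporting both notification styles."""
    def __init__(self):
        self._events = []
        self._webhooks = []

    def register_webhook(self, callback):
        """Event support: the vendor calls us the moment something changes."""
        self._webhooks.append(callback)

    def simulate_change(self, event):
        """Something happens inside the vendor's product."""
        self._events.append(event)
        for cb in self._webhooks:
            cb(event)  # pushed immediately to every subscriber

    def poll_for_changes(self, since_index):
        """Polling: we ask repeatedly and get whatever accumulated since last time."""
        return self._events[since_index:]

api = VendorAPI()
pushed = []
api.register_webhook(pushed.append)

api.simulate_change({"type": "document.uploaded", "id": 1})
print(pushed)                   # webhook subscriber already knows
print(api.poll_for_changes(0))  # a poller only learns this on its next pass
```

Both calls return the same data; the difference is that the webhook subscriber knew at the moment of the event, while the poller's knowledge is only ever as fresh as its last request.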
Data structure is the third. Does the API return well-structured, consistent data in standard formats, or does the firm's technical team need to write custom parsing logic to make sense of the responses? Poorly structured API responses create fragile integrations that break when the vendor makes even minor changes to their platform.
Stability and versioning is the fourth. Does the vendor maintain backward compatibility when they update their API? Do they version their endpoints so that existing integrations continue to function when new features are added? A vendor that makes breaking changes to their API without warning is a vendor whose integrations will require constant maintenance — an ongoing cost the firm didn't budget for.
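A minimal sketch shows why versioned endpoints protect existing integrations. The paths and response shapes below are hypothetical, chosen only to illustrate the principle.

```python
def handle_request(path):
    """Simulated vendor server that keeps /v1/ frozen while /v2/ evolves."""
    routes = {
        # v1 response shape is frozen: integrations written against it keep working.
        "/v1/matters/M-1001": {"id": "M-1001", "client_name": "Acme Co."},
        # v2 is free to restructure fields because it lives at a new path.
        "/v2/matters/M-1001": {"id": "M-1001", "client": {"name": "Acme Co."}},
    }
    return routes.get(path)

# The firm's integration was built against v1. It keeps functioning even
# after the vendor ships v2 with a different response shape.
legacy = handle_request("/v1/matters/M-1001")
print(legacy["client_name"])  # -> Acme Co.
```

Without that discipline, the vendor's v2 restructuring would have silently broken every integration that expected `client_name`, and the firm would be paying for emergency maintenance it never budgeted.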
The firm that asks these questions during evaluation will learn far more about how a product will actually perform in their environment than any feature demo can reveal. The firm that accepts "we have an API" at face value will discover the limitations after the contract is signed — when the cost of switching is highest and the vendor's incentive to address the gaps is lowest.
What better architecture actually requires
The pattern across closed data, batch processing, and hollow APIs is consistent: legal tech products are primarily architected to serve the vendor's needs — lock-in, development simplicity, lower infrastructure costs — rather than the firm's operational reality. Describing what better architecture looks like is not a product pitch. It's a set of engineering principles that any firm should be demanding and any serious vendor should be building toward.
The first principle is open data. The firm's data belongs to the firm. It should be stored in structured, standard formats. It should be fully exportable at any time without requiring a custom project. And it should be accessible — in real time, through well-documented interfaces — to every other system in the firm's ecosystem that needs it. A product that ingests the firm's data and holds it hostage behind a proprietary model is not a partner. It's a dependency.
The second principle is event-driven communication. Systems should talk to each other in response to events, not on schedules. When something happens in one part of the firm's operational environment — a document arrives, a deadline is triggered, a matter status changes — every downstream system should know immediately. This requires investment in message infrastructure that most vendors have chosen not to make. But it's the difference between a connected ecosystem and a collection of tools that happen to share a network.
The third principle is platform thinking over point-solution thinking. A platform is designed from the ground up to serve as a foundation that other capabilities can build on. A point solution is designed to perform a single function well. The difference isn't scope — it's architecture. A platform exposes its data, its events, and its functionality as building blocks. A point solution consumes inputs and produces outputs, with little regard for how those outputs connect to anything else. The legal tech market has been dominated by point-solution thinking for two decades. The demands of AI, operational maturity, and modern firm management are exposing the limits of that approach.
The fourth principle is that integration is a core product concern, not an afterthought. In most legal tech companies, integration is handled by a small team — or a single engineer — whose work is subordinate to feature development. New features win deals. Better integrations don't appear on feature comparison matrices. This internal priority structure ensures that the integration layer remains the weakest part of most products. The vendors that treat integration as a first-class engineering concern — investing the same rigor and resources they invest in their core features — will build products that actually survive contact with the firm's operational reality. The rest will keep winning demos and losing deployments.
The buy side has to lead
The legal tech market isn't going to correct these architectural problems on its own. Vendors respond to what wins deals, and as long as procurement processes evaluate features in isolation, reward polished demos, and accept "we have an API" without interrogation, the market will keep producing tools that disappoint in deployment.
The change has to come from the buy side. Firm leaders and their technical advisors need to start asking architectural questions with the same rigor they apply to feature evaluation. How is our data stored? Can we get it back out? How do your integrations actually work — real-time or batched? What does your API expose, and how do you maintain it? These aren't technical trivia. They're the questions that determine whether a tool will deliver value in the firm's actual operational environment or whether it will become the next entry in the cycle of adoption and disappointment.
This isn't about becoming a technology company or hiring a team of engineers. It's about developing enough architectural literacy to distinguish between a product that will participate in the firm's operational ecosystem and one that will sit alongside it as another silo. The difference between those two outcomes isn't visible in a demo. It lives in the technical decisions the vendor made long before the sales team walked into the room.
The firms that develop this literacy will make better technology investments. They'll spend less time in the adoption-disappointment cycle. They'll build operational environments where tools — including AI — actually deliver on their promise. And they'll find, over time, that the question was never whether legal technology works. It was whether the architecture underneath it was ever designed to.