The Leverage Line: What AI Actually Does (And What Remains Human)


The legal profession's conversation about artificial intelligence has collapsed into a false dichotomy. In one camp: AI will automate legal work, eliminate roles, and fundamentally shrink the workforce. In the other: AI is unreliable, can't grasp legal nuance, and won't meaningfully change how serious legal work gets done.

Neither position is useful, because neither is really about AI. They're about absolutes — and absolutist thinking is a poor foundation for strategic decisions.

Consider an analogy from a different craft. When power tools arrived in carpentry, they didn't replace carpenters. They didn't render hand-tool skills irrelevant. What they did was change the leverage equation. The carpenter who understood which tool to reach for — and when precision hand work was still the right call — could produce better results, in less time, at greater scale than one who refused to adapt. The power tool didn't replace the craftsman's judgment. It rewarded it.

The same dynamic is playing out in legal practice right now. AI is not a replacement for legal professionals, and it is not a parlor trick. It is a power tool — one that dramatically amplifies the capability of the practitioner who knows how to deploy it. While some value comes from the tool itself, it is exponentially multiplied in the hands of a tactician: someone who understands what AI does well, where human judgment remains essential, and how to build the workflow that maximizes the leverage between the two.

That's the lens worth adopting. Will some legal roles be displaced by AI? Yes — particularly those defined primarily by tasks that AI handles well, like routine document processing, standard-form drafting, and repetitive data extraction. That's not a comfortable truth, but ignoring it isn't a strategy. The more consequential shift, though, isn't which roles disappear. It's how the skills that define value in legal practice are changing. The professionals who thrive won't just be the ones with the deepest legal knowledge. They'll be the ones who pair that knowledge with the ability to deploy AI effectively — who understand how to direct it, when to trust it, when to override it, and how to build the workflows that turn its output into excellent legal work. Legal judgment isn't becoming less important. But judgment alone, without the operational fluency to leverage the tools now available, will no longer be enough.

What AI can do today — and what most firms haven't seen yet

Most law firms that have experimented with AI have encountered a version of it that is disconnected from their systems, operating on whatever documents get manually fed into it, producing output that then has to be manually carried back into the workflow. Based on that experience, many firm leaders have concluded that AI is modestly useful — a convenience, not transformative.

That conclusion is understandable. It's also based on an experience that was set up to underwhelm — not intentionally, but structurally.

The capabilities of current AI models — even before accounting for the pace of improvement — are meaningfully different when the tool has access to the right data and the right infrastructure. The gap between what most firms are experiencing and what is already technically possible is not a capability gap. It is an infrastructure gap, and it is the same gap I explored in my first essay on this site.

Consider three capabilities that firms are already encountering in limited form, and what they look like when the infrastructure catches up.

First-draft generation is the most visible. Most firms that have tried AI drafting have experienced it as a generic starting point — better than a blank page, but clearly a template-level output that requires heavy revision. That's what happens when an AI tool drafts from a prompt and a single uploaded document. When the same tool has access to the full matter file, the structured case data in the CMS, the firm's own precedent documents, and the specific context of where a case stands procedurally, the output shifts from generic to genuinely informed. The draft reflects the specific matter, the specific client, and the firm's own standards. It still requires attorney review — every first draft does — but the distance between the AI's starting point and the finished product shrinks dramatically.

Pattern recognition scales in the same way. A standalone AI tool analyzing a single document can identify relevant passages and extract key information. That's useful but limited. When that same capability operates across an entire case file — medical records, correspondence, filings, deposition transcripts, insurance documents — it can reason across the full picture. It can flag inconsistencies between a treatment record and a billing statement. It can identify where a new piece of information intersects with existing case strategy. It can surface connections across hundreds of pages that no individual reviewer would catch, not because the reviewer isn't smart enough, but because the volume exceeds what human attention can hold simultaneously.

I can speak to this from direct experience. Our firm reviewed a potential case that, on initial evaluation, had some redeeming characteristics but didn't appear to be a strong fit. We referred it out, and the receiving firm accepted it. Some time later, the attorney we'd referred it to informed us of a significant seven-figure settlement. When we asked what drove the outcome, the answer was striking: AI analysis of the medical records had identified that the client was suffering from complex regional pain syndrome — a finding that the human reviewer had missed and that fundamentally changed the value and trajectory of the case. The legal judgment to pursue the case aggressively, the client relationship, the negotiation strategy — all of that was human. But the critical insight that made it possible came from a machine reading patterns across a volume of medical data that, under normal time constraints, exceeded what any human reviewer could realistically catch.

Data extraction is where the infrastructure dependency becomes most concrete. AI can read a document and extract structured information — policy limits, treatment dates, provider names, settlement positions — faster and more consistently than manual entry. But extraction alone only solves half the problem. The other half is what happens to that data. If a human still has to take the extracted information and key it into the CMS, rename the file, notify the right team members, and update the matter status, the firm has replaced one manual step with a slightly faster manual step. The integration tax remains. When the infrastructure exists for AI to extract the data and write it to the correct systems, trigger the appropriate notifications, and update the matter record — without a human serving as the relay — the leverage is transformational.
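To make "extraction plus write-back" concrete, here is a deliberately simplified sketch. Everything in it is hypothetical — the `CaseManagementSystem` class, its methods, and the document format are invented for illustration, not any real vendor's API — but it shows the structural point: the extracted value flows into the system of record and triggers the notification with no human relay in between.

```python
import re
from dataclasses import dataclass, field

@dataclass
class CaseManagementSystem:
    """Stand-in for a real CMS API; stores matter records in memory."""
    matters: dict = field(default_factory=dict)
    notifications: list = field(default_factory=list)

    def update_matter(self, matter_id: str, **fields) -> None:
        self.matters.setdefault(matter_id, {}).update(fields)

    def notify(self, team: str, message: str) -> None:
        self.notifications.append((team, message))

def extract_policy_limit(document_text: str):
    """Toy 'extraction' step: pull the dollar figure after 'Policy limit:'.
    A real pipeline would use an AI model here, not a regex."""
    match = re.search(r"Policy limit:\s*\$([\d,]+)", document_text)
    return int(match.group(1).replace(",", "")) if match else None

def ingest_document(cms: CaseManagementSystem, matter_id: str, text: str) -> None:
    """Extraction and write-back in one flow: the extracted value lands in
    the matter record and fires a notification without manual re-keying."""
    limit = extract_policy_limit(text)
    if limit is not None:
        cms.update_matter(matter_id, policy_limit=limit, status="limits confirmed")
        cms.notify("case-team", f"Matter {matter_id}: policy limit ${limit:,} recorded")

cms = CaseManagementSystem()
ingest_document(cms, "2024-0117",
                "Declarations page. Policy limit: $250,000 per occurrence.")
print(cms.matters["2024-0117"])  # {'policy_limit': 250000, 'status': 'limits confirmed'}
```

The design point is the last function: extraction and system update are one operation, not two hand-offs. Remove the write-back half and you are back to a human relay, which is exactly the integration tax the paragraph above describes.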

The point is worth stating plainly: most firms are evaluating AI based on a version of it that is hobbled by the very infrastructure problems these essays have been describing. They're judging a power tool by watching someone use it without plugging it in — and concluding it doesn't work.

Where the human role transforms

This is where the conversation usually turns to a reassuring list of things AI can't do — and where I want to be more honest than that.

The standard comfort narrative goes something like this: AI can't exercise ethical judgment, can't develop novel legal arguments, can't build client trust, can't bear professional accountability. Therefore lawyers are safe. That narrative is increasingly incomplete, and clinging to it is a poor foundation for strategic decisions.

The more honest version: AI is encroaching on every one of those categories. And that's not a threat — it's a shift that makes the distinctly human contribution more concentrated and more valuable, not less relevant.

Take ethical reasoning. AI can identify ethical issues, cite the relevant rules of professional conduct, flag potential conflicts, and reason through the analysis with a consistency that doesn't degrade under time pressure or billable-hour stress. In some respects, AI is more reliable at spotting ethical issues than a busy practitioner who might miss a conflict check at the end of a long week. Where the human role transforms is not in performing the ethical analysis — AI is increasingly competent at that — but in bearing the weight of ethical accountability. An AI can tell you there's a conflict. It cannot bear the professional consequences of how you respond to that information. The reasoning is increasingly shared. The responsibility is not.

Take legal argumentation. A common reassurance is that AI can't produce novel legal reasoning. That's technically true but practically misleading. The vast majority of legal work doesn't require novel arguments. It requires the best available argument applied to specific facts — rigorous selection and application of established frameworks. AI is exceptionally good at this, often more comprehensive than an associate working from memory and limited research time. The human value isn't in constructing arguments from scratch — it's in the strategic judgment about which arguments to deploy, how to sequence them, when to press and when to concede, and how to adapt when the unexpected happens. That layer of strategic orchestration is where the attorney's expertise compounds. And it becomes more impactful, not less, when the underlying analytical work is stronger.

Take client trust. AI cannot have the conversation a client needs to have with their attorney — the one where they feel heard, understood, and confident that someone is genuinely invested in their outcome. But AI can make the attorney dramatically better at every interaction that builds that trust. The attorney who walks into a client meeting with an AI-assembled case summary, fully current, with strategic options already framed, is an attorney who inspires more confidence — because they appear (and are) more prepared, more attentive, and more on top of the details. And increasingly, clients may prefer AI for certain interactions — a status update at ten on a Saturday night, a quick answer about what happens next in the process — precisely because it's available when a human isn't. The relationship remains human. The infrastructure that supports the relationship doesn't have to be.

Take accountability. AI can pass the bar exam. The capability threshold has been crossed; the regulatory framework hasn't caught up. Whether AI will eventually hold something resembling professional licensure is a question for the profession to work out over time. But today, a human being signs the pleading, stands before the court, carries the malpractice insurance, and bears the reputational consequences when something goes wrong. That isn't a statement about AI's limitations — it's a statement about how the profession is currently structured. And it means that for the foreseeable future, the human role includes something AI structurally cannot provide: a person who is professionally and personally accountable for the outcome.

The pattern across all four of these is the same. The human role isn't disappearing. It's concentrating — moving toward judgment, accountability, relationship, and strategic decision-making. The work surrounding those things — the research, the drafting, the extraction, the analysis — is increasingly shared with or handed to AI. And that concentration makes the human contribution more valuable per hour, not less.

The leverage shift

This brings us to the argument that the false dichotomy obscures entirely: AI won't eliminate every role — but it will fundamentally change the leverage ratio.

Consider the case manager I described in the integration tax essay — the one who had built parallel spreadsheets because the CMS couldn't track what she needed. A significant portion of her time went to data extraction, re-keying, and synchronization. If AI handles that work — and the infrastructure exists for it to do so seamlessly — she doesn't become redundant. She becomes proportionately more available for the work that case managers should actually be doing: evaluating case progress, coordinating with attorneys, communicating with clients, identifying problems before they compound. The lift from deploying the technology well translates directly into more output and better output.

Scale that across the firm. A paralegal who can review three times the document volume isn't replaceable — they're three times as effective. An associate starting with a contextually informed draft rather than a blank page, an attorney walking into every client meeting fully briefed with strategic options already assembled — these professionals aren't less necessary. They're materially better at the work clients are actually paying for.

The leverage argument extends directly to firm economics. For contingency firms, the math is immediate: expanded capacity per person means more matters handled effectively with the same team, which directly improves margins on every case. The integration tax essay described how operational friction erodes the firm's share of recovery on every matter. AI-driven leverage is the inverse — it expands the firm's capacity without proportionally expanding its cost base.

For hourly firms, the calculation is different but equally consequential. Clients are increasingly unwilling to pay for work that AI could have accelerated. The firm's value proposition is shifting toward the judgment, strategy, and accountability that justify premium rates — precisely the human contributions that AI makes more concentrated. Firms that can demonstrate they're using AI to deliver better work faster, while reserving attorney time for genuinely high-value activity, will have a pricing and positioning advantage over firms that are still billing for tasks AI could handle.

But here's the critical caveat, and it connects everything in this series: this leverage only materializes when the infrastructure supports it. An AI tool that can extract data from a document but can't write it to the CMS creates a slightly faster version of the same manual bottleneck. A drafting tool that produces an excellent first draft but requires the attorney to download, edit, and re-upload it captures a fraction of its potential value. The tool isn't the leverage. The system is.

The conversation worth having

The false dichotomy — AI replaces everyone, or AI is overhyped — is comfortable because it's simple. It lets firm leaders either panic or dismiss, and both responses have the advantage of not requiring any hard operational thinking.

The real conversation is harder but more productive. It starts with understanding specifically where AI creates leverage and where the human role concentrates. It continues with building the infrastructure that maximizes the return on both. And it requires an ongoing discipline — revisiting the boundary as capabilities evolve, adjusting workflows, and resisting the temptation to either over-trust or under-trust the technology.

That boundary is moving — on both sides, but not at the same rate or in the same way. AI is getting dramatically better at processing, synthesizing, extracting, generating, and executing. It is also getting incrementally better at reasoning, analysis, and even identifying strategic considerations that were recently considered safely human territory. What is not moving is the accountability layer — who signs the pleading, who bears the professional consequences, who stands behind the work — and the relationship layer — who the client trusts, confides in, and relies on when the stakes are personal. The skills that define leverage are expanding rapidly. The skills that define responsibility and trust are holding steady. Firms that invest accordingly will build in the right direction.

The firms that engage in this conversation will find that AI doesn't threaten their people. It transforms what their people are capable of — more matters handled, better preparation, higher-quality work, more time spent on the things that actually require a human mind and a human relationship.

That transformation, like everything else in this space, starts with infrastructure. Not with the tool. Not with the demo. With the foundation underneath.

Jim Andresen is a legal operations executive, technologist, and CEO of LawWorks. He has spent over a decade running operations at personal injury law firms and writes about the infrastructure, AI, and structural shifts redefining how legal expertise is delivered. His essays draw from the perspective of someone building the solution and operating within the problem it solves.

