“Rather than being framed as bottlenecks, handling compliance and risk hedging, [CLOs and GCs] can now make empirically driven decisions and drive strategy.” – Dr. Megan Ma, Executive Founding Director of liftlab at Stanford Law School
Watch the full interview with Dr. Megan Ma
In our latest fireside chat, Josef Co-founder and COO Sam Flynn sits down with Megan to unpack how legal teams can move from passive tech adopters to active designers of their future workflows.
Drawing on her background working with the Canadian government on self-driving car regulation, Megan offers a compelling analogy for how we should think about AI in legal practice. Her thesis? The art lies in how and when we invite humans into the loop.
Watch the conversation in full, listen to it on Spotify, or read our recap below.
“When they realized a perfect self-driving car was unachievable,” Megan explains, “they started looking at how to build the best alarm system, how to build the best escalation pathway.”
It’s an insight that translates surprisingly well to legal AI. As Megan puts it, “You’ll never eliminate all risk, so it’s about signalling when humans should take over.”
Too often in legal automation, we leap from task to outcome, skipping over the messy middle. But every contract, for instance, involves “15 sub-steps,” Megan says. “And no one talks about those sub-steps.” When we skip that nuance, the tools may underperform, not because the technology fails, but because no one has mapped out where human judgment is most valuable.
Large language models hallucinate. It’s not a bug, it’s a baseline and “fundamental to how the technology works.” Once you accept that, the question becomes: how do humans interact with imperfect outputs?
Rather than chasing perfection, Megan encourages vendors to shift their focus toward usefulness. That is, designing systems that help users spot when something feels “off,” guide them to intervene, and build trust through transparency and control.
It means designing not just for outputs, but for confidence, clarity, and collaboration. Because in the world of legal AI, how users engage matters more than whether the model is technically right.
Legal training has always relied on repetition and mentorship: two things that generative AI is poised to transform. Megan’s solution? Legal personas.
“We were building an M&A negotiation simulator,” she explains. The goal: give junior lawyers a guided, interactive way to experience deal dynamics (e.g. client intake, opposing counsel negotiations, even tax and employment consultation) without the risk of real-world error.
In another tool her team developed, users could upload a document and select an AI-generated Partner. Megan says, “your AI basically overlays your red lines and explanations of those red lines,” giving junior lawyers the ability to learn not just what was changed, but why.
“Rather than just clicking accept, accept, accept, accept, you have the time to go over and view in your eyes why those red lines existed.”
The result won’t just mean faster up-skilling, but smarter collaboration. “Young associates can start thinking critically from your perspective and your experience.”
Despite the current wave of innovation, real-world adoption in certain pockets remains stubbornly slow. For those developing legal AI tools, Megan says, “You’re kind of like a train going at full speed against stop signs constantly.”
Legal teams are overwhelmed with demos, vendor fatigue, and legacy system inertia. But change is happening, often catalyzed by good design.
Megan recounts one project where a Microsoft Word plugin received lukewarm feedback, only to become a hit once moved to a clean, browser-based interface. “We didn’t change a single thing on the backend,” she says. “And the result was, ‘Well, this is actually fundamentally what we need.’”
Perhaps the most compelling shift Megan identifies is in-house legal’s evolving identity. “You’re actually now reframed as the person that’s driving strategy,” she says. And AI is helping make that possible.
With the right tools, she says, GCs and CLOs can guide the business through simulations and help them make “empirically driven or data driven decisions.” She adds, “This wasn’t the case before.”
This won’t just help legal teams work faster; it will help them earn a new, indispensable seat at the table.
Elaine, SVP and Global Head of Digital Legal and Innovation at DHL, knows what it takes to really innovate: structured change, business alignment, and a deep understanding of the people you want to bring along for the journey.
In the days leading up to DHL’s first-ever Legal Innovation Summit, Sam spoke with Elaine about her career, DHL’s innovation playbook, her approach to legal AI, and how she believes legal teams can lead from the front.
Book a demo to see first-hand how easy it is for your whole team to get up and running with legal automation.