Steve Hasker is the chief executive officer of Thomson Reuters.
When you read business headlines these days, you might think artificial intelligence is converging – that everyone is racing toward the same destination, powered by the same models, differentiated only by speed.
That assumption is wrong. The AI market is separating – not by speed, but by accountability.
As AI moves from productivity tools to industry-specific assistants, there’s an important distinction between AI for general knowledge work and AI for fiduciary work, where professionals are accountable for high-stakes outcomes. In professions such as law, tax and auditing, accuracy and trust are non-negotiable. When AI is entrusted with decisions carrying legal, financial and regulatory consequences, it simply can’t be wrong.
Yes, speed matters. But not at the expense of verifiability, explainability or defensibility. The winners in this new era won’t be those who simply ship the fastest models, but those whose systems stand up to scrutiny in a courtroom, a boardroom, an audit, or a regulatory review. That’s the real test of accountability and transparency.
New entrants from the AI industry are validating what we have long believed: professional knowledge work is one of the most valuable frontiers for AI. We welcome that.
Foundational models are extraordinary engineering achievements. They have accelerated experimentation and lowered barriers to AI adoption across industries. But they are not designed to meet the standards that fiduciary professionals are held to every day. They don’t understand tax or law. They don’t understand regulatory accountability. And they cannot stand behind their answers.
Ask a general-purpose AI product to draft a motion to dismiss in a Florida breach-of-contract case. You’ll get something that looks like a motion to dismiss: it will have the right structure and confident language, and it may even cite cases. But some of those cases may not exist. Others may have been overturned. And the filing requirements may be wrong for that jurisdiction.
Now ask a fiduciary system built for professional standards the same question. It pulls binding Florida authority, verifies that each citation is still good law, applies the correct procedural requirements, and produces a document that a lawyer can actually file.
That is why fiduciary-grade AI – grounded in authoritative content, trained by legal and tax experts, and embedded directly into real workflows – is the only viable path to durable value. For example, when a lawyer annotates contracts, flagging risk and indicating how issues should be identified and explained, that expertise is embedded into the workflow itself. The system can surface, prioritize and explain issues consistently at scale. This is the difference between AI built for convenience and systems built for accountability.
To date, AI success has been measured in hours saved. But real ROI comes from vertical AI that enables new capabilities, stronger client outcomes, higher confidence and better use of scarce expertise. Its value is durable only when professionals can trust the system end to end, including knowing their confidential information remains theirs and is never repurposed to train someone else’s model.
These are the four dimensions that determine whether AI truly works for professionals:
- “Good enough” is a liability. Professionals need defensible answers. Generic models hallucinate, scrape from the open internet, lack a source of truth and don’t stand behind outcomes; professional-grade systems do.
- ROI comes from end-to-end execution, not isolated tasks. Real value is created when an AI agent can run multistage workflows – from research to analysis to drafting to review, and on to human validation – delivering work that reduces risk and increases margin, not just speed.
- Trust is engineered. Trustworthy systems are built on what others can’t replicate: authoritative content, human expertise and real workflows. This combination is the difference between tools that assist and systems that truly stand up when the stakes are highest.
- Data ownership is non-negotiable. Professionals will rely on AI in high-stakes work only if they know their confidential information is safeguarded. That’s why systems built for professional standards enforce clear boundaries by design – ensuring that customer data remains the customer’s and that a professional’s IP is never repurposed to train a third-party agent or a large language model (LLM).
In an era defined by intelligent systems, trust is the true differentiator.
That’s why organizations like the Trust in AI Alliance are necessary. The alliance brings together leading voices from companies including Anthropic, AWS, Google Cloud, OpenAI and Thomson Reuters to establish shared principles. Our mission is simple: advance trustworthy, agentic AI systems and define principles that ensure AI serves people and institutions responsibly.
The real question is no longer, “How quickly did we deploy AI?” It is, “What becomes possible – safely, responsibly, and at scale – that wasn’t before?”
The market for AI models is not converging. It is separating. The future belongs to those who build systems that stand up to scrutiny, earn confidence and deliver real outcomes in the moments that matter most.