Not all AI use carries the same risk. That sounds obvious, but most firms treat it as binary: either AI is allowed or it is not. The reality is more graduated, and getting the classification right is what makes governance workable rather than restrictive.
A blanket restriction on AI is easy to write and hard to enforce. People will use tools anyway when the tools are genuinely useful, and the restriction will erode quietly. A classification system that matches oversight requirements to actual stakes is harder to design but much more durable in practice.
The variables that drive risk
Risk in AI use cases comes down to three things: what data is involved, how the output is used, and how reversible the decision is.
An internal draft using anonymized project notes is very different from a summary that goes directly to a client. A brainstormed list of marketing angles is very different from a proposed contract clause. The difference is not the tool. It is the context: the sensitivity of the input, the stakes of the output, and how easily a human can correct course if something goes wrong.
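One way to make the three variables concrete is to treat them as inputs to a simple classification rule. The sketch below is a hypothetical illustration, not a prescribed policy: the tier names, the enum levels, and the "riskiest variable dominates" rule are all assumptions a firm would tune to its own context.

```python
# A minimal sketch of the three-variable model. The levels and the
# max-rule below are illustrative assumptions, not a canonical policy.
from enum import Enum

class DataSensitivity(Enum):
    NON_SENSITIVE = 1   # anonymized or public material
    INTERNAL = 2        # internal business data
    REGULATED = 3       # client-confidential or regulated data

class OutputUse(Enum):
    INTERNAL_DRAFT = 1  # stays inside the firm
    CLIENT_FACING = 2   # shapes or reaches client communications
    BINDING = 3         # contracts, filings, final deliverables

class Reversibility(Enum):
    EASY = 1            # a human can catch and fix errors cheaply
    COSTLY = 2          # correction is possible but expensive
    HARD = 3            # errors are effectively irreversible

def risk_tier(data: DataSensitivity, use: OutputUse, rev: Reversibility) -> str:
    """Classify a use case as low / moderate / high.

    Illustrative rule: the riskiest of the three variables dominates,
    since one high-stakes dimension is enough to warrant stronger review.
    """
    score = max(data.value, use.value, rev.value)
    return {1: "low", 2: "moderate", 3: "high"}[score]
```

The max-rule is a deliberate design choice in this sketch: averaging the three variables would let a highly sensitive input be diluted by a low-stakes output, which is exactly the kind of misclassification a governance system should avoid.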
Lower-risk use cases
These involve non-sensitive data, internal-only outputs, and meaningful human review before anything consequential happens. Examples:
- Drafting internal knowledge base articles or SOPs from existing notes
- Brainstorming and ideation where the human filters and selects
- Summarizing internal meeting notes or project updates
- Editing and formatting help on non-sensitive content
- Research compilation on topics with no confidential source material
Most firms can operate in this category without heavy governance overhead. A basic review norm and light documentation are sufficient.
Moderate-risk use cases
These involve internal business data or outputs that could influence client-facing decisions or external communications. Examples:
- Drafting client communications from internal notes or context
- Proposal and pitch language developed with AI assistance
- Research summaries that shape recommendations to clients
- Data analysis outputs referenced in reports or presentations
This category needs a defined review standard. Not sign-off on every word, but a clear expectation about what gets checked before use. The common failure mode is assuming review is happening without ever specifying what that review looks like.
Higher-risk use cases
These involve regulated data, confidential client information, or outputs where errors carry significant consequences. Examples:
- Legal document drafting, contract language, or compliance filings
- Financial projections or analysis used in client deliverables
- Any output where the firm's professional judgment is on the line
- Materials involving data subject to legal, regulatory, or confidentiality obligations
High-risk use cases require explicit review, defined escalation paths, and often a clear rule that AI support stays in the drafting and assistance phase rather than the final output phase. The human professional is still responsible for everything that leaves the firm.
Using the classification in practice
Once you have categorized your firm's actual use cases, governance becomes a matter of matching review standards to risk levels. Low-risk uses get light oversight. High-risk uses get defined checkpoints. The same principle applies as AI tools and use cases evolve.
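Matching review standards to tiers can be expressed as a simple lookup. The fragment below extends the earlier sketch (reusing `risk_tier` and the enums defined there, with the same caveat that the requirement text is illustrative; a real policy would name actual reviewers and checkpoints):

```python
# Continuing the earlier sketch: map each tier to a review standard.
# The wording mirrors the categories in this section; it is a placeholder,
# not a finished policy.
REVIEW_STANDARDS = {
    "low":      "Basic review norm; light documentation.",
    "moderate": "Defined review standard: specify what gets checked before use.",
    "high":     "Explicit review and escalation path; AI stays in drafting phase.",
}

def required_review(data: DataSensitivity, use: OutputUse, rev: Reversibility) -> str:
    return REVIEW_STANDARDS[risk_tier(data, use, rev)]

# Example: a proposal drafted from internal notes that will reach a client.
print(required_review(DataSensitivity.INTERNAL,
                      OutputUse.CLIENT_FACING,
                      Reversibility.EASY))
# -> "Defined review standard: specify what gets checked before use."
```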
The classification does not have to be exhaustive from the start. Start with the use cases that are already active or most likely to appear. Add new categories as the need arises. A living classification system is more useful than a comprehensive one that is out of date.
The goal is that every team member can answer one question: given this task and this data, what level of review is required before I act on or share the output?