The Future of ERM Series: #7 — Using AI in ERM


The Future of ERM: 12 Hidden, or Not So Hidden, Threats


AI was meant to enhance judgment, not replace it. Yet we now face an uncomfortable question: what happens when the tool designed to strengthen human reasoning begins to erode it?

Enterprise Risk Management, like most disciplines, is not immune to the allure of automation. Many programs already operate with precision yet produce little value. In such cases, AI risks amplifying what is already hollow. In the wrong hands, it becomes another layer of efficiency without insight, another polished system with little substance behind it. It is like buying a powerful drill and choosing to turn the screw by hand. The tool has power, but it is the user who decides whether to use it.

But in capable hands, AI may transform the way ERM operates. It can accelerate analysis, identify patterns that humans might overlook, and place validated information in front of executives while it still matters. The issue is not the technology itself but how we use it. If AI becomes a replacement for thinking, it will only make ERM faster at being irrelevant.

The danger lies in mistaking data-driven speed for understanding. The risk grows when the data that drives that speed is biased, manipulated, or selectively presented to reinforce a preferred narrative. In those cases, AI does not enlighten decision-making; it amplifies what was already wrong. The paradox is clear. AI can elevate ERM’s capabilities, yet quietly erode its essence if left unchecked.


What AI Strengthens

AI’s greatest strength lies in its scale. It can absorb more data, more reports, and more variables than any human or team could ever hope to review. Used thoughtfully, it can supercharge scenario planning, horizon scanning, and the detection of early signals that might otherwise go unnoticed.

It can also help surface what is often missed in traditional ERM cycles: the indirect risks, the faint correlations, and the weak signals that rarely appear in management reports until it is too late. The question is not whether AI can find them but whether we know how to interpret what it finds.

The way AI is used matters far more than its inherent capability. Asking AI for evidence-based information, diverse perspectives, and verifiable sources can sharpen foresight rather than distort it. Prompting it to explore arguments and counterarguments helps expose bias rather than embed it. If prompted poorly, it will simply confirm what we already believe.

When managed well, AI can make ERM faster and more relevant. Monitoring that once ran on quarterly cycles can now deliver insight in days, or in near real time. Data validation, once an exhaustive process, can now be automated. Executives can have access to risk insights while decisions are still being made, not after the fact.

Yet this progress should not redefine ERM’s purpose. The role of risk management has never been to gather information but to interpret it. AI can expand the field of vision and improve the pace of analysis, but the meaning of what it sees still depends on human judgment.


What AI Impairs

The benefits of AI come with a shadow. As organizations adopt more machine-led analysis, they risk dulling the very skill ERM depends on: discernment.

When risk professionals defer judgment to algorithms, accountability begins to fade. If the model fails, the blame shifts to the machine. Consistency replaces curiosity, and standardization replaces interpretation. AI might suppress human bias, but it can also hard-code systemic bias. It can streamline processes but erode creativity.

The greatest risks AI fails to see are those rooted in human nature. It does not understand ambition, pride, or fear. It cannot sense when personal interest overtakes collective good, or when reputation quietly starts to unravel. It cannot interpret ethical or emotional context. It can calculate the probability of fraud but not the temptation that causes it.

These are not secondary details; they are the essence of why ERM exists. Every major failure in business history has included human elements such as ego, greed, pressure, or moral compromise. AI may recognize the pattern but not the motive. It cannot perceive intent or nuance. It is a little like a zombie film. We create the threat, and yet we are the only ones who can stop it. Risk is rarely born without us.

And then there is the issue of transparency. AI systems often operate as “black boxes.” They provide an answer, sometimes a compelling one, but cannot always explain how it was reached. When this occurs, the appearance of intelligence masks the absence of understanding.

ERM depends on trust. It depends on the ability to trace logic, understand assumptions, and test conclusions. If the reasoning behind an AI output cannot be explained, the insight cannot be trusted. The danger is not that AI lies, but that it can sound completely truthful even when it is wrong.

Risk management’s strength lies not in calculation but in interpretation. Without that human layer, ERM becomes just another algorithm, efficient and empty.


The Board’s Dilemma

Sooner or later, boards will ask a question that will make many risk leaders uneasy: if AI can do this faster, why fund ERM at its current level, or at all?

Maybe it is a fair question, but maybe not. Either way, it confuses speed with substance. Faster does not always mean better, and data without discernment is not assurance.

AI can produce elegant dashboards filled with predictive curves and probability metrics. It can automate the monitoring of financial trends, supplier risks, and macro signals. It can even summarize exposure maps that once took months to compile. But none of this guarantees understanding, or accuracy.

The ERM leader must remind decision-makers that a dashboard, however sophisticated, is not a decision. It is a signpost. The path to the next signpost still has to be walked, and walked deliberately.

The value of ERM lies in its ability to interpret what the data implies, not just what it says. It lies in discernment: the capacity to put data into context, understand what motivates people (even when it is illogical), and communicate the implications in ways that drive action.

ERM, at its best, functions as a translator between risk information and human behavior. It listens, probes, and connects dots. In that sense, it is less about control and more about understanding. AI can support that process, but it cannot replicate it.

Like a skilled interpreter between two languages, ERM ensures that nothing meaningful is lost in translation. If we let AI speak alone, we risk hearing everything and understanding nothing.


Redefining ERM in the Age of AI

AI does not diminish ERM. It redefines what ERM must become.

The next generation of risk professionals will need to master data literacy, algorithmic ethics, and the ability to question how AI reaches its conclusions. They will need to know when to trust the data and when to dig deeper, which will likely be often. They will need to see AI as a partner in analysis, not as the analyst itself.

Used properly, AI should free human capacity. It can reduce manual tasks and data wrangling so that professionals can focus on sensemaking, synthesis, and decision quality. But it will also demand a new discipline: validating what is generated, testing for bias, and applying ethical filters to ensure what is efficient is also right.

The organizations that thrive will not be those that replace humans with machines, but those that combine their strengths. AI will be the engine of speed and scale. Humans will remain the source of conscience and context. Together, they can produce insight that is both fast and thoughtful, both scalable and ethical.

ERM should also take a leadership role in governing how AI is used across the enterprise. The same principles that underpin good risk management (transparency, accountability, and fairness) should guide how AI is designed and applied. ERM is uniquely positioned to ensure that efficiency never comes at the expense of integrity.


The Irreplaceable Human

In the end, AI can simulate thinking but not caring.

Judgment is not only a cognitive act. It is emotional, ethical, and experiential. It is shaped by learning, by empathy, and by the lived experience of being wrong and trying again.

AI might one day recognize the patterns that precede a failure, but it will never feel what it means to fail. It will not understand the weight of a decision or the cost of a mistake. It cannot distinguish between positive stress that builds resilience and negative stress that breaks it. AI does not know what a long run feels like, and it cannot comprehend taking a cold plunge. It has never been awestruck looking at the night sky or a sunrise. It can only reference what others have written about those experiences.

ERM must remain the conscience that interprets what AI cannot. It must question not only what the data shows but what it omits. It must ensure that decisions are not only informed but also humane.

The future of ERM will not be defined by whether it uses AI, but by how it uses it. If AI is the accelerator, humans must remain the steering wheel.

AI may strengthen the system, but only humans define its meaning.


Let’s discuss how to keep your risk program moving forward without missing a beat. Click here to schedule a Discovery Session or use the Discovery Session button on my website.