As a technology and cybersecurity professional by training and experience, I’m often asked my opinion on just about everything that falls between those two goalposts. Being a technologist at heart and a veteran of the industry, I’m always more than happy to share my thoughts.
Recently, with all the buzz around artificial intelligence, my role in guiding RB Cyber Assurance has spawned questions about how AI may (or may not) impact the insurance industry, and perhaps even more specifically, cyber insurance.
Artificial intelligence is already transforming the cyber insurance industry, and its impact will only grow in the coming years. AI offers speed, economies of scale, and better risk awareness across the board, from application intake to underwriting, claims processing, and fraud detection. However, in a field as hostile, dynamic, and human-driven as cybercrime, the importance of people cannot be overlooked. AI should support the judgment of humans in cyber insurance, not take its place. In a past position, some years ago, I was part of the team that developed an expert system that performed life insurance underwriting. The system could approve and issue basic policies but would never decline one. It had thresholds built into its judgment process and would always refer any uncertainty to a human underwriter.
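The "approve or refer, never decline" rule described above can be sketched in a few lines. This is a hypothetical illustration only: the field names and threshold values (`MAX_AUTO_COVERAGE`, `MAX_AUTO_AGE`, `health_flags`) are my assumptions for the example, not the original system's rules.

```python
# Sketch of a threshold-based underwriting rule: the system may approve
# simple cases automatically, but any uncertainty is referred to a human.
# It never declines on its own -- that call is reserved for the underwriter.
from dataclasses import dataclass


@dataclass
class Application:
    age: int
    coverage_amount: float
    health_flags: int  # count of flagged answers on the questionnaire


# Illustrative thresholds (assumptions, not actual underwriting values)
MAX_AUTO_COVERAGE = 250_000
MAX_AUTO_AGE = 55


def decide(app: Application) -> str:
    """Return 'approve' for clearly simple cases, otherwise 'refer'."""
    if (app.health_flags == 0
            and app.age <= MAX_AUTO_AGE
            and app.coverage_amount <= MAX_AUTO_COVERAGE):
        return "approve"
    return "refer"  # never "decline" -- a human makes that judgment


print(decide(Application(age=40, coverage_amount=100_000, health_flags=0)))  # approve
print(decide(Application(age=62, coverage_amount=100_000, health_flags=0)))  # refer
```

The point of the design is that automation only ever shrinks the human's queue; it never expands the set of adverse decisions made without human review.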
AI has the potential to significantly enhance risk triage and data standardization at the application stage. Natural language processing can analyze complex questionnaires, external scan findings, and historical loss data to identify clear risk indicators. Given the ambiguity and inconsistency in how businesses characterize their environments, this is especially helpful. Experienced professionals know, though, that cyber risk does not fit neatly into categories. Because of factors like supply chain dependencies, leadership maturity, culture, or previous security choices, two firms may give identical responses yet face drastically different threat exposures. Human review is still crucial to assess context, question presumptions, and find holes that automation alone will miss. That said, the potential for built-in biases inherent in all models must also be taken into consideration.
The potential and danger of AI are most noticeable in underwriting. Here, at the crossroads of risk assessment, probability, impact, and actuarial matrices, lies what is arguably the heart of the entire process. Machine learning models can analyze massive datasets, correlate breach trends, and regularly update risk scores in response to changing threat intelligence. This is especially effective when dealing with ransomware gangs, business email compromise (BEC) patterns, and new attack vectors like identity-based attacks or supply chain intrusions. However, cybercrime is inherently adaptive. I’ve been quoted as saying, “Cybercrime is like water; it will flow through the path of least resistance.” Threat actors purposefully alter their strategies to avoid detection, tamper with controls, and take advantage of human nature. We have been, and continue to be, the weakest link in the cybersecurity chain. When models presume that tomorrow’s attacks will resemble yesterday’s losses, over-reliance on historical data can create blind spots. An AI underwriter may suffer from “digital myopia.” In my opinion, no model can replace the critical intuition and skepticism that experienced underwriters, especially those with investigation or incident response experience, contribute. It is that very experience that drives the intuition and skepticism.
Claims processing adds still another level of complication. AI can assist with tasks such as triage, log analysis, unusual activity detection, and even loss range estimation. It can help identify fraudulent patterns or inflated claims. However, cyber incidents are rarely straightforward. Today, most of the benefit of an AI agent in a post-breach scenario would be in incident remediation. Intent, timing, containment measures, and communications are all important in ransomware and extortion cases. Subtle clues like tone, transaction context, and internal workflow problems are frequently crucial in business email compromise cases. Based on my own practical experience as a cybercrime investigator, these subtleties demand human analysis of how cybercriminals behave, fabricate stories, and adjust to changing circumstances.
The complete automation of cyber insurance decision-making carries a strategic risk as well. Defensive systems, such as underwriting standards and claims procedures, are actively researched by adversaries. These systems are susceptible to manipulation if they become solely computational or predictable. In an adversarial setting, human beings contribute ambiguity, discretion, and judgment—all crucial defensive qualities that are by no means static or “machine-like.” The key would be making the AI “act like this.”
So, in my opinion, the future of cyber insurance is not AI vs. humans. It is AI and humans together. Insurance companies that combine cutting-edge analytics with seasoned experts who understand technology, criminal activity, and business impact will be the most resilient and will offer the best products and services to the market. It is not inefficient to retain human involvement in claims processing, underwriting decisions, and application screening. It’s called risk management. We don’t need humans in the loop; we need humans to continue to “be the loop.”
AI is here to stay, and we’ve yet to see its true potential. Cyber insurance, though, must remain human-driven, intelligent, and flexible in a threat landscape dominated by highly motivated, sophisticated, and adaptable criminals.
