Whether insurers cover the costs of a cyberattack can depend, in part, on being able to make clear-cut distinctions in this blurry space: When Russian government hackers targeted Ukraine’s electric grid earlier this year, was that an act of war because the two countries were already at war? What about when Russia hacked Ukraine’s electric grid in 2015, or when pro-Russian hackers targeted servers in countries like the United States, Germany, Lithuania, and Norway because of their support for Ukraine? Figuring out which of these types of intrusions are “warlike” is not an academic matter for victims and their insurers — it is sometimes at the heart of who ends up paying for them. And the more that countries like Russia exercise their offensive cyber capabilities, the harder and more critical it becomes to make those distinctions and sort out who is on the hook to cover the costs.
When insurers first began offering policies that covered costs related to computer security breaches more than 20 years ago, the promise was that the industry would do for cybersecurity what it had done for other types of risks like car accidents, fires, or robbery. In other words, cyberinsurance was supposed to insulate policyholders from some of the most burdensome short-term costs associated with these events while simultaneously requiring those same policyholders to adopt best practices (seat belts, smoke detectors, security cameras) for reducing the likelihood of these risks in the first place. But the industry has fallen well short of that goal, in many cases failing both to help breached companies cover the costs of major cyberattacks like NotPetya, and to help companies reduce their exposure to cyber risk.
Read more of this story at Slashdot.