
AI Insurance Is Coming, Here’s Why


Artificial intelligence is a powerful tool, but it can pose serious risks if used incorrectly. Some of these risks are unknown or uncontrollable, and so you need other ways to deal with them.

Artificial intelligence (AI) is not only a powerful tool, it can also be a highly risky technology if used incorrectly. No matter how much care you take, there might still be some lurking risks that are either unknown or uncontrollable, and that means you will need a different approach to risk management.

Perform a thorough pre-mortem analysis, and ensure that training and testing are complete. Stress-testing all the systems with the help of red teams will make you more confident about their performance. However, there might still be some risks that are unidentified, or identified but impossible to anticipate or control. Such residual risks can be dealt with by way of transference.


Transference or risk transfer is a risk management and control strategy that involves the contractual shifting of a pure risk from one party to another. One example is the purchase of an insurance policy, by which a policyholder passes the specified risk of loss to the insurer.

Currently available options and their limitations

For an IT solution, which is the closest equivalent to an AI solution, a few options are already available in the market for transferring some of the risks.

Information Technology Liability (IT Liability) insurance covers claims arising from the failure of information technology products, services, and/or advice.

The information technology (IT) industry has unique liability exposures due to the crossover between the provision of professional services and the supply of goods. Many service providers in this industry offer a mix of both. Matters are further complicated by the legal ambiguity around software advice and development, and whether these constitute the provision of a service or the sale of goods.

Traditional Professional Indemnity insurance policies often have onerous exclusions relating to the supply of goods. In contrast, traditional Public and Products Liability policies often contain exclusions relating to the provision of professional services.

Many insurers have developed a range of insurance options to address these issues, which they commonly refer to as IT liability policies. These policies combine Professional Indemnity and Public and Products Liability insurances into one product. They were developed to minimise the prospect of an uninsured claim ‘falling through the gap’ between the two traditional insurance products.

AI solutions, however, add further complexity: their behaviour is driven by complex algorithms as well as by the data they ingest or process. Over time, changes in that data can significantly change the product’s or solution’s characteristics; add cloud-based services to the mix, and the gaps in the existing options only widen, making the prospect of a claim falling through them significantly higher.

Hence, the need for AI insurance

Before we even think about AI insurance, let’s see if there is a need for it. This need becomes evident only when there are multiple issues of significant complexity. There has been a steady stream of warnings over the last half century to slow down and ensure we keep machines on a tight leash.

Many thought leaders have asked critical questions such as who accepts the responsibility when AI goes wrong, and what are the implications for the insurance industry when that happens.

Autonomous or driverless cars are perhaps the most important consideration for the insurance industry. In June 2016, British insurance company Adrian Flux started offering the first policy specifically geared towards autonomous and partly automated vehicles. This policy covers typical car insurance options, such as damage, fire, and theft.

Additionally, it also covers accidents specific to AI—loss or damage as a result of malfunctions in the car’s driverless systems, interference from hackers, failure to install vehicle software updates and security patches, satellite failure, or failure of the manufacturer’s vehicle operating system or other authorised software.

Volvo has said that when one of its vehicles is in autonomous mode, the company is responsible for what happens.

I think this is an important step. However, it still fails to answer the question of who is liable for accidents. Who is at fault if the car malfunctions and runs over someone?

When autonomous machinery goes wrong in a factory and disrupts production, who is responsible? Is it the human operator, who has only tenuous control, or the management, for buying the wrong system? Maybe it should be the manufacturer, for not testing the autonomous machinery thoroughly enough.

We need to establish specific protections for potential victims of AI-related incidents, whether these are businesses or individuals, to give them confidence that they will have legal recourse if something goes wrong.

The most critical question from a customer’s standpoint would be, who foots the bill when a robot or an intelligent AI system makes a mistake, causes an accident or damage, or becomes corrupted? The manufacturer, developer, the person controlling it, or the system itself? Or is it a matter of allocating and apportioning risk and liability?

Drew Turney, a journalist, argues in one of his articles, “We don’t put the parents of murderers or embezzlers in jail. We assume everyone is responsible for his or her decisions based on the experience, memory, self-awareness, and free will accumulated throughout their lives.”

There are many examples of complex situations that create a need for AI insurance.

AI loses an investor’s fortune

Austria-based AI company 42.cx had developed a supercomputer named K1. It would comb through online sources such as real-time news and social media to gauge investor sentiment and make predictions on US stock futures. Based on the data it gathered and its analysis, it would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.
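To picture the kind of pipeline described above, here is a minimal, purely hypothetical sketch in Python; the class, function, threshold values, and symbol are my own illustrative assumptions and are not details of 42.cx’s K1 system.

```python
# Purely hypothetical illustration of a sentiment-driven trading step.
# Names, thresholds, and the symbol are placeholders, not details of K1.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    symbol: str
    sentiment: float  # aggregated sentiment score in [-1.0, 1.0]

def decide_order(signal: Signal,
                 buy_threshold: float = 0.3,
                 sell_threshold: float = -0.3) -> Optional[str]:
    """Map an aggregated sentiment score to a broker instruction."""
    if signal.sentiment >= buy_threshold:
        return "BUY " + signal.symbol
    if signal.sentiment <= sell_threshold:
        return "SELL " + signal.symbol
    return None  # no trade when the sentiment is inconclusive

# Example: a strongly positive reading triggers a buy instruction.
print(decide_order(Signal(symbol="US-FUTURES", sentiment=0.42)))  # BUY US-FUTURES
```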

In May 2019, Hong Kong real estate tycoon Samathur Li Kin-kan decided to sue the company that had used trade-executing AI to manage his account, causing him to lose millions of dollars. It opened a first-of-its-kind court case that could help determine who should be held responsible when an AI stuffs up.

While it is the first known instance of humans going to court over investment losses triggered by autonomous machines, it also highlights the black-box problem of AI vividly. If people don’t know how the AI is making decisions, who’s responsible when things go wrong?

The legal battle is a sign of what’s coming as AI gets incorporated into all facets of life, from self-driving cars to virtual assistants.

Karishma Paroha, a London-based lawyer who specialises in product liability, has an interesting view. She says, “What happens when autonomous chatbots are used by companies to sell products to customers? Even suing the salesperson may not be possible. Misrepresentation is about what a person said to you. What happens when we’re not being sold to by a human?”

Risky digital assistant for patients

In mid-2019, a large consulting firm started deploying voice-controlled digital assistants for hospital patients. The idea was that a digital device attached to the TV screen in the patient’s room would let the patient request assistance.

The firm wanted to replace the age-old call button service due to its inherent limitations. One of the significant limitations cited was that the call request alone did not give nurses enough context to prioritise patients. With the old call button system, two patients could request help at the same time; while one of them might require urgent help, the other could wait. However, with nothing more than an indication that help had been requested, nurses couldn’t determine which patient needed immediate attention. With voice-based digital assistants, the firm and the hospitals anticipated that the extra context in the command text would let them prioritise nurse visits.

The system was deployed with prioritisation based on the words uttered by the patients. For example, if a patient asked for drinking water, the request was flagged as low priority, whereas if someone complained about pain in the chest, they were flagged with the highest priority. Various other utterances were prioritised accordingly. The idea behind these assigned priorities was that a patient needing drinking water can wait a few minutes, whereas a patient with chest pain may be at risk of a heart attack and needs immediate assistance. Generally speaking, this logic would work in most scenarios.
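A minimal sketch of how such keyword-based triage might look is given below; the keyword lists and priority labels are illustrative assumptions, not the firm’s actual rules.

```python
# Illustrative sketch of keyword-based request triage as described above.
# The keyword sets and priority labels are assumptions for illustration.

HIGH_PRIORITY_KEYWORDS = {"pain", "breathe", "bleeding", "dizzy"}
LOW_PRIORITY_KEYWORDS = {"water", "blanket", "remote"}

def triage(request_text: str) -> str:
    """Assign a priority purely from the words in the request."""
    text = request_text.lower()
    if any(keyword in text for keyword in HIGH_PRIORITY_KEYWORDS):
        return "HIGH"
    if any(keyword in text for keyword in LOW_PRIORITY_KEYWORDS):
        return "LOW"
    return "MEDIUM"  # unrecognised requests go to a nurse for review

print(triage("I have pain in my chest"))  # HIGH
print(triage("Can I get some water?"))    # LOW
```

Note that the priority here comes entirely from the words spoken, which is precisely the limitation discussed next.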

However, this system wasn’t linked with the patient information database. Such that, besides getting the room number of the requester, the system did not know anything else. Most importantly, it did not understand what the patient’s condition was and why they were in the hospital.

Not knowing the patient’s condition or ailment may prove to be a severe impediment in some cases. These cases may lie in the long tail of common scenarios, and I think that is what makes them more dangerous.

For example, if a patient requested water, the system would treat it as a low priority request and give other requests relatively higher priority. But what if the patient requested water because they were not feeling well? Not getting water in time could worsen their condition. Or perhaps the patient was choking or coughing and therefore asked for water; continuous coughing can be an indicator of imminent danger. Without knowing the patient’s ailment and condition, it is very hard to determine whether a simple request for water is high or low priority, and these are the scenarios where I see significant risks. One might call this system well-intentioned but poorly implemented.

Now the question is—in a scenario where a high priority request gets flagged as a low priority on account of the system’s limited information access, who is responsible? If the patient’s condition worsens due to this lapse, who would they hold accountable?

Who (or what) can be held accountable for such failures of service or performance is a major lurking question, and perhaps, AI insurance may be able to cover the end-users in all such scenarios when the need for compensation arises.

Many regulators are swiftly gearing up

Since 2018, European lawmakers, legal experts, and manufacturers have been locked in a high-stakes debate: whether it is the machines themselves or human beings who should bear ultimate responsibility for the machines’ actions.

This debate refers to a paragraph of text buried deep in a European Parliament report from early 2017. It suggested that self-learning robots could be granted ‘electronic personalities.’ This status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

One of the paragraphs says, “The European Parliament calls on the commission, when carrying out an impact assessment of its future legislative instrument, to explore, analyse and consider the implications of all possible legal solutions, such as establishing a compulsory insurance scheme where relevant and necessary for specific categories of robots whereby, similarly to what already happens with cars, producers, or owners of robots would be required to take out insurance cover for the damage potentially caused by their robots.”

Challenges for AI insurance

Although the market is quickly tilting towards justifying AI insurance products, there are still a few business challenges. These challenges are clear roadblocks for any one player looking to take the initiative and lead the way.

A common pool

As insurance fundamentally works on a common pool, the first challenge is not having enough participants to start such a pool. Several AI solution companies or adopters could come together and break this barrier by contributing to the common pool.

Equitability

The second challenge is whether this common pool is equitable enough. Due to the nature of AI solutions and customer bases, not every company will have equivalent revenue or pose a comparable risk. Equitability may not be mandatory in the beginning, but it will soon become an impediment to growth if not appropriately managed.

Insurability of cloud-based AI

In the case of minors (children), parents are liable and responsible for their behaviour in the public domain; any wrongdoing by a minor results in the parents paying for it. However, once they grow up and are legally adults, responsibility shifts entirely to them.

Similarly, the liability for AI going wrong will have to shift from vendors to users over time, which may affect the annual assessment of premiums. Any update to the AI software may push this shift a few steps back towards the vendor, since the AI now has new inputs. If the AI works continuously without any updates, the liability will keep shifting gradually towards the end users.
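To make this concrete, here is a purely illustrative sketch of how such a shifting liability split might be modelled; the linear schedule, the 36-month horizon, and the reset-on-update behaviour are assumptions of mine, not an actuarial standard.

```python
# Hypothetical model of vendor vs user liability share over time.
# Assumption: liability shifts linearly from vendor to user over a fixed
# period, and every software update restarts the clock.

def vendor_liability_share(months_since_last_update: int,
                           full_shift_months: int = 36) -> float:
    """Fraction of liability still carried by the vendor."""
    elapsed = min(months_since_last_update, full_shift_months)
    return 1.0 - elapsed / full_shift_months

print(vendor_liability_share(0))   # 1.0 -> just updated, vendor carries all
print(vendor_liability_share(18))  # 0.5 -> halfway through the shift
print(vendor_liability_share(48))  # 0.0 -> fully shifted to the user
```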

However, for cloud AI (cloud-based AI solutions), this shift may not happen at all, since the vendor always remains in full control.

If customers or AI users supply the training data on an ongoing basis, liability for the solution and its outcomes would be shared.

Attribution

Of all these, however, attribution of failure might be the biggest challenge for AI insurance. Several cases discussed in this series of articles have shown how challenging and ambiguous it can be to ascertain the contributing factors of a fault across the entire AI value chain.

AI typically uses training data to make decisions. When a customer buys an algorithm from one company but uses their own training data, or buys the data from a different company, and the system doesn’t work as expected, how much of the fault lies with the algorithm and how much with the training data?

Without solving the attribution problem, an insurance proposition may not be possible.

When I interviewed several insurance industry experts over the last year, all of them insisted that a good insurance business model demands a correct risk profile and history.

Unfortunately, this history doesn’t exist yet. The issue is that it won’t exist at all if no one ever takes the initiative and makes the flywheel rotate. So the question now is—who will do it first? What might make it work?

While being first is quite the norm in the tech industry, risk-averse sectors like finance are just the opposite. So, until that changes, there might be an intermediate alternative.

How about insurers covering not the total risk but only the damage-control costs? For example, if something goes wrong with your AI system and it brings your process to a halt, you will incur two financial burdens: one on account of revenue loss and the other for fixing the system. While the revenue loss can be significant, the system-fixing costs may be relatively lower, and insurance companies may start by covering only the fixing part.
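As a rough illustration of that split, consider the sketch below; the figures and the cover cap are hypothetical and only show what a ‘fix-cost-only’ policy would and would not pay.

```python
# Hypothetical "damage-control-only" cover: the policy pays for fixing the
# system (up to a cap) but not for lost revenue. All numbers are made up.

def payout(fix_cost: float, revenue_loss: float,
           fix_cover_cap: float = 50_000.0) -> float:
    """Amount the insurer pays under a fix-cost-only policy."""
    # revenue_loss is deliberately ignored: it is not covered here.
    return min(fix_cost, fix_cover_cap)

# An outage costing 200k in lost revenue and 30k to repair:
print(payout(fix_cost=30_000, revenue_loss=200_000))  # 30000.0 paid
# The 200k revenue loss stays with the insured business.
```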

Insurers may also explore parametric insurance to cover some of the known or fixed-cost issues.

Aggregators can combine the risks of a cluster of several companies matching specific criteria. They can cover part of those risks themselves and transfer the remainder to insurers. Either way, getting this going is not a complete deadlock.

AI insurance is coming, here’s why

Implementing an AI solution means you can do things at a much larger scale. If you have been producing x widgets per day without AI, you may end up producing 1000x widgets with it.

This kind of massive scale also means that when things fail, they fail massively. The issues that 1000x failures can bring upon a business could be outrageous. They would typically not only result in revenue losses but also impose the burden of fixing the failures, making alternative arrangements while repairs are underway, and so on.

This scale is dangerous, and therefore having AI insurance would make sense. Additionally, with this option in place, people would become more responsible about developing and implementing AI solutions. It would also contribute towards the Responsible AI design and use paradigm.

More importantly, every human consumer or user will, at some point, want some level of compensation when an AI solution goes wrong. They won’t accept it if you simply say the AI is at fault. Being compensated is a fair expectation.

The question is, who will foot the compensation bill?

This question is the biggest reason why I believe AI insurance will be necessary. It may be a fuzzy concept for now but will soon be quite relevant.

Better risk management is the key

Getting AI insurance may be a good idea, but it shouldn’t be your objective. Instead, you must focus on a structured approach to development and deployment. By doing so, you can minimise risks to the extent that the AI solution is safe, useful, and under control.

If you follow my three core principles of good problem solving, i.e., doing the right thing, doing it right, and covering all your bases, it will help.

Given that you would have mitigated almost everything, or planned to, there would be hardly anything left that qualifies as residual risk. Managing residual risk is then more like dealing with the tail once the entire elephant has passed through the door.

If, however, there is any uncertainty or unknown risk in your solution, risk transfer should be far easier, as you would already have completed the required due diligence.

It is always a good idea to deal with problems and risks while they are at their smallest. A stitch in time saves nine!


Anand Tamboli is a serial entrepreneur, speaker, award-winning published author and emerging technology thought leader
