Summary: Artificial intelligence (AI) is poised to transform most aspects of business, including insurance, but it needs to be used responsibly, says Doug Marquis, chief technology officer at Zywave, who outlines some practical steps insurers can take to use AI safely and ethically.
Several types of artificial intelligence are already being adopted across various parts of the insurance industry, potentially opening the door to incredible efficiency gains, increased profitability, innovation and complex problem solving.
The use cases in the insurance industry for large language models (LLMs), such as those underpinning ChatGPT, are still evolving, but current examples include summarizing and generating documents, performing data analytics, and retrieving data for risk assessment and underwriting. As an insurtech company, we are exploring how AI can help us automatically generate software that exchanges data between two entities across the insurance ecosystem.
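As an illustration of the document-summarization use case, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and function are illustrative assumptions, not a description of any particular insurer's system:

```python
# Minimal sketch: summarizing an insurance document with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Model and prompt are
# illustrative choices, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_policy(document_text: str) -> str:
    """Return a short plain-language summary of a policy document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "You summarize insurance documents for underwriters. "
                        "Flag anything you are unsure about rather than guessing."},
            {"role": "user",
             "content": "Summarize the key coverages, exclusions, and limits:\n\n"
                        + document_text},
        ],
    )
    return response.choices[0].message.content
```

Note the system prompt asks the model to flag uncertainty rather than guess; as discussed below, outputs like this still need human review before they inform decisions.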
The risks of AI
However, using AI can introduce multiple risks, primarily because it is prone to error. For example, an AI may take statutory information from one US state and assume that it applies to all states, which is not necessarily the case. AI can also hallucinate, confidently presenting fabricated facts, or draw incorrect inferences from accurate source information.
AI can also be biased if it is trained on inherently biased data, producing algorithms that discriminate against groups of people based on, for example, ethnicity or gender. An AI trained on such data may perceive a higher mortality rate for one racial or ethnic group and infer that life insurance premiums should be higher for that group.
AI-induced bias also poses a risk in hiring, potentially discriminating against people from certain regions or socio-economic backgrounds. For this reason, human oversight of AI decisions remains crucial to ensure inclusion, fairness and equal opportunity.
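One common way to screen for this kind of disparity is to compare a model's favorable-outcome rates across groups. The sketch below applies the "four-fifths rule" sometimes used as a rough fairness heuristic; the 80% threshold and the record fields are illustrative assumptions, not legal guidance:

```python
# Minimal sketch: screening model decisions for disparate impact.
# The 80% ("four-fifths") threshold is a common heuristic, used here
# purely for illustration; real compliance work needs legal review.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Share of favorable outcomes per group.

    Each decision is a dict like {"group": "A", "approved": True};
    this schema is a hypothetical example, not a standard format.
    """
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions: list[dict],
                           threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose approval rate falls below threshold * best rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate < threshold * best for g, rate in rates.items()}

# Example usage with made-up data:
sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
print(disparate_impact_flags(sample))  # {'A': False, 'B': True}
```

A flag from a check like this is a prompt for the human oversight described above, not an automated verdict.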
New AI regulation
AI technology has advanced rapidly over the past two years, and regulation has lagged far behind. While lawmakers try to keep up with the rapid development of AI and the risks it may pose, insurers will need to prepare for a flood of new regulations.
Earlier this year, Colorado became the first state to pass comprehensive consumer protection legislation regulating developers and deployers of high-risk AI – systems that make, or are a substantial factor in making, consequential decisions about education, employment, financial or lending services, essential government services, health care, housing, insurance, or legal services.
Avoid algorithmic bias
The Colorado AI law, which takes effect on February 1, 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination or bias.
This means that developers will have to share certain information with deployers, such as harmful or inappropriate uses of high-risk AI systems, the types of data used to train the systems, and the risk mitigation measures taken. Developers will also have to disclose information such as the types of high-risk AI systems they have released and how they are managing the risks of algorithmic discrimination.
Meanwhile, deployers must adopt risk management policies and programs to monitor their use of high-risk AI systems, and must complete impact assessments of those systems and of any significant changes they make to them.
Transparency is needed
Colorado’s law also contains baseline transparency requirements similar to the EU AI Act, the Utah Artificial Intelligence Policy Act, and the California and New Jersey chatbot laws: consumers must be told when they are interacting with an AI system, such as a chatbot, unless that interaction is obvious, and deployers must state clearly on their websites that they use AI systems to inform consequential decisions about consumers.
Other states are likely to adopt AI regulations similar to Colorado’s in the future. However, it is important to keep in mind that many governance measures, such as risk-ranking AI systems, managing test data, and monitoring and auditing data, are already covered by other laws and regulatory frameworks in the US and around the world. Given the proliferation of legislation at all levels, the AI regulatory landscape can be expected to become even more complex in the near future. In the meantime, there are steps companies can take to help ensure they are protected.
Five practical steps for insurers
1. Transparency: With a simple disclaimer, insurers can let customers know they are using chatbots and disclose where AI informs decision-making in certain systems, including hiring.

2. Intellectual property: Insurers must protect customer data ownership when dealing with AI vendors, and must also safeguard sensitive personal data, such as medical information. At Zywave, for example, we have seen AI providers whose contracts require ownership of the data and models they are providing. Companies must be more careful than ever when reviewing contracts to ensure confidentiality, intellectual property ownership, and protection of any trade secrets placed in a vendor’s systems.

3. Appropriate data: To ensure that AI makes decisions based on accurate information, it is the company’s responsibility to give its AI systems access only to trustworthy data. At Zywave, for example, we use our own data repository, which consists of our own data, data purchased from trusted third parties, and data from public US government sites that we have obtained and vetted. Colorado’s new AI law requires companies to be able to explain how they arrived at hiring decisions and to show that those decisions are unbiased, which translates into transparency and a record of data provenance.

4. Documentation: As the number of AI products used in the insurance industry grows, it is important to meticulously document the data being used, its origins, and who owns it (see the sketch after this list). This helps companies protect themselves against accusations of copyright infringement and intellectual property theft, and against AI making mistakes based on inaccurate data retrieved from the internet.

5. Learn new skills: Insurers will need a deeper understanding of AI to comply with the regulations likely to be rolled out in the US and other countries over the next two years. New roles have already been created for prompt engineers, who craft inputs so that AI systems generate optimal answers; their work in turn needs oversight by other humans, because the information fed into these systems can be biased or pose security risks.
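As a concrete illustration of the documentation step, here is a minimal sketch of a data provenance record. The fields and values are illustrative assumptions about what such a record might capture, not a prescribed schema:

```python
# Minimal sketch: recording the provenance of datasets used by AI systems.
# The schema is a hypothetical example of the kind of record that supports
# audits and provenance questions; adapt the fields to your compliance needs.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DatasetProvenance:
    name: str                 # internal name of the dataset
    source: str               # where the data came from
    owner: str                # who holds the rights to the data
    license: str              # license or contract governing its use
    acquired_on: date         # when the data entered the repository
    vetted_by: str            # who reviewed it for accuracy and bias
    used_in: list[str] = field(default_factory=list)  # AI systems consuming it

# Example record with made-up values:
record = DatasetProvenance(
    name="state-statutes-2025",
    source="public US government sites",
    owner="example-insurer",
    license="public record, internally vetted",
    acquired_on=date(2025, 1, 15),
    vetted_by="data-governance team",
    used_in=["underwriting-assistant"],
)

# Serialize for an audit trail (default=str converts dates to ISO strings).
print(json.dumps(asdict(record), default=str, indent=2))
```

Keeping records like this in version control gives auditors, and the company itself, a durable answer to "where did this data come from and who approved it".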
Given the increased use and advancements of AI over the past few years, it seems like this technology is here to stay. While the additional management and oversight required to ensure AI is used safely and ethically may seem daunting, this new technology offers enormous business value, with the potential to dramatically improve efficiency and profitability through automation.
The benefits arguably outweigh the additional work required to develop robust AI protocols: putting strict guardrails in place will enable the insurance industry to reap the benefits of AI while remaining compliant with a rapidly evolving regulatory environment.