NAIC


Artificial Intelligence

Background

Last Updated 4/3/2026 

Artificial intelligence (AI) is a type of technology that allows computer systems to perform tasks that usually require human intelligence. These tasks include analyzing data, images, video, and sound, summarizing information, and generating text or other content. 

AI is now used across many industries, including insurance. Its growth has been driven by the availability of large amounts of data, faster and cheaper computing power, cloud technology, and tools that can generate human-like text. As consumers have become used to fast, digital services in other areas of their lives, they increasingly expect the same speed and convenience from insurance companies. 

In the insurance industry, AI is used in areas such as underwriting, pricing, customer service, claims handling, marketing, and fraud detection. For example, some insurers use AI-powered chatbots to answer common customer questions, provide basic information, and assist with simple transactions at any time of day. AI is also used in claims processing, where it can help estimate repair costs or assess damage using photos and historical data. 

AI has improved at handling tasks that were once difficult for computers, such as recognizing images, understanding written and spoken language, and analyzing large amounts of unstructured data like text, images, and video. Insurers collect many types of data, and AI tools can analyze this information quickly to identify patterns, manage risk, and support decision-making. 

AI may also help insurers move from a "detect and repair" approach, responding after a loss occurs, to a "predict and prevent" approach. This could include identifying risks earlier and helping customers take steps to avoid losses. 

One newer form of AI is the large language model (LLM), which is designed to understand and generate human language. Tools like Copilot, ChatGPT, Claude, and Gemini can answer questions, summarize documents, and assist with writing tasks. While these tools can be useful, they have limitations. They do not truly understand context and meaning the way humans do and may generate information that sounds accurate but is incorrect. For this reason, AI-generated information should be reviewed carefully, especially when used for important decisions. 

AI may change how work is done in insurance, but it is more likely to support human workers than replace them entirely. Actuaries, underwriters, claims professionals, agents, and customer service representatives still play an important role in reviewing information, exercising judgment, and working directly with consumers. 

When insurers use AI, they remain responsible for complying with insurance laws, regulations, standards, and consumer protection rules. This includes requirements related to fairness, accuracy, and avoiding unfair discrimination. State insurance regulators oversee insurers' use of AI and may require companies to explain how these tools are used in underwriting, pricing, marketing, or claims decisions. Human oversight remains an important part of insurance decision-making. 

Actions

The NAIC formed the Innovation, Cybersecurity, and Technology (H) Committee in 2021 (formerly the Innovation and Technology (EX) Task Force) to explore technological developments in the insurance sector. The Committee provides a forum for state insurance regulators to discuss innovation and technology developments and how they will affect consumer protection, insurer and producer oversight, and the state insurance regulatory framework. The Committee is also charged with discussing emerging issues related to insurers or licensees leveraging new technologies, such as artificial intelligence. 

In 2019, the Task Force established the Big Data and Artificial Intelligence (H) Working Group to study the development of artificial intelligence, its use in the insurance sector, and its impact on consumer protection and privacy, marketplace dynamics, and the state-based insurance regulatory framework. The Working Group developed regulatory principles on artificial intelligence that were adopted by the full NAIC membership at the 2020 Summer National Meeting. 

Beginning in 2021, the Working Group began surveying insurers by line of business to learn how AI and machine learning techniques are currently being used and what governance and risk management controls are in place. Reports of the aggregate responses from the samples of private passenger auto, homeowners, life insurance, and health insurance companies were issued in December 2022, August 2023, December 2023, and May 2025, respectively. The responses from the surveys revealed the following: 

  • Of the 193 auto insurers responding, 88% reported that they use, plan to use, or plan to explore AI/ML models in their operations. Of the 194 home insurers responding, the figure was 70%. Among the 161 life companies that responded, it was significantly lower, at 58%, while 92% of the 93 responding health insurers said they currently use, plan to use, or plan to explore AI or ML models in their operations. 

  • P&C insurers reported using AI across all operations in a variety of ways. In marketing, common use cases included targeted online advertising and making offers to existing customers. In underwriting, AI was used for renewal evaluations and for inspections to verify policy characteristics. In pricing, machine learning was used for risk scoring and for determining rate factor relativities. In claims, AI was used for accident image analysis, estimating ultimate claim settlement values, and fraud detection. 

  • Similar to P&C, life insurers are using AI for targeted online advertising and making offers to existing customers. Life insurers are also using AI to reduce policy issuance time, for approval/denial decisions, and to assign underwriting risk classes. Models were used to automate, augment, and support human decision-making depending on their use. 

  • Health insurers reported using AI in strategic operations, contracting processes, prior authorizations, fraud detection, product pricing and plan design, data processing, risk adjustment and modeling of risk adjustment factors, sales and marketing, risk management, and claims adjudication. 

  • Roughly half of the models used for marketing were developed by third-party vendors, but for pricing and underwriting, auto and home insurers mostly developed their models in-house. 

Following analysis of this data, the Working Group is considering the need for clarification of, or additions to, insurance laws and regulations. In 2024, the Third-Party Data and Models (H) Task Force, since renamed a Working Group, was formed to evaluate and develop a regulatory framework around the use of third-party AI data and models by insurance companies. This Working Group is currently developing a third-party data and models regulatory framework to streamline the process of gathering the information regulators need to effectively evaluate third-party data and models used by insurance companies. 

During the time the surveys were issued and responses were compiled, the NAIC developed the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, which was adopted in December 2023. The bulletin establishes guidelines and expectations to ensure responsible use of AI by insurance companies, in alignment with the NAIC Principles of Artificial Intelligence. It reminds insurers that decisions or actions made or supported by AI must comply with all applicable insurance laws and regulations, sets forth expectations for how insurers will govern the use of AI, and advises insurers of the type of information the Department may request during an investigation or examination. 

In 2025 and 2026, the Big Data and Artificial Intelligence (H) Working Group has been developing the AI Systems Evaluation Tool, a guide for regulators in a market conduct, financial analysis, or financial exam context. The Tool is designed to gather information about the extent of an insurance company's use of AI in its operations, its governance and risk mitigation practices, potentially high-risk AI models, and the types of data used as inputs into AI systems. As of March 2026, the Tool is being piloted by 12 participating states, which will provide feedback and insights into its effectiveness. Based on experiences during the pilot process, it is anticipated the Tool will be adopted at the 2026 Fall National Meeting. 
