Curiosity, experimentation, and integration of artificial intelligence (AI) within businesses can significantly amplify cyber event risks and business liability. Whether these risks stem from malicious intent or accidental occurrences, it’s crucial to consider them and plan the necessary steps to safeguard your business and customer data.
Here are five risks to consider when integrating AI tools into your business:
1. Software Supply Chain Risk
A recent report by risk and reinsurance specialists Guy Carpenter & Company LLC underscores the significant threat that artificial intelligence poses to your software supply chain.
So, what does that mean?
For businesses opting to use third-party AI solutions to interact with customers or for efficiencies, such as ChatGPT or similar, the AI tools can introduce significant risks when compromised, such as:
- Outages
- Accessibility Issues
- Data Breaches
- Malicious downloads
The report cites that ChatGPT suffered several outages during 2023 and 2024, which affected thousands of users.
2. Open New Pathways for Attack
Once a business introduces an artificial intelligence solution that customers or staff interact with, whether a chatbot, online calculator, or other customisable AI tool, the tool receives and sends information. That data transfer process provides a gateway to malicious or accidental manipulation, known as ‘Jailbreaking’.
What is jailbreaking? You may well ask. It’s a common term for tricking an artificial intelligence model into behaving outside its intended scope. Jailbreaking may lead to cyber events such as:
- Data breaches
- Loss of access
- Network Breaches
- Incorrect information to customers
These events can happen, and the report cites several examples of cyber events due to jailbreaking:
- In 2024, attackers used a vulnerability in an open-source library to steal information from users’ Chat GPT interactions.
- Air Canada paid damages to a passenger after the airline’s Chatbot gave incorrect information to the passenger.
- You can read many more examples here.
3. Data Privacy Threat
Artificial Intelligence tools require training, which means their behaviour and output are only as good as their training and data inputs.
Training these tools can be labour and time-intensive, so companies rely on third parties or in-house teams to fine-tune the system. It also means companies allow access to large quantities of often critical or sensitive data to assist the training process.
When it comes to artificial intelligence, the more customers and data it can learn from, the better its performance. Therefore, developers like to centralise and aggregate data, which means your information may be in a communal pot. As you can imagine, the risks multiply as data is stored, replicated and exposed while training the AI tools.
For example:
- Artificial Intelligence researchers accidentally exposed 38 terabytes of customer data through a misconfiguration in September 2023.
- A data storage and processing vendor, Snowflake, experienced a breach in April 2024, affecting over 200 customers and millions of users.
Therefore, it’s of utmost importance to carefully consider how your data will be utilised and stored. Therefore, it’s critical to establish robust systems to protect your business and customer data from potential AI-related threats.
4. Cyber Security and AI
Artificial Intelligence is fast becoming a popular tool for cyber security operations, particularly solutions such as CrowdStrike, which rely on high-level access to work effectively. However, what are the risks to your business?
As dependency on artificial intelligence increases, so does the risk of errors, misconfiguration, or unwanted access.
Likewise, as the AI system learns to block malicious attacks, it can cut off network access, reset credentials, or quarantine systems, making it difficult to resolve cyber events such as ransomware attacks and shut out users.
For example, the Microsoft Azure outage in July 2024 went down for 10 hours following a Distributed Denial of Service (DDoS) attack affecting banks and businesses globally following a cyberattack.
A DDoS attack targets websites and servers to exhaust an application’s resources to knock it offline or impact its performance. To do this, attackers flood a site with traffic or use bots to increase traffic. It’s one of the most common cyber threats that could compromise your business, online security, sales and reputation.
For businesses considering AI-based security solutions, while they can be effective tools, it’s essential to research vendors and ensure adequate boundaries are in place to protect critical data.
5. Insurance Considerations
As with any other business purchase decision, it’s vital to consider your risks and exposure for insurance purposes when integrating artificial intelligence systems into your business.
Insurers will want to know:
- You have robust data protection processes in place
- How the artificial intelligence is being developed, deployed and or tested
- How secure the data will be and whether the vendor will centralise the data for others to access
- The type of access the artificial intelligence will have to your systems
- How access and permissions will be managed and monitored
The answers to these questions will help insurers determine whether they underwrite the risk and the cost of the insurance premiums.
If you would like help understanding your business risks and how to transfer risk to insurance contact the Clear Insurance team today.