As artificial intelligence transforms the way videogames are built, marketed, and played, it’s also introducing a wave of uncertainty for insurers tasked with underwriting the creative and technical risks behind these productions. From unpredictable player experiences and intellectual property concerns to cyber threats amplified by machine learning, AI is reshaping the risk landscape in ways the industry is only beginning to understand.
According to Marco Andolfatto, chief underwriting officer at Apollo, this shift is forcing insurers to rethink traditional assumptions about development teams, liability exposures, and how external actors may weaponize the very technologies meant to enhance gameplay.
AI is already beginning to upend traditional production workflows, Andolfatto said. From procedural generation of assets to intelligent behavior design, AI tools are enabling developers to do more with fewer resources. But that efficiency comes at a cost.
Microsoft’s Xbox division has already laid off more than 2,000 employees, a sign of the broader transformation underway in game development, Andolfatto said. As AI tools become more integrated into production workflows, he explained, studios are reassessing their financial models and reducing reliance on large development teams.
“They're already contemplating and already seeing a future where you need fewer developers to develop the game because of AI tools,” he said.
He compared the shift to the early days of robotics in manufacturing: transformative, but disruptive. For underwriters, that raises fresh questions. How do you assess the risk profile of a production team that's been radically downsized? What happens when AI-generated content introduces unintended or legally ambiguous elements into a final product?
One of the more novel risks is tied to unpredictability – specifically, how AI-driven systems might shape a player’s experience in ways developers didn’t intend.
“The more you use AI, the more unpredictable the gamer's interaction with the game can be,” Andolfatto said. If AI produces content or communication that runs counter to what the developers intend, it creates a potential liability, he explained.
This, Andolfatto said, is especially concerning in games that incorporate generative dialogue or user-personalized experiences, where the outputs may not be fully vetted before reaching players.
That lack of control can also result in reputational fallout if AI-generated content is offensive, misleading, or too similar to copyrighted material owned by others.
Andolfatto also warned that AI is intensifying the legal risks studios already face. While concerns around intellectual property have long existed in gaming, the use of AI tools adds new complexity.
In some cases, entire segments of a game may need to be stripped and rebuilt due to their similarity to other content – a costly and disruptive scenario that has already played out in the industry.
As AI tools become more prevalent, Andolfatto said, legal departments and insurers will need to stay ahead of the curve to help studios navigate both the creative and compliance-related pitfalls.
While studios weigh the internal risks of AI use, the technology is also reshaping the threat landscape from the outside. According to Andolfatto, the growing sophistication of AI tools in the hands of cybercriminals is creating new vulnerabilities that insurers and developers must watch closely.
AI enables malicious actors to mimic parts of an organization’s digital footprint, from login pages to player environments – effectively using a studio’s own brand as bait.
“Cybercriminals can use AI essentially to mimic different elements of the organization or different elements of a game itself, and create vulnerabilities within that organization or within the customer experience,” he said.
This opens the door to sophisticated phishing campaigns and spoofed interfaces that could lure players into handing over sensitive information, infect their devices, or reroute them into compromised networks.
“A hacker or hacking group could use AI to mimic elements of a game and then drive certain gamers or customers to an area ... for the purposes of hacking them,” he said.
What’s especially concerning, Andolfatto noted, is that these attacks may not stem from flaws in the game itself or its internal use of AI – but rather from external actors exploiting AI as a tool for deception.
“AI increased the threat outside versus making them more vulnerable inside,” he said.