Our Views:
Changes to the Cyber Landscape – Generative Artificial Intelligence
In response to the changing cyber landscape, Lloyd’s (the global insurance and reinsurance marketplace) published “Generative AI: Transforming the Cyber Landscape”. The report represents collective insights and research from experts within the Lloyd’s market.
The report outlines significant shifts in the cybersecurity landscape driven by advances in Generative AI (GenAI) and Large Language Models (LLMs). It covers the relatively short history of GenAI and then examines the implications for businesses and insurers in depth.
We are increasingly being asked about the role insurance plays in managing cyber risk, so we have summarised the key takeaways in this month’s bulletin and will return to the topic in future months to explore some of the key issues.
Brief History of AI and LLMs
Generative AI and LLMs have made significant advances, particularly in the last 18 months, potentially altering cybersecurity dynamics. Recent efficiencies in data processing have made it easier to run models on commodity hardware, which in turn has raised concerns about unrestricted model access and the creation of harmful content by cyber attackers.
About six years ago, Google Research published a pivotal paper introducing the ‘Transformer’ architecture, revolutionising the way sequential data with complex structure is encoded, represented, and accessed. By replacing sequential processing with parallel processing, this innovation laid the foundation for the majority of generative machine learning approaches in language, vision, and audio by 2023. It allowed significantly larger datasets to be handled within the same computational budget, drastically reducing costs and enhancing models’ ability to understand and generate long-range data structures.
This breakthrough spurred rapid advancements in AI capabilities, evidenced by the development and release of models like ChatGPT and GPT-4, which demonstrated human-like proficiency across various tasks. The pace of these advancements has posed challenges for establishing effective AI governance policies among enterprises and regulatory bodies, highlighting a need for balanced approaches to ensure the technology’s safe evolution. The EU, UK, and US have each adopted different regulatory strategies, ranging from strict legislative frameworks to more principle-based or soft-law approaches, reflecting the global diversity in managing AI’s growth while safeguarding against its risks.
Transformation of Cyber Risk
Lloyd’s outlines a framework of how GenAI tools may be used by attackers (or by cyber security professionals): it identifies factors that influence cyber threats in predictable ways and assesses the potential impact that LLM technology has on each of them.
Vulnerability Discovery
Automated tools, especially those utilising LLMs, could significantly increase threat actors’ ability to discover vulnerabilities, potentially outpacing defensive tools due to asymmetric incentives.
Finding such vulnerabilities is traditionally time-consuming and requires deep technical expertise. The advent of LLMs, however, has the potential to revolutionise this process by enabling the automated discovery of exploitable vulnerabilities across challenging domains such as embedded firmware, proprietary software binaries, and hardware device drivers. This automation could significantly reduce the cost and time of identifying vulnerabilities, making cyber attacks easier, cheaper, and more effective.
LLMs can perform at-scale scans of open-source repositories, identifying vulnerabilities that would be difficult or impossible for humans to find. This creates a larger pool of vulnerabilities, offering threat actors greater flexibility in their attack strategies. AI-enhanced tools could also be used by security professionals for defensive purposes, such as threat intelligence and incident response, but the asymmetry in incentives and flexibility means that threat actors might derive more benefit from these technologies.
The potential impacts of this evolution are substantial, with the automated discovery of vulnerabilities likely to increase the frequency and severity of cyber incidents significantly. This includes the potential for cyber-physical risks, as vulnerabilities in industrial control systems are uncovered. While security teams within organisations face various constraints that may limit their ability to defend against these evolving threats, threat actors have the motivation and flexibility to exploit this new technology fully.
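To make the scale point concrete from a defender’s perspective, the sketch below shows how a security team might batch-triage a code repository with a hosted LLM. The provider, model name, and prompt are our own illustrative assumptions rather than anything prescribed in the Lloyd’s report, and any findings would still require human verification.

    # Minimal sketch: batch triage of source files for potential weaknesses
    # using a hosted LLM. Model name and prompt are illustrative assumptions.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a security reviewer. List any potentially exploitable "
        "weaknesses in the code provided, with line references, or reply 'none'."
    )

    def triage(repo_root: str) -> dict[str, str]:
        """Send each source file to the model and collect its findings."""
        findings = {}
        for path in Path(repo_root).rglob("*.py"):
            source = path.read_text(errors="ignore")[:12000]  # stay within context limits
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": source},
                ],
            )
            findings[str(path)] = response.choices[0].message.content
        return findings

The same loop illustrates the asymmetry discussed above: defenders can only act on verified findings within their own estate, whereas an attacker can point an equivalent pipeline at any publicly available code.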
Campaign Planning and Execution
The automation and fine-tuning capabilities of GenAI could enable more efficient, targeted cyber campaigns, lowering costs and expanding the scope of potential attacks.
Traditional cyber campaigns, including phishing and data breaches, require considerable human effort for tasks such as target identification, data collection, and the creation of attack materials. These requirements have historically limited the scalability and cost-effectiveness of cyber attacks.
GenAI technologies, particularly LLMs, introduce the potential to automate several aspects of campaign planning and execution. This includes the automated collection and analysis of data on potential targets, the generation of customised attack materials (such as phishing emails or fake communications), and even real-time engagement with targets without direct human intervention. LLMs can significantly reduce the resource constraints faced by threat actors, enabling them to conduct broader, more sophisticated campaigns with less effort and potentially greater effectiveness.
This automation also poses risks of impersonation and misinformation, as GenAI can produce highly convincing fake content that can be used to manipulate individuals and breach organisational networks. The National Security Agency (NSA) has highlighted these risks, underlining the serious implications of GenAI’s capabilities for cyber security.
Risk-Reward Analysis
Enhanced capabilities may embolden threat actors by improving their ability to evade detection and successfully execute their objectives.
For threat actors, the utility derived from their activities—be it financial, informational, reputational, or political—is paramount. However, achieving these goals without drawing undue attention from powerful entities like state intelligence agencies is crucial to avoid detection and countermeasures. Threat actors commonly employ techniques to obscure their digital “fingerprints,” obfuscate attack components, or safely move stolen assets, minimising the risk of attribution.
The advent of LLMs offers new tools for enhancing these obfuscation efforts. LLMs have the potential to automate the removal or alteration of identifiable characteristics in malware and cyber campaign materials, and to misdirect forensic analysis through sophisticated manipulation of digital traces. This capability could significantly improve threat actors’ ability to operate covertly, complicating cybersecurity professionals’ efforts to track and mitigate cyber threats effectively.
Single Points of Failure
The centralisation of LLMs as a service could introduce new vulnerabilities and potential for widespread impact from cyber attacks.
As LLMs become embedded in critical services and infrastructure, their centralised nature and the concentration of service providers such as OpenAI, Google, and Microsoft introduce significant vulnerabilities. This centralisation could lead to disruptions across multiple domains if these key providers are compromised. Furthermore, the widespread use of LLMs raises concerns about dataset poisoning, embedded vulnerabilities, unpredictable AI behaviour, and potential biases in AI-generated outputs. Combined, these factors could result in substantial risks, including large-scale outages, cyber-physical incidents, and market disruptions, highlighting the need for comprehensive risk management and mitigation strategies as reliance on LLM technologies increases.
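One practical illustration of reducing that dependency, sketched below under our own assumptions rather than anything specified by Lloyd’s, is to route LLM calls through a failover layer spread across independent providers. The provider functions named in the usage comment are hypothetical placeholders, not real vendor SDK calls.

    # Minimal sketch of one mitigation for single-provider dependency: a failover
    # wrapper that tries a list of independent LLM back-ends in order.
    from typing import Callable

    class AllProvidersFailed(RuntimeError):
        """Raised when every configured LLM back-end fails."""

    def with_failover(providers: list[Callable[[str], str]], prompt: str) -> str:
        """Return the first successful completion; raise if every provider fails."""
        errors = []
        for call in providers:
            try:
                return call(prompt)
            except Exception as exc:  # time-outs, outages, rate limits, etc.
                errors.append(exc)
        raise AllProvidersFailed(errors)

    # Usage (hypothetical): with_failover([primary_provider, secondary_provider], "Summarise ...")
    # where each provider function wraps a different vendor's API, ideally with
    # separate credentials, regions, and contractual terms.

A wrapper of this kind does not remove the underlying concentration risk, but it limits the blast radius of a single provider outage or compromise.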
Discussion points for your organisation
AI technologies lower the entry barrier for cybercrime, leading to an increase in vulnerabilities and potential cyber losses. This scenario necessitates more nuanced risk assessments and adjustments in insurance policies to cover emerging threats such as deepfakes.
As cyber attacks become more automated and sophisticated, there is a growing risk of cyber catastrophes, especially those linked to large-scale, systemic failures involving new LLM service providers. The insurance sector is likely to evolve its products to address these broader risks, including disruptions and data breaches stemming from AI-enhanced attacks. Your organisation needs to consider how insurance may address risks that might previously have been underserved. We encourage you to adopt comprehensive cyber defence and continuity plans to mitigate AI-related vulnerabilities, with cyber insurance playing a crucial role in supporting resilience and recovery. Finally, fostering collaboration between insurers, governments, and society is essential to creating a secure cyber environment, emphasising information sharing, supply chain resilience, and public awareness of cyber hygiene.
About the Bulletin:
The NZ Incident Response Bulletin is a monthly high-level executive summary containing some of the most important news articles that have been published on Forensic and Cyber Security matters during the last month. Also included are articles written by Incident Response Solutions, covering topical matters. Each article contains a brief summary and if possible, includes a linked reference on the web for detailed information. The purpose of this resource is to assist Executives in keeping up to date from a high-level perspective with a sample of the latest Forensic and Cyber Security news.
To subscribe or to submit a contribution for an upcoming Bulletin, please either visit https://incidentresponse.co.nz/bulletin or send an email to bulletin@incidentresponse.co.nz with the subject line either “Subscribe”, “Unsubscribe”, or if you think there is something worth reporting, “Contribution”, along with the Webpage or URL in the contents. Access our Privacy Policy.
This Bulletin is prepared for general guidance and does not constitute formal advice. This information should not be relied on without obtaining specific formal advice. We do not make any representation as to the accuracy or completeness of the information contained within this Bulletin. Incident Response Solutions Limited does not accept any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining to act, when relying on the information contained in this Bulletin or for any decision based on it.
