NZ Incident Response Bulletin – April 2026

Local AI Agents Move to the Endpoint – A New Cyber Risk Frontier

Recent advances in artificial intelligence are shifting capability from cloud-hosted platforms to tools that run directly on user devices. While this promises performance, privacy, and autonomy benefits, it also introduces a materially different cyber risk profile that many organisations are not yet prepared for.

Recent advances in artificial intelligence are shifting capability from cloud-based tools to software that runs directly on laptops and desktops. This brings clear benefits around speed and control, but it also introduces a different kind of cyber risk that many organisations are not yet factoring in.

What’s Changed

A new generation of AI tools can now act more like a digital assistant than a simple chatbot. These tools can access files, interact with systems, and carry out tasks on behalf of a user, often with little supervision. Some are designed to run entirely on a local computer rather than through a browser. At the same time, more advanced models such as Claude Mythos Preview, developed by Anthropic, are showing just how powerful these systems are becoming. In controlled testing, they have been able to find weaknesses in software at a scale that was not previously possible, which is why access to them has been tightly restricted.

The key shift is simple. AI is no longer just responding to instructions. It is starting to take action.

Key Cyber Risks

  1. More Ways for Things to Go Wrong. These tools often need broad access to work properly. That can include company files, emails, and connected systems. The more access something has, the more damage it can do if it is misused or goes wrong.
  2. Access to Sensitive Information. Because these tools act on behalf of a user, they often inherit the same level of access. If something is misconfigured or compromised, sensitive information such as documents, passwords, or system access could be exposed.
  3. Being Tricked into Doing the Wrong Thing. These tools rely on instructions and inputs, and those can be manipulated. For example, a malicious email or document could contain hidden instructions that cause the AI tool to behave in ways the user did not intend, such as sharing information or making changes.
  4. Risk from Add-Ons and Integrations. Many of these tools can be extended with plugins or add-ons to increase their usefulness. However, each additional connection introduces risk. If one of those extensions is not trustworthy, it can become a pathway into the organisation.
  5. Faster and More Accessible Cyber Attacks. Advanced AI models are making it easier to find and exploit weaknesses in systems. What used to take highly skilled attackers significant time can now be done faster, and potentially by less experienced individuals.

What This Means for Organisations

This shift changes how organisations need to think about risk at a practical and strategic level. End user devices are no longer just passive tools used to open documents, send emails, or access systems. They are starting to become active participants in how work is carried out, with the ability to search for information, make decisions, take action, and interact with multiple systems on a user’s behalf. That creates a very different risk profile, because the device is no longer simply waiting for instructions. It may now be helping to interpret, prioritise, and carry out tasks in ways that were previously done directly by a person.

As a result, access and permissions matter more than ever. If an AI tool is installed on a local device and connected to business systems, it may be able to access the same files, messages, and services as the user. In effect, it can operate with the same authority as that individual, and in some cases at far greater speed. This means that a poor access decision, an unnecessary system connection, or an overly broad permission setting can have wider consequences than many organisations expect. What might once have been a minor oversight could now allow an AI-enabled tool to move through information, trigger actions, or expose data at scale.

At the same time, harmful activity may not look obviously malicious. One of the more difficult aspects of this risk is that the behaviour of these tools can appear entirely normal on the surface. Opening files, sending messages, searching folders, or accessing systems may all look like legitimate business activity, because in many cases they are the same kinds of actions a user would normally perform. That makes it harder for organisations to distinguish between acceptable use, poor use, manipulated behaviour, and actual compromise. In other words, the warning signs may be far less visible than in more traditional cyber incidents.

Adding to this, the speed at which cyber threats can develop and spread is increasing. AI tools can process information, carry out tasks, and respond to prompts much faster than a person. That same speed can work against an organisation if something goes wrong. A mistake, a malicious instruction, or an exploited weakness can lead to rapid consequences before staff have time to recognise the issue and intervene. This compresses response time and places greater pressure on monitoring, governance, and decision-making.

Taken together, the environment is becoming more complex and the margin for error is smaller. Organisations are not simply dealing with another software product. They are dealing with technology that can act, interact, and influence outcomes inside the business. That means cyber risk management needs to evolve accordingly, with stronger attention to oversight, access control, user awareness, and organisational readiness.

Managing the Risk

Organisations do not need to avoid these tools, but they do need to manage them carefully. The key is to treat them as powerful, high-impact software rather than everyday applications.

  1. Control Where They Are Used. Not every device or user should have access to these tools. Organisations should limit their use to approved individuals or teams, maintain visibility over what tools are in use, and ensure they are reviewed before being rolled out more broadly.
  2. Limit What They Can Access. These tools should only have access to what they genuinely need to perform their function. This means restricting access to sensitive files and systems, avoiding full access where partial access is sufficient, and keeping critical systems separate from general use environments.
  3. Keep Them Contained. Where possible, these tools should be run in controlled environments rather than directly on a user’s primary device. Using separate environments or dedicated machines helps prevent direct access to critical systems and reduces the overall impact if something goes wrong.
  4. Watch What They Do. It is important to understand how these tools are being used in practice. Organisations should keep records of their activity, monitor for unusual or unexpected behaviour, and review usage regularly. This is less about detecting known threats and more about identifying behaviour that does not look right.
  5. Be Careful with Add-Ons. Extensions and integrations should be tightly controlled. Only approved plugins should be allowed, their sources should be verified, and automatic installations should be avoided to reduce the risk of introducing untrusted components.
  6. Build Awareness. Users need to understand that these tools can be influenced by the information they are given. External content should be treated with caution, untrusted information should not be fed into these tools, and people should be encouraged to question outputs or actions that do not seem right.
  7. Update Response Plans. Organisations should be prepared for the possibility that something may go wrong. Response plans should include scenarios involving AI tools, ensure there is sufficient information available to understand what occurred, and allow for tools to be quickly disabled or isolated if required.

Closing Thought

Local AI tools are powerful, and they will become more common. They can improve productivity, but they also introduce new ways for things to go wrong. The important shift is this. Organisations are no longer just managing people and systems. They are managing software that can act on its own. That changes the risk, and it needs to be treated accordingly.

About the Bulletin:

The NZ Incident Response Bulletin is a monthly high-level executive summary containing some of the most important news articles that have been published on Forensic and Cyber Security matters during the last month. Also included are articles written by Incident Response Solutions, covering topical matters. Each article contains a brief summary and if possible, includes a linked reference on the web for detailed information. The purpose of this resource is to assist Executives in keeping up to date from a high-level perspective with a sample of the latest Forensic and Cyber Security news.

To subscribe or to submit a contribution for an upcoming Bulletin, please either visit https://incidentresponse.co.nz/bulletin or send an email to bulletin@incidentresponse.co.nz with the subject line either “Subscribe”, “Unsubscribe”, or if you think there is something worth reporting, “Contribution”, along with the Webpage or URL in the contents. Access our Privacy Policy.

This Bulletin is prepared for general guidance and does not constitute formal advice. This information should not be relied on without obtaining specific formal advice. We do not make any representation as to the accuracy or completeness of the information contained within this Bulletin. Incident Response Solutions Limited does not accept any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining to act, when relying on the information contained in this Bulletin or for any decision based on it.