The generative AI revolution is moving rapidly, and with the launch of OpenAI’s Agent functionality, businesses are being presented with a new frontier. This feature allows ChatGPT to operate as a virtual assistant that can autonomously browse websites, fill out forms, download files and carry out multi-step tasks. However, incidents in recent years continue to highlight dangers as AI adoption grows, and introduces fresh professional indemnity (PI) and cyber risk exposures.
In early 2023, three Samsung engineers reportedly entered proprietary source code and confidential meeting notes into ChatGPT; the information then became part of the model’s training data. The Australian Cyber Security Centre received more than seventy-six thousand cybercrime reports between July 2021 and June 2022 – that is one report every seven minutes. The average cost of an incident was over AU$39,000 for small businesses. Given this backdrop, organisations must consider how autonomous AI tools fit into their risk management frameworks.
Understanding ChatGPT agents
Unlike earlier versions of ChatGPT, which simply responded to prompts, the new agent runs in a sandboxed virtual browser and can perform actions on a user’s behalf. In practical terms, the agent can:
• fill out web forms
• navigate websites
• download files
• execute sequences of tasks
• make real-time decisions under user supervision
This shift from ‘AI as adviser’ to ‘AI as actor’ blurs the line between advice and action. It means any mistakes, omissions or oversights made by the tool could translate into professional liability.
Professional indemnity risks
Delegated decision-making – When an AI agent completes tasks such as entering client data, managing spreadsheets or drafting proposals, any errors or omissions still belong to the business. Clients may allege negligence if the work is inaccurate.
Misrepresentation – Agents draw data from external websites. If the AI misinterprets a regulation or pulls outdated information, it could give rise to misstatements that expose the firm to claims.
Lack of oversight – Without a documented approval workflow, staff may not have visibility into what the agent is doing. In PI terms, failing to supervise the tool’s output could be seen as failing to take reasonable care.
Cyber liability risks
Prompt injection and manipulation – Attackers can craft malicious inputs that override the model’s instructions. Security researchers note that prompt-injection attacks can coerce a model into revealing confidential data or producing dangerous responses.
Data leakage – The Samsung example illustrates how entering proprietary material into ChatGPT can result in unintended disclosure. Because the model uses inputs for training, sensitive data may become exposed to other users.
Supply-chain vulnerabilities – Agents interact with third-party websites and tools. A breach in one of those services can have knock-on effects, drawing the agent into a wider incident.
Risk management considerations
Governance and policies – Update your organisation’s acceptable-use policy to address AI tools. Define who can employ agents, for what tasks, and establish a supervision protocol with clear checkpoints.
Technical controls – Use agents in isolated environments that do not have access to core systems or client databases. Restrict permissions to the minimum necessary and enable logging so you can audit activity.
Legal and insurance readiness – Review your PI and cyber insurance policies to ensure that AI-driven activities are not excluded. Seek endorsements or clarifications from insurers if needed and consider adding an AI errors-and-omissions clause. Australian directors remain legally accountable for cyber risk; ASIC’s guidance makes it clear that boards must implement robust risk management strategies and demonstrate due diligence.
Where the insurance industry stands
Insurers have only started to address autonomous AI. Policy wordings are likely to evolve to include specific clauses dealing with AI-operated systems and to impose duty-of-care requirements for agent supervision. Given the increasing number of data-breach notifications and rising costs, underwriters may ask more detailed questions about how you manage AI in your operations. Being able to demonstrate clear governance and technical controls will make it easier to secure cover at favourable terms.
Final thoughts and KBI’s perspective
AI agents are an exciting development and will play an important role in future business processes. However, they must be adopted thoughtfully.
The golden rule remains unchanged: technology does not replace responsibility.
Business leaders should weigh the benefits of automation against the potential for professional liability and data-privacy incidents. Incorporating AI risks into your risk management framework and ensuring your PI and cyber policies are suitable will help you unlock the benefits while staying protected.
Especially now, it helps to have good advice in your corner.
If you would like an expert opinion on how your PI or cyber policies respond to AI-related exposures, KBI is here to help. Our brokers understand the unique challenges facing Australian businesses and can guide you through policy reviews, coverage options and best practices. Contact us today to discuss how tailored insurance and robust governance can keep your organisation safe while you explore the possibilities of ChatGPT agents.