We have talked before about the risks of Agentic AI. Watch our video discussing the matter at TechLaw10: Agentic AI – what is it & what are the risks? Now the use of OpenClaw, an Agentic AI enabling tool, has become a cause for concern for many organisations using Agentic AI.
What is Agentic AI?
In simple terms, Agentic AI is where an organisation asks one AI application to coordinate a group of AI applications ("agents") to perform a series of tasks. Sometimes those agents are asked to perform a task autonomously, without a human in the loop, meaning that the agents choose which other agents they will use to do the job. Agentic AI can be riskier if governance processes are not in place (as we explained in the film), as organisations can lose control.
In our video, we talked on a theoretical basis about the issues of loss of control and the possible compromise of data security. Those issues have now become reality with the investigation into OpenClaw.
What is the risk?
OpenClaw used to be known as Moltbot, and before that as Clawd. It was built in late 2025 as a “weekend project” by its author, Peter Steinberger. OpenClaw quickly became very popular: Steinberger says his GitHub repository had 2 million visitors in a single week, and many developers used his code as part of their Agentic AI infrastructure. In simple terms, OpenClaw allowed different AI agents to talk to each other and to share access to systems.
On 9 February 2026, a report was published into potential vulnerabilities in OpenClaw. It identified over 42,000 unique IP addresses, across 82 countries, hosting exposed OpenClaw control panels with full system access. The report said that researchers had discovered just under 50,000 instances where a device would be vulnerable to Remote Code Execution (RCE) – effectively meaning that an attacker could use the gateway created by OpenClaw to take over the device. The research suggested that OpenClaw deployments were heavily concentrated in major cloud and hosting providers, with those based in China being the most vulnerable.
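The exposure described in the report is, at its core, an open-port problem: a control panel listening on a publicly reachable address. As a minimal, hedged sketch of how a defender might triage their own scan output (the port number and data shape here are illustrative assumptions, not documented OpenClaw defaults):

```python
# Hypothetical triage of port-scan output for exposed agent control panels.
# The port number below is an assumption for illustration only, not a
# documented OpenClaw default.

SUSPECT_PORT = 18789  # assumed control-panel port

def flag_exposed_hosts(scan_results):
    """Return hosts that are publicly reachable with the suspect port open.

    scan_results maps host -> {"public": bool, "open_ports": set of int}.
    """
    return sorted(
        host
        for host, info in scan_results.items()
        if info["public"] and SUSPECT_PORT in info["open_ports"]
    )

if __name__ == "__main__":
    scan = {
        "10.0.0.5":    {"public": False, "open_ports": {22, 18789}},
        "203.0.113.7": {"public": True,  "open_ports": {443, 18789}},
        "203.0.113.9": {"public": True,  "open_ports": {443}},
    }
    print(flag_exposed_hosts(scan))  # → ['203.0.113.7']
```

The point of the sketch is the triage logic, not the scanning itself: a host is only flagged when it is both publicly reachable and listening on the suspect port, which is the combination the report's researchers were counting.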
Depending on the OpenClaw settings used, the vulnerability would give threat actors the ability to connect to third party services such as email, calendars, chat applications and browsers.
After the initial vulnerability was highlighted, a cybersecurity firm’s investigation found a misconfigured database exposing 1.5 million authentication tokens, 35,000 email addresses and private messages between AI agents. Even though the creator of OpenClaw acknowledged the security risks and said that he developed it as “a free, open-sourced hobby project”, he seems to have tacitly acknowledged that it was not fit for the purpose that some organisations were using it for.
Can we fix this by uninstalling OpenClaw?
Probably not. For many organisations, fixing the problems with OpenClaw will not be as simple as uninstalling it; there seem to be various technical reasons why uninstalling alone is unlikely to be wholly effective. Many organisations will not know their exposure to the dangers of OpenClaw, as it may be a tool being used by people within the organisation without a proper Data Protection Impact Assessment (DPIA) or AI Impact Assessment (AIIA).
The scale of ShadowAI should not be underestimated. An October 2025 study by Microsoft showed that 71% of UK employees admitted to using unapproved AI tools at work. Given the increased spread of AI since then, and the incorporation of AI into common applications like search, that figure may well understate the problem.
OpenClaw can also be set to run with various applications. As Steinberger has said: “OpenClaw is an open agent platform that runs on your machine and works from the chat apps you already use. WhatsApp, Telegram, Discord, Slack, Teams—wherever you are, your AI assistant follows.” Manually resetting passwords and credentials for all of those applications may be a significant task.
What have regulators said?
On 12 February 2026, the Dutch DPA, the Autoriteit Persoonsgegevens (AP), warned users and organisations against the use of OpenClaw and similar experimental systems. The AP’s view was that this type of open source system does not meet basic security requirements. The AP said that users should not run OpenClaw and similar AI agents on systems holding sensitive or confidential data, including systems with access to access codes, financial administration data, employee data, private documents or identity documents. The AP also asked parents to check whether their children had installed OpenClaw on their devices at home.
In addition, the AP warned that just because OpenClaw ran locally on a user’s computer, this did not mean that the system was secure.
Are these isolated incidents?
Apparently not. As we said in our Agentic AI video, one of the issues with these types of applications is that users often don’t understand the full extent of the control they are handing over to AI. There have been reports that another popular AI application, Orchids, has experienced similar problems. Orchids is a so-called “vibe-coding tool”, meaning that people without technical knowledge can use it to build apps and games by typing a text prompt into a chatbot. Security experts have apparently said that it is very easy to hack Orchids.
Orchids claims to have a million users but allegedly has vulnerabilities that again allow a threat actor to take over a user’s device.
One of the issues highlighted by both the OpenClaw and the Orchids allegations is that the organisations creating these AI apps can be very small. In OpenClaw’s case, it seems to have started as a one-man band; Orchids apparently has fewer than 10 employees.
Practical Steps organisations can take
For many organisations, these issues are a call to action. They should look at practical steps to deal with the risk. These might include:
- Looking at technical settings. Organisations need to ensure that they can restrict the use of applications like OpenClaw on their networks. There are tools specifically designed to assess ShadowAI risk; if the organisation has those tools, it needs to ensure that OpenClaw is on the list of prohibited applications. It has been reported that it is currently not possible for users to delete an OpenClaw account, at least using common settings. If you think that your organisation has been exposed, you may want to take specialist advice on mitigating the harm.
- Check your socials. It has also been reported that OpenClaw collects X (formerly Twitter) user names, display names, passwords and similar credentials, so a threat actor might be able to use OpenClaw to gain access to the organisation’s social networking output. This again creates reputational risk and can expose the organisation to phishing attacks.
- Literacy will be key. Literacy has been a core requirement of the EU AI Act since February 2026. Our guidance on AI Literacy can be found at: What are the AI Literacy Obligations from the EU AI Act?
- Protect against ShadowAI. Whilst a literacy programme will be part of this, organisations may want to look at technical measures as well. These might include traditional software solutions like data loss prevention (DLP) software, but also specialist ShadowAI monitoring and blocking services. In today’s world, no organisation can effectively ban AI, but it can try to regulate unauthorised applications.
- Look at contracts & developer due diligence. For some organisations the issue might stem from sub-contracted developers using a quick fix to get the job done. You may wish to look at the contractual protections you have in place to meet your compliance and regulatory obligations. This might also include specific insurance policies, since developers with only a handful of employees are unlikely to have the financial means to pay out when things go wrong.
- Do a proper DPIA or AIIA. This isn’t just common sense but may well be a legal requirement. Whilst organisations want to move quickly in the new AI world, sometimes it is necessary to step back and check that the organisation’s legal and compliance obligations are being taken into account.
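The technical-settings and ShadowAI points above boil down to an inventory problem: knowing whether unapproved agent tools are present on your endpoints. As a minimal, hedged sketch (the process names in the blocklist are assumptions based on the tool’s current and former names, not verified binary names), a basic endpoint check could look like this:

```python
# Minimal ShadowAI inventory sketch: match running process names against a
# blocklist of unapproved agent tools. The names here are illustrative
# assumptions; production environments should rely on vetted DLP or
# ShadowAI monitoring tooling instead.

BLOCKLIST = {"openclaw", "moltbot", "clawd"}  # assumed process names

def find_unapproved(process_names):
    """Return the blocklisted tools found among the given process names."""
    seen = {name.lower() for name in process_names}
    return sorted(seen & BLOCKLIST)

if __name__ == "__main__":
    # In practice the process list would come from endpoint tooling
    # (e.g. psutil's process_iter on each machine).
    running = ["chrome", "OpenClaw", "slack", "python"]
    print(find_unapproved(running))  # → ['openclaw']
```

Name matching like this is deliberately crude and easy to evade; it illustrates the shape of the check, while real ShadowAI monitoring services inspect network traffic and installed software rather than process names alone.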
For More Information