Meta and Other Tech Companies Ban OpenClaw Over Cybersecurity Concerns

0
business-news-2-768x548.jpg


Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. “You've probably seen Clawdbot trending on X/LinkedIn. While it's cool, it's currently unvetted and high risk for our environment,” he wrote in a Battle message with a red siren emoji. “Please keep Clawdbot off all company hardware and away from work-linked accounts.”

Grad isn't the only tech that has raised concerns among staff about the experimental agent AI tool, which was briefly known as MoltBot and is now called OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or lose their jobs. The executive told reporters that he believes the software is unpredictable and could lead to a privacy breach when used in otherwise safe environments. He spoke on condition of anonymity to speak candidly.

Peter Steinberger, the solo founder of OpenClaw, launched it as a free, open source tool last november Mar his popularity increased last month as other coders contributed functions and began to share their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.

OpenClaw requires basic knowledge of software engineering to set up. After that, it needs only limited direction to take control of a user's computer and interact with other apps to help with tasks like organizing files, performing web research, and shopping online.

Some cybersecurity professionals have publicly urge companies to take measures to strictly control how their employees use OpenClaw. And the recent bans show how companies are moving quickly to ensure that security becomes a priority for their desire to experiment with emerging AI technologies.

“Our policy is, 'mitigate first, investigate second' when we encounter anything that could be harmful to our business, users or clients,” says Grad, who is co-founder and CEO of Massive, which provides Internet proxy tools for millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.

At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on Jan. 29 on an internal Slack channel to share new tech to potentially try. The president of the company quickly responded that the use of OpenClaw was strictly prohibitedValere CEO Guy Pistone tells WIRED.

“If it gained access to one of our developers' machines, it could access our cloud services and our clients' sensitive information, including credit card information and GitHub codebases,” says Pistone. “It's pretty good to clean up some of their actions, which also scares me.”

A week later, Pistone had Valere's research team run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later recommended restricting who can issue commands to OpenClaw and exposing it only to the Internet with a password in place for its control panel to prevent unwanted access.

In a report shared with WIRED, the Valere researchers added that users should “accept that the bot can be tricked.” For example, if OpenClaw is configured to digest a user's email, a hacker could send a malicious email to the person instructing the AI ​​to share copies of files on the person's computer.

But Pistone is confident that security measures can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. “If we think we can't do it in a reasonable amount of time, we'll forget about it,” he says. “Whoever figures out how to make it safe for businesses will definitely have a winner.”



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *