This Defense Company Made AI Agents That Blow Things Up
Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate decision-making. The big difference is that instead of writing code, answering email, or buying stuff online, Scout AI's agents are designed to search for and destroy things in the physical world with exploding drones.
In a recent demonstration, held at an undisclosed military base in central California, Scout AI's technology was put in charge of a self-driving off-road vehicle and a pair of deadly drones. The agents used these systems to find a truck hiding in the area, then blew it to pieces with an explosive charge.
“We need to bring the next generation of AI to the military,” Colby Adcock, CEO of Scout AI, told us in a recent interview. (Adcock's brother, Brett Adcock, is the CEO of Figure AI, a startup working on humanoid robots). “We take a hyperscaler foundation model and we train it to go from being a generalized chatbot or agent assistant to being a warfighter.”
Adcock's company is part of a new generation of startups racing to adapt technology from big AI labs for the battlefield. Many policymakers believe that the use of AI will be the key to future military dominance. The combat potential of AI is one reason why the US government has tried to limit sales of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to loosen those controls.
“It's good for defense technology startups to push the environment with AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “That's exactly what they need to do if the US is going to lead in military adoption of AI.”
However, Horowitz also notes that harnessing the latest AI capabilities in practice may prove particularly difficult.
Large language models are inherently unpredictable, and AI agents, like those behind the popular AI assistant OpenClaw, can misbehave when given even relatively benign tasks such as ordering goods online. Horowitz says it can be especially difficult to prove that such systems are robust from a cybersecurity standpoint, something that would be necessary for widespread military use.
Scout AI's recent demo featured several steps where AI had free rein over combat systems.
At the start of the mission, an operator entered a command into a Scout AI system known as Fury Orchestrator.
A relatively large AI model with more than 100 billion parameters, which can run on a secure cloud platform or on an on-site air-gapped computer, interprets the first command. Scout AI uses an undisclosed open source model with its limitations removed. This model then acts as an agent, issuing commands to smaller models with 10 billion parameters that run on ground vehicles and the drones involved in the exercise. The smaller models also act as agents themselves, issuing their own commands to lower-level AI systems that control the vehicles' movements.
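The command chain described above, a large orchestrator model delegating to smaller onboard agents, which in turn drive low-level vehicle controllers, can be sketched as a simple three-tier hierarchy. The sketch below is purely illustrative: the class names, tasks, and hard-coded plans are hypothetical stand-ins, not Scout AI's actual software or model behavior.

```python
# Illustrative three-tier agent hierarchy (hypothetical names and logic;
# not Scout AI's real system). A top-level "orchestrator" decomposes a
# mission into tasks for per-vehicle agents, which issue primitive
# commands to their vehicle controllers.

from dataclasses import dataclass, field


@dataclass
class Controller:
    """Lowest tier: executes primitive motion commands on one platform."""
    platform: str
    log: list = field(default_factory=list)

    def execute(self, command: str) -> str:
        self.log.append(command)
        return f"{self.platform}: executed '{command}'"


@dataclass
class VehicleAgent:
    """Middle tier: stands in for a small (~10B-parameter) onboard model."""
    name: str
    controller: Controller

    def handle(self, task: str) -> list:
        # A real agent would plan with a model; here a fixed lookup table
        # maps each task to a sequence of primitive commands.
        plan = {
            "drive to waypoint": ["start engine", "follow road", "stop"],
            "search area": ["launch", "fly grid pattern", "report contacts"],
        }.get(task, [task])
        return [self.controller.execute(step) for step in plan]


class Orchestrator:
    """Top tier: stands in for the large (100B+ parameter) model."""

    def __init__(self, agents: dict):
        self.agents = agents

    def run_mission(self, mission: str) -> list:
        # Hard-coded decomposition standing in for model inference.
        tasking = [("ugv", "drive to waypoint"), ("drone1", "search area")]
        results = []
        for agent_name, task in tasking:
            results.extend(self.agents[agent_name].handle(task))
        return results


ugv = VehicleAgent("ugv", Controller("ground-vehicle"))
drone = VehicleAgent("drone1", Controller("drone-1"))
orchestrator = Orchestrator({"ugv": ugv, "drone1": drone})
report = orchestrator.run_mission("find the hidden truck")
print(len(report))  # six primitive commands executed across both platforms
```

The point of the structure is that each tier translates intent into progressively more concrete commands, so no single model has to reason from mission text all the way down to motor control.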
Seconds after receiving its marching orders, the ground vehicle zipped along a dirt road winding between brush and trees. A few minutes later, it came to a stop and sent the pair of drones flying into the area where the target had been instructed to wait. After spotting the truck, an AI agent riding one of the drones gave the order to fly toward it and detonate an explosive charge just before impact.