A fleet of robot ships bobs gently in the warm waters of the Persian Gulf, somewhere between Bahrain and Qatar, maybe 100 miles off the coast of Iran. I am on the deck of a nearby US Coast Guard speedboat, squinting off what I understand is the port side. On this morning in early December 2022, the horizon is dotted with oil tankers and cargo ships and tiny fishing dhows, all shimmering in the heat. As the speedboat zips around the robot fleet, I long for a parasol, or even a cloud.
The robots do not share my pathetic human need for shade, nor do they require any other biological amenities. This is evident in their design. A few resemble typical patrol boats like the one I’m on, but most are smaller, leaner, lower to the water. One looks like a solar-powered kayak. Another looks like a surfboard with a metal sail. Yet another reminds me of a Google Street View car on pontoons.
These machines have mustered here for an exercise run by Task Force 59, a group within the US Navy’s Fifth Fleet. Its focus is robotics and artificial intelligence, two rapidly evolving technologies shaping the future of war. Task Force 59’s mission is to swiftly integrate them into naval operations, which it does by acquiring the latest off-the-shelf tech from private contractors and putting the pieces together into a coherent whole. The exercise in the Gulf has brought together more than a dozen uncrewed platforms—surface vessels, submersibles, aerial drones. They are to be Task Force 59’s distributed eyes and ears: They will watch the ocean’s surface with cameras and radar, listen beneath the water with hydrophones, and run the data they collect through pattern-matching algorithms that sort the oil tankers from the smugglers.
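The sorting step the task force describes can be pictured as a simple scoring of track features. The sketch below is a deliberately crude illustration with invented feature names and thresholds; it reflects nothing about Task Force 59's actual algorithms, which are not public.

```python
# Toy illustration of the kind of pattern matching described above:
# reduce each vessel's track to a few features and score it against
# crude behavioral profiles. All names and thresholds are invented.

def classify_track(avg_speed_kts, heading_changes_per_hr, transponder_on):
    """Return a coarse label for a surface contact."""
    if transponder_on and avg_speed_kts > 10 and heading_changes_per_hr < 2:
        return "likely tanker/cargo"    # steady, fast, broadcasting its position
    if not transponder_on and heading_changes_per_hr > 6:
        return "possible smuggler"      # dark, erratic maneuvering
    return "unknown -- flag for human review"

print(classify_track(14, 1, True))      # a steady, transponder-on track
print(classify_track(22, 9, False))     # a dark, erratic track
```

In practice the real systems would learn such profiles from data rather than hand-code them, but the shape of the problem is the same: features in, label out, with humans reviewing the ambiguous middle.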
A fellow human on the speedboat draws my attention to one of the surfboard-style vessels. It abruptly folds its sail down, like a switchblade, and slips beneath the swell. Called a Triton, it can be programmed to do this when its systems sense danger. It seems to me that this disappearing act could prove handy in the real world: A couple of months before this exercise, an Iranian warship seized two autonomous vessels, called Saildrones, which can’t submerge. The Navy had to intervene to get them back.
The Triton could stay down for as long as five days, resurfacing when the coast is clear to charge its batteries and phone home. Fortunately, my speedboat won’t be hanging around that long. It fires up its engine and roars back to the docking bay of a 150-foot-long Coast Guard cutter. I head straight for the upper deck, where I know there’s a stack of bottled water beneath an awning. I size up the heavy machine guns and mortars pointed out to sea as I pass.
The deck cools in the wind as the cutter heads back to base in Manama, Bahrain. During the journey, I fall into conversation with the crew. I’m eager to talk with them about the war in Ukraine and the heavy use of drones there, from hobbyist quadcopters equipped with hand grenades to full-on military systems. I want to ask them about a recent attack on the Russian-occupied naval base in Sevastopol, which involved a number of Ukrainian-built drone boats bearing explosives—and a public crowdfunding campaign to build more. But these conversations will not be possible, says my chaperone, a reservist from the social media company Snap. Because the Fifth Fleet operates in a different region, those on Task Force 59 don’t have much information about what’s going on in Ukraine, she says. Instead, we talk about AI image generators and whether they’ll put artists out of a job, about how civilian society seems to be reaching its own inflection point with artificial intelligence. In truth, we don’t know the half of it yet. It has been just a day since OpenAI launched ChatGPT, the conversational interface that would break the internet.
Glimmerings of autonomous technology have existed in the US military for decades, from the autopilot software in planes and drones to the automated deck guns that protect warships from incoming missiles. But these are limited systems, designed to perform specified functions in particular environments and situations. Autonomous, perhaps, but not intelligent. It wasn’t until 2014 that top brass at the Pentagon began contemplating more capable autonomous technology as the solution to a much grander problem.
Bob Work, a deputy secretary of defense at the time, was concerned that the nation’s geopolitical rivals were “approaching parity” with the US military. He wanted to know how to “regain overmatch,” he says—how to ensure that even if the US couldn’t field as many soldiers, planes, and ships as, say, China, it could emerge victorious from any potential conflict. So Work asked a group of scientists and technologists where the Department of Defense should focus its efforts. “They came back and said AI-enabled autonomy,” he recalls. He began working on a national defense strategy that would cultivate innovations coming out of the technology sector, including the newly emerging capabilities offered by machine learning.
This was easier said than done. The DOD got certain projects built—including Sea Hunter, a $20 million experimental warship, and Ghost Fleet Overlord, a flotilla of conventional vessels retrofitted to perform autonomously—but by 2019 the department’s attempts to tap into Big Tech were stuttering. The effort to create a single cloud infrastructure to support AI in military operations became a political hot potato and was dropped. A Google project that involved using AI to analyze aerial images was met with a storm of public criticism and employee protest. When the Navy released its 2020 shipbuilding plan, an outline of how US fleets will evolve over the next three decades, it highlighted the importance of uncrewed systems, especially large surface ships and submersibles—but allocated relatively little money to developing them.
In a tiny office deep in the Pentagon, a former Navy pilot named Michael Stewart was well aware of this problem. Charged with overseeing the development of new combat systems for the US fleet, Stewart had begun to feel that the Navy was like Blockbuster sleepwalking into the Netflix era. Years earlier, at Harvard Business School, he had attended classes given by Clay Christensen, an academic who studied why large, successful enterprises get disrupted by smaller market entrants—often because a focus on current business causes them to miss new technology trends. The question for the Navy, as Stewart saw it, was how to hasten the adoption of robotics and AI without getting mired in institutional bureaucracy.
Others at the time were thinking along similar lines. That December, for instance, researchers at RAND, the government-funded defense think tank, published a report that suggested an alternate path: Rather than funding a handful of extravagantly priced autonomous systems, why not buy up cheaper ones by the swarm? Drawing on several war games of a Chinese invasion of Taiwan, the RAND report stated that deploying huge numbers of low-cost aerial drones could significantly improve the odds of US victory. By providing a picture of every vessel in the Taiwan Strait, the hypothetical drones—which RAND dubbed “kittens”—might allow the US to quickly destroy an enemy’s fleet. (A Chinese military journal took note of this prediction at the time, discussing the potential of xiao mao, the Chinese phrase for “kitten,” in the Taiwan Strait.)
In early 2021, Stewart and a group of colleagues drew up a 40-page document called the Unmanned Campaign Framework. It outlined a scrappy, unconventional plan for the Navy’s use of autonomous systems, forgoing conventional procurement in favor of experimentation with cheap robotic platforms. The effort would involve a small, diverse team—specialists in AI and robotics, experts in naval strategy—that could work together to quickly implement ideas. “This is not just about unmanned systems,” Stewart says. “It is as much—if not more—an organizational story.”
The main threat on Stewart’s mind was China. “My goal is to come in with cheap or less expensive stuff very quickly—inside of five years—to send a deterrent message,” he says. But China is, naturally, making substantial investments in military autonomy too. A report out of Georgetown University in 2021 found that the People’s Liberation Army spends more than $1.6 billion on the technology each year—roughly on par with the US. The report also notes that autonomous vessels similar to those being used by Task Force 59 are a major focus of the Chinese navy. It has already developed a clone of the Sea Hunter, along with what is reportedly a large drone mothership.
Stewart hadn’t noticed much interest in his work, however, until Russia invaded Ukraine. “People are calling me up and saying, ‘You know that autonomous stuff you were talking about? OK, tell me more,’” he says. Like the sailors and officials I met in Bahrain, he wouldn’t comment specifically on the situation—not about the Sevastopol drone-boat attack; not about the $800 million aid package the US sent Ukraine last spring, which included an unspecified number of “unmanned coastal defense vessels”; not about Ukraine’s work to develop fully autonomous killer drones. All Stewart would say is this: “The timeline is definitely shifting.”
Hivemind is designed to fly the F-16 fighter jet, and it can beat most human pilots who take it on in the simulator.
I am in San Diego, California, a main port of the US Pacific Fleet, where defense startups grow like barnacles. Just in front of me, in a tall glass building surrounded by palm trees, is the headquarters of Shield AI. Stewart encouraged me to visit the company, which makes the V-BAT, an aerial drone that Task Force 59 is experimenting with in the Persian Gulf. Although strange in appearance—shaped like an upside-down T, with wings and a single propeller at the bottom—it’s an impressive piece of hardware, small and light enough for a two-person team to launch from virtually anywhere. But it’s the software inside the V-BAT, an AI pilot called Hivemind, that I have come to see.
I walk through the company’s bright-white offices, past engineers fiddling with bits of drone and lines of code, to a small conference room. There, on a large screen, I watch as three V-BATs embark on a simulated mission in the Californian desert. A wildfire is raging somewhere nearby, and their task is to find it. The aircraft launch vertically from the ground, then tilt forward and swoop off in different directions. After a few minutes, one of the drones pinpoints the blaze, then relays the information to its cohorts. They adjust their flight paths, moving closer to the fire to map its full extent.
The simulated V-BATs are not following direct human commands. Nor are they following commands encoded by humans in conventional software—the rigid If this, then that. Instead, the drones are autonomously sensing and navigating their environment, planning how to accomplish their mission, and working together in a swarm. Shield AI’s engineers have trained Hivemind in part with reinforcement learning, deploying it on thousands of simulated missions, gradually encouraging it to zero in on the most efficient means of completing its task. “These are systems that can think and make decisions,” says Brandon Tseng, a former Navy SEAL who cofounded the company.
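The core loop of reinforcement learning can be sketched in miniature: an agent tries moves, gets rewarded for finding a target, and over thousands of simulated runs learns to prefer efficient routes. The toy below uses tabular Q-learning on a tiny grid; it is purely illustrative, since Hivemind's actual training setup is not public.

```python
import random

# A toy agent learns to find a target ("the fire") on a 5x5 grid.
# Each move costs a little; reaching the target pays off. After many
# simulated episodes, the Q-table comes to favor short routes.

random.seed(0)

GRID = 5                                       # 5x5 world
TARGET = (4, 4)                                # location of the simulated fire
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1              # learning rate, discount, exploration

q = {}                                         # Q[(state, action)] -> estimated value

def step(state, action):
    """Move within the grid; reward reaching the target, penalize each move."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    reward = 1.0 if nxt == TARGET else -0.01
    return nxt, reward, nxt == TARGET

for _ in range(5000):                          # thousands of simulated missions
    state, done = (0, 0), False
    while not done:
        if random.random() < EPS:              # explore occasionally...
            action = random.choice(ACTIONS)
        else:                                  # ...otherwise exploit the best known move
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        q[(state, action)] = (1 - ALPHA) * q.get((state, action), 0.0) \
            + ALPHA * (reward + GAMMA * best_next)
        state = nxt

# Following the learned Q-table greedily from (0, 0) now traces a short
# route to the target.
```

Real systems swap the grid for a physics simulator and the table for a neural network, but the feedback loop is the same: act, observe reward, update, repeat.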
One thing is for sure: The technology is advancing quickly. When I met Tseng, he said Shield AI’s goal was to have “an operational team of three V-BATs in 2023, six V-BATs in 2024, and 12 V-BATs in 2025.” Eight months after we met, Shield AI launched a team of three V-BATs from an Air Force base to fly the simulated wildfire mission. The company also now boasts that Hivemind can be trained to undertake a range of missions—hunting for missile bases, engaging with enemy aircraft—and it will soon be able to operate even when communications are limited or cut off.
Before I leave San Diego, I take a tour of the USS Midway, an aircraft carrier that was originally commissioned at the end of World War II and is now permanently docked in the bay. For decades, the ship carried some of the world’s most advanced military technology, serving as a floating runway for hundreds of aircraft flying reconnaissance and bombing missions in conflicts from Vietnam to Iraq. At the center of the carrier, like a cavernous metal stomach, is the hangar deck. Doorways on one side lead into a rabbit warren of corridors and rooms, including cramped sailors’ quarters, comfy officers’ bedrooms, kitchens, sick bays, even a barbershop and a laundry—a reminder that 4,000 sailors and officers at a time used to call this ship home.
Standing here, I can sense how profound the shift to autonomy will be. It may be a long time before vessels without crews outnumber those with humans aboard, even longer than that before drone motherships rule the seas. But Task Force 59’s robot armada, fledgling as it is, marks a step into another world. Maybe it will be a safer world, one in which networks of autonomous drones, deployed around the globe, help humans keep conflict in check. Or maybe the skies will darken with attack swarms. Whichever future lies on the horizon, the robots are sailing that way.