Imagine you had all the designs and blueprints for a fifth-generation jet fighter.
Think you could build it?
Not a chance. Building a fighter requires a lot of technical knowledge that you almost certainly don’t have.
Imagine you had all the blueprints, unlimited resources, and a team of engineers to help you. Think you could build it now? I doubt it. Even if you could, it would take a long time to finish, and your fighter would probably be out of date by the time you did.
Military technology is really, really complicated. A modern platform might consist of hundreds of thousands of individual components, some of which are highly advanced and require a great deal of precision to interoperate properly. In one flight test of an early U.S. stealth fighter, the aircraft was visible on radar up to 50 miles away because it had three loose screws that extended an eighth of an inch above its surface.
It takes a tremendous amount of technical knowledge to master even a single subsystem on a military platform, much less an entire platform like a fighter. If you had a PhD and a career’s worth of experience in engineering, you’d probably only be competent in one very specific technology used in the aircraft, and you might not understand how it operates under a range of environmental conditions. Getting the system to completion requires a team of thousands of experts with highly specific knowledge across a massive number of disciplines.
Even then, unless your experts are the same ones who built the original fighter the blueprints are based on, they likely wouldn’t be able to parse the designs and produce the exact same type of aircraft as the one you’re imitating. Political scientists Andrea Gilli and Mauro Gilli (they’re also brothers) observe that, because military technology is so complex, a large share of the knowledge needed to produce advanced systems can no longer be captured in the written record:
[Tacit knowledge] entails knowledge derived mostly from experience and hence is retained by people and organizations: for this reason, it does not diffuse either easily or quickly. … To replicate a given weapon system, an imitator needs direct access to the innovator’s tacit knowledge—that is, access to the very people who worked on the system. Otherwise, it will struggle to figure out what each part does, the requirements it is intended to meet, how to produce it, and how it is connected to other components—in other words, its design, development, and production know-how.
That’s because the real world is a lot more complicated than a set of blueprints.
[T]oday designers, engineers, managers, and specialized workers face an infinite number of decisions, each entailing inherent trade-offs … Identifying the most appropriate choices and solutions relies heavily on experience, judgment calls, and educated guesses—all of which are, by definition, tacit.
Gilli and Gilli argue that this is why China has failed to imitate the most advanced U.S. military technology, despite having the most prolific spies and hackers in the world. China’s fifth-generation fighter, the J-20, is a far cry from the American F-22 on almost every metric. It’s more detectable and more vulnerable in air-to-air combat, and it has a less sophisticated engine and electronic systems. Its unit cost is only slightly lower than the F-22’s, and it took only slightly less time to go from project launch to first flight.
All this said, let’s return to our hypothetical. Imagine you’re the Chinese government and you want to use AI to help you imitate U.S. military technology. You already have the designs and blueprints for U.S. fighters, but your engineers can’t figure out the nitty-gritty details that you need to produce a functional aircraft.
OpenAI’s GPT models and other similar models won’t help you. They’re trained on codified knowledge, the type that can be written down—and that’s precisely the type of knowledge that won’t help you bring your aircraft to fruition.
OpenAI’s newest product—o1, also called Strawberry—is a lot better at reasoning, however. Given some more development, it should be able to break down and solve highly specific engineering problems without just giving an answer that merely sounds right based on word associations in its training data, or one that you can already find on the blueprints. (At least, this is the impression I get from what more AI-literate people are saying. I’m not a technical expert here!)
If that’s true, the implications for international order could be serious. OpenAI already acknowledges that o1 poses “medium risk” for proliferating chemical, biological, and nuclear weapons. The risks for proliferating advanced conventional weapons may be at least as high. Although only actors with a defense industrial base large enough to produce state-of-the-art platforms would pose this problem, it could still mean a massive diffusion of power away from the United States. Since the end of the Cold War, American hegemony has been upheld by U.S. dominance in power projection platforms. If other states gain the capability to copy advanced U.S. systems and produce them at scale, we could be looking at a lot more global instability.