Part 2 of our interview with Shashank Joshi, defense editor at The Economist, and Mike Horowitz, professor at Penn who served as US Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities under Biden. Here’s part 1.
In this installment, we discuss…
AI as a general-purpose technology with both direct and indirect impacts on national power,
How AGI might drive breakthroughs in military innovation,
The military applications of AI already unfolding in Ukraine, including drone capabilities and “precise mass” more broadly,
Whether AGI development increases the probability of a preemptive strike on the US.
Listen to the podcast on iTunes, Spotify, YouTube, or your favorite podcast app.
I’m hoping to expand on this show with an interview series exploring AI’s impact on national security. Too often today, debates center on “superweapons” lazily pattern-matched to the nuclear era or go in circles on cyber offense vs defense. The goal instead is to repeat the exercise Dario did for biotech in Machines of Loving Grace: deeply explore the bottlenecks and potential futures across domains like autonomy, decision-support, stealth, electronic warfare, robotics, and missile defense. Guests will be engineers and technologists who can also explore second-order operational and strategic impacts.
But this needs a sponsor in order to happen! If you work at an AI firm, defense tech company, VC, university, or think tank and want to help facilitate the best conversations about the future of warfare, please reach out to jordan@chinatalk.media.
Military AI and the Drone Revolution
Jordan Schneider: Let’s talk about the future of war. There is this fascinating tension that is playing out in the newly national security-curious community in Silicon Valley where corporate leaders like Dario Amodei and Alex Wang, both esteemed former ChinaTalk guests, talk about AGI as this Manhattan Project-type moment where war will never be the same after one nation achieves it. What’s your take on that, Mike?
Michael Horowitz: There’s a lot of uncertainty about how advances in frontier AI will shape national power in the future of war. I’ve been, historically, extremely bullish on AI from a national defense perspective. I remember when Paul Scharre and I were in a small conference room at CNAS with basically everybody in Washington D.C. who cared about this. Now it’s obviously attracting much more attention.
But I think the notion that AGI will inherently transform power in the future of war makes a couple of questionable assumptions. The first is that AGI is binary and immediately causes a jump in capabilities, which essentially means that you can then solve all sorts of problems you have been trying to solve that you couldn’t solve before.
That might be true. It also might be true that you have continuing growth in AI capabilities that may or may not constitute AGI. In that case, you never have one specific moment, yet you still have ever-increasing frontier AI capabilities that militaries can then potentially adopt.
The other assumption is that technical breakthroughs are the same thing as government adoption, which the history of military innovation suggests is incorrect.
I worry that US companies will lead the world in AI breakthroughs, but the US Government will lag in effective adoption due to legacy bureaucracy, budgeting systems, and the relationship between the executive and Congress.
Maybe the PRC will get there later, but adopt the upside faster.
Jordan Schneider: What is the AI eval that would convince Mike Horowitz that this is the next stealth bomber, or the next nuclear weapon?
Michael Horowitz: I just think that’s the wrong way to think about it. AI is a general-purpose technology, not a specific widget. If what we are looking for is the AI nuclear weapon or the AI stealth bomber, we’re missing the point.
When general-purpose technologies impact national power, they do so directly and indirectly. They do so directly through, “All right, we have electricity, now we can do X,” or, “We have the combustion engine, now we can do X.” They do so indirectly through the economic returns you get, which then fuel your ability to invest in the military and how advanced your economy is. AI is likely to have both of those characteristics.
The thing that’s not entirely clear yet is whether there’s essentially a linear relationship between how advanced your AI is and what the national security returns look like.
Jordan Schneider: Just to stay on defining terms, the direct applications we’re talking about, like the AI for science, are still in the very early phases. There’s a really fun book by Michael O’Hanlon called Technological Change and the Future of Warfare, that includes this cool table of all of the different vectors on which technology can get better over a 20-year horizon.
Show me some crazy material science breakthroughs that you can put in weapon systems, and I will be convinced that this stuff is really real and going to matter on a near-term horizon on the battlefield.
But the Dario Amodei framework strikes me as really not grappling with the challenges on both the scientific side and the adoption side. Maybe Shashank, before we get to adoption, anything on what this could potentially unlock that we’d want to cover?
Shashank Joshi: We could split the applications of that general-purpose technology up a million different ways. The way I have tended to do it in my head is thinking about insight, autonomy, and decision support.
Insight is the intelligence application. Can you churn your way through satellite images? Can you use AI to spot all the Russian tanks?
Autonomy is, can you navigate from A to B? Can this platform do something itself with less or no human supervision or intervention? The paradigmatic case today, which is highly impactful, is terminal guidance using AI object recognition to circumvent electronic warfare in Ukraine.
The third interesting thing is decision support. This includes things that nobody really understands in the normal world, like command and control. It’s the ability of AI to organize, coordinate, and synchronize the business of warfare, whether that’s a kind of sensor-to-shooter network at the tactical level for a company or a battalion, or a full theater-scale system of the kind that European Command and the 18th Airborne Corps have been assisting Ukraine with for the last three years.
This involves looking across the battlescape, fusing Russian phone records, overhead imagery, radio-frequency data, infrared satellite returns, synthetic aperture radar images, and all kinds of other things into a coherent picture that’s then used to help commanders act more quickly and effectively than the other side. That’s difficult to define. But if we’re talking about transformative applications, that is really where we need to be looking carefully.
Michael Horowitz: I agree. General Donahue is a visionary when it comes to what AI applications can look like for the military today, and in trying to at least experimentally integrate more cutting-edge capabilities.
To the Dario point though, it’s all a question of timeframe and use cases. If you imagine AGI as instantly having access to 10,000 Einsteins who don’t get tired, then that’s going to lead to lots of breakthroughs that will generate specific use cases.
These could lead to new material science breakthroughs that decrease radar cross-sections, new advances in batteries that finally mean the dream of directed energy becomes more of a reality, or advances in sensing in the oceans that create new ways of countering submarines. It could lead to all sorts of different kinds of things. The challenge is it’s difficult in some ways, ex-ante, to know exactly which you’ll get when.
Jordan Schneider: Perhaps Dario would say that your framework, Shashank, is weak sauce and he’s talking about an entirely new paradigm.
Which of these applications are currently being developed in Ukraine?
Shashank Joshi: The autonomy piece is super interesting because of the pace of change. To sum this up, when I was talking to Ukrainian first-person view (FPV) strike operators, they were saying that if you are a member of an elite unit with loads of training, you can get a hit rate of up to 70-75%. But if you’re an average strike pilot, this is not easy. Sticking those goggles on, navigating this thing — you don’t know when the jamming is going to kick in and cut the signal. You have to get it just right, and you’re getting like 15-20% hit rates maybe.
What I am seeing with the companies and entities building these AI-guided systems for the final 100 meters, 200 meters, 300 meters, and increasingly up to 2 kilometers in some cases, is that the engagement range is going up. You can home in on the target beyond the range of any plausible local jamming device. That’s a huge deal. More importantly, the hit rate you’re getting is 80% plus. That’s phenomenal. That changes the cost per kill, and with it the economics of attrition.
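To make that cost-per-kill point concrete, here is a back-of-the-envelope sketch. The hit rates are the ones Joshi cites; the $500 unit cost is a hypothetical round number for an FPV-class drone, not a figure from the conversation.

```python
# Expected cost per kill = unit cost / hit rate.
# Hit rates are from the conversation; the $500 unit cost is hypothetical.
unit_cost = 500  # USD per FPV-class drone (illustrative)

for label, hit_rate in [
    ("average pilot", 0.18),
    ("elite pilot", 0.73),
    ("AI terminal guidance", 0.80),
]:
    print(f"{label}: ~${unit_cost / hit_rate:,.0f} per kill")

# average pilot: ~$2,778 per kill
# elite pilot: ~$685 per kill
# AI terminal guidance: ~$625 per kill
```

The absolute numbers matter less than the ratio: a roughly fourfold jump in hit rate cuts the expected cost per kill by roughly a factor of four, before counting the training-time savings Joshi describes next.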
There are all these interesting ripple effects. You can achieve this with like 30 minutes of training. Think about what that unlocks for a force, particularly sitting here in Europe, where we have these shrunken armies with no reserves, given the manpower requirements and the training times needed to bring new people in when you take attrition in the first round of a war.
This little tactical innovation — terminal guidance, AI-enabled — looks very narrow, but it has all these super interesting and consequential ripple effects on the economics of attrition, the cost per kill, lethality levels, the effectiveness of jamming, and on manpower and labor requirements. That’s why it’s so important to get into the weeds and look at these changes.
Michael Horowitz: We’re seeing at scale something that we thought might happen but had always been theoretical rather than real. The argument for why you would need autonomy to overcome electronic warfare has been obvious for decades. The questions were whether the technology was there and whether we wanted to do it this way, and there were different kinds of approaches.
What we’ve seen is that when you are fighting conventional war at scale, if you want to increase your hit rate and overcome jamming when facing electronic warfare, you can update software to try to counter the jamming. You can try to harden against jamming, and although it increases costs, you can use different concepts of operation to try to get around it, to sort of fool the local jammers.
But to the extent that autonomy becomes a hack that lets you train and operate systems more effectively in much less time — that’s a game changer, and it’s not one that we should expect to be confined there. Imagine all of the Shaheds that Russia fires at Ukraine with similar autonomous terminal guidance out to a couple of kilometers. Imagine all sorts of weapon systems with those kinds of capabilities. We’re seeing this at scale in Ukraine in a way we had previously only imagined.
To be clear, it’s not just in the air. Let me now just give my 15-second rant on the term “drone.” We are currently using the term “drone” to refer to a combination of cruise missiles, loitering munitions, ISR platforms, and uncrewed aerial systems that themselves launch munitions. All of those are getting called drones right now, even though we actually have correct names for them. It might be helpful if we used those names since we’re talking about different capabilities, essentially. But yeah, plus one to everything.
Shashank Joshi: Mike, do you want me to start using terms like UAS, UUVs? I’m going to get sacked if I start using all those acronyms.
I’d like to ask you about something since you’ve raised the issue of different domains. One of the questions I often get asked by readers is: where are all the drone swarms? Where are the swarms that we were promised? Maybe this is kind of an ungrateful question, because we bank a bit of technology, become desensitized to it, and then forget everything else.
What interests me is this. When writing about the undersea domain and submarine hunting a while ago, I was struck by how difficult it is — this is obvious physics — to communicate and send radio-frequency signals underwater. Radio waves barely penetrate water, if at all. Acoustic modems and things like that are very clunky. So the technologies we have relied on in the air domain for things like control signals, navigational signals, and oversight operate very differently in places where signals don’t travel as far — beyond the curvature of the earth, or in the water. Do you think that uncrewed technology and autonomy operate in some fundamentally different way in those domains, or will they be less capable there?
Michael Horowitz: That is a great question. Let me give you a broad answer and then the answer to the specific undersea question.
We’ve essentially entered the era of precise mass in war, where advances in AI and autonomy and advances in manufacturing and the diffusion of the basics of precision guidance mean that everybody essentially can now do precision and do it at lower cost. This applies in every domain — it applies in space, in air for surveillance, in air for strike, to ground vehicles, and can apply underwater and on the surface now. The specific way that it plays out will depend on the specifics of the domain and on what is most militarily useful.
If the question is “Where are the swarms we were promised?” and what we end up with is a world where one person is overseeing maybe 50 strike weapons that are autonomously piloting the last two kilometers toward a target, there may actually be military reasons why we don’t want them to communicate with each other. If they communicated with each other, that would be a signal that could be hacked or jammed, which gets you back into the EW issue you’re trying to avoid. There’s an interaction in some ways between the “swarms we were promised” and some of the ways you might want to use autonomy to get around the electronic warfare challenge.
This points to the huge importance of cybersecurity in delivering essentially any of this. Part of the issue is that swarms potentially are vulnerable to some of the same issues that face FPV drones and other kinds of systems in different kinds of strike situations. It wouldn’t be surprising to me if you then see a move more towards precise mass in the context of autonomy without swarming in a world where you think that you’re getting jammed.
Now underwater, you absolutely have the physics issue that you’ve pointed out — communication underwater is just more difficult. To the extent that something like swarming requires real-time coordination, that becomes even more difficult the further away things are from each other. It wouldn’t be surprising then that the underwater domain would be challenging here.
To take it back to the AGI conversation we were having, two notes are relevant here. One, the “where were the swarms we were promised” question reminds me that how we define artificial general intelligence often is a moving target. We’re constantly shifting the goalposts because once AI can do things, we call it programming. Artificial general intelligence is always the thing that’s over the horizon.
Going back to my belief that there might not be one AGI moment, it reflects the way that the definition of AI has tended to be a moving target. But specifically, if you’re in the “AI will transform everything” paradigm, one of the transformative things you would probably try to use AGI for would be solving some of these communication issues in the undersea domain that can potentially limit the utility of uncrewed systems at scale undersea. That’s an example of a science problem that major advances in AI might help you address once you have paradigm-shifting AI.
Debating China War Scenarios
Jordan Schneider: I want to talk about two odd theories for why a US-China-Taiwan war could kick off. One is China’s dependence on TSMC, and the other is this idea that if one side is close to AGI, then the other would do a preemptive strike to stop their adversary. What do you think about these scenarios?
Michael Horowitz: Those are great questions. A lot of what we know from the theory and reality of the history of international relations and military conflicts suggests that war in either case would be pretty unlikely.
Let me start with the Taiwan scenario. I am extremely nervous, to be clear, about the prospect of a potential conflict between China and Taiwan. There is real risk there. But the notion of China essentially starting a war with Taiwan over TSMC would be kind of without precedent. Put another way, there are lots of paths through which we could end up with a war between China and Taiwan. The one that keeps me up at night is not an attack on TSMC.
Jordan Schneider: Ben Thompson just wrote a piece that defines China’s reliance on Taiwanese fabs as an important independent variable in Beijing’s calculation of whether or not to invade. How do you think about that line of argument in a broader historical context?
Michael Horowitz: The best version of the argument, if you wanted to make it, is probably that if China views itself as economically dependent on Taiwan, it would then seek to figure out ways to get access to the technology that it needs from Taiwan. You could imagine that effort happening in a couple of different ways.
One is to mimic what TSMC does, which obviously they’re already attempting to do. Another would be attempting to coerce Taiwan to get better access to TSMC. But starting a war with Taiwan where tens of thousands, if not hundreds of thousands of people are likely to die, and which could trigger a general war with the United States and other countries in the Indo-Pacific over the fabs — I think that’s relatively unlikely since China would have lots of other ways to try to potentially get access to chips that they need.
Jordan Schneider: You keep coming back to the straw man of going into TSMC to take the chips. But there’s another line of argument: as long as China still needs TSMC, can buy 5-nanometer or 3-nanometer NVIDIA chips, and needs the output from those fabs to run its economy in a normal, modern way, that would drive down the likelihood of China wanting to start a conflict.
Michael Horowitz: There’s certainly an argument in that direction, which is to say that economic dependence could generate incentives to not start a conflict as well. My belief tends to be that the probability of war between China and Taiwan will be driven by broader sociopolitical factors.
Jordan Schneider: I would also agree. Putting on my analyst hat for a second, I can think of several factors that are orders of magnitude more important to China than TSMC in deciding whether to invade — domestic Chinese political dynamics, domestic Taiwanese politics, the perception of America’s willingness or Japan’s willingness to fight for Taiwan.
How about the preemptive strike over AGI?
Michael Horowitz: The argument is insufficiently specified, and I will say my views on this could change. For a situation in which, say, China would attack the United States because it feared the United States was about to reach AGI, that presumes three things.
It presumes that AGI essentially is a finish line in a race — that it’s binary and once you get there, there’s a step change in capabilities.
It presumes that there’s no advantage to being second place, and that step change in capabilities would immediately negate everything that everybody else has.
It assumes that advances are transparent, such that the attacker would know both what to hit and when to hit it to have maximum impact.
There are a lot of reasons to believe that all of those assumptions are potentially incorrect.
It’s not clear, despite enormous advances in AI that are transforming our society and will continue to, that there will be one magical moment where we have artificial general intelligence. Frankly, the history of AI suggests that 15 years from now we could still be arguing about it, because we tend to move the goalposts of what counts as AI. Anything we have definitively figured out how to do, we tend to call programming and say that it’s in support of humans.
It’s also not clear that these advances would be transparent and that countries would have timely intelligence. You need to be not just really confident, but almost absolutely certain that if somebody got to AGI first, you’re just done, that you can’t be a fast follower, and probably that it negates your nuclear deterrent.
If you believe that AGI is binary, that if you get it, it negates everything else, and that there’ll be perfect transparency — in that case, maybe there would be some incentive for a strike. Except that military history suggests those conditions are super unlikely.
What we’re talking about in this context is a bolt from the blue — not a scenario where the US and China are in crisis, on the verge of escalation, and then there’s some kind of strike against facilities. We’re talking about literally being in steady state when somebody starts a war. That’s essentially unprecedented from a military history perspective.
Leaders tend to want to find other ways around these kinds of situations. If you even doubted a little bit that AGI would completely negate everything you have, then you might want to wait and see if you can catch up rather than start a war — and start a war with a nuclear-armed power with second-strike capabilities, no less. It’s so dangerous.
Jordan Schneider: I am sold by arguments one and three. If the story of DeepSeek tells you anything, it’s not even fast following like three years with the hydrogen bomb in the Cold War; it’s fast following like three months with a model you can distill.
If it’s not a zero-to-one thing, then maybe the more relevant data point is Iran and Israel in the 2000s and 2010s. You don’t literally have missiles being fired and airstrikes, but you have this increasingly nasty world of targeted assassinations and Stuxnet-like hacking of facilities.
What is now a happy-go-lucky world in San Francisco could become a lot darker and messier. Mike, what could trigger that potential timeline?
Michael Horowitz: Let me be more dire and ask, what’s the difference between that and the status quo?
Obviously there’s a difference if we’re talking about assassination attempts and those kinds of things. But every AI company around the world, including PRC AI companies, is probably under cyber siege on a daily basis from varieties of malicious actors, some of them potentially backed by states trying to steal their various secrets.
To me, this falls into a couple of categories. One is cyber attacks to steal things — hacking essentially for the purposes of theft. A second would be cyber attacks for the purposes of sabotage, like a Stuxnet-like situation. A third would be external to a network, but physical actions short of war — espionage-ish activities to disrupt a development community.
On the cyber attack aspect, there is a tendency sometimes to overestimate the extent to which there are magic cyber weapons that let you instantly intrude on whatever network you want. Are there zero days? Yes. Are cyber capabilities real? Yes. Many governments, including the United States government, have talked about that. But I’m not sure it’s as easy as saying “break into this network” (one that is, to be clear, pretty hardened against attack) and just flipping a switch: “oh, today we’re going to launch our cyber attacks.”
There are effectiveness questions about some of those things. But also those networks are constantly being tested.
Stuxnet was really hard to achieve. It is arguably the most successful cyber-to-kinetic attack in known history, and an enormous operational success for Israel against Iran.
Jordan Schneider: But the difference between Stuxnet and what we’re talking about is that a data center in Virginia or Austin is much more connected to the world. They hire janitors. It’s not sitting in a bunker somewhere.
Michael Horowitz: Those are more accessible, but there are also more data centers. Targeting any one data center in particular is not likely to grind all AI efforts to a halt.
Frankly, if there is one AI data center that is widely regarded as doing work that will be decisive for the future of global power, that’s going to be locked down. The company will have incentives to lock it down, just like defense primes have incentives to lock their systems down, even if we’re not talking about defense companies. Companies like Microsoft and Google have incentives to lock down non-AI capabilities as well.
My point isn’t that there won’t be attempts, or even that some of those attempts won’t succeed, but there’s sometimes a tendency to exaggerate the ease of attack and its structural impact. In a world where we’re talking about hitting a very accessible facility in Virginia, there are probably similarly accessible facilities in other places that can also potentially do the job.
Now the toughest scenario is the espionage one where you’re talking about essentially covert operations targeting companies. It wouldn’t surprise me if some of those companies are intelligence targets for foreign governments. The challenge analytically is that these arguments quickly enter the realm of non-falsifiability.
If I tell you that I think this kind of espionage or that kind of espionage wouldn’t be that likely, you could say, “Well what about this?” We’re not going to be able to resolve it with facts. Non-falsifiable threat arguments make me nervous analytically. Maybe this is the academic in me that makes me want to push back a little bit because I feel like if an argument is legitimate, we should be able to specify it in a clearly falsifiable way.
Jordan Schneider: Like what? Give me the good straw man of that.
Michael Horowitz: The best straw man argument would be if you could basically demonstrate that the PRC is not really targeting varieties of AI companies for collection, and that it would be relatively easy for them to do so. That would raise the question of why they’re not doing that now. Then we get back to the question of the point at which they would start those kinds of activities.
They would need to have enough information that they believe some company or set of companies is getting close to AGI, but not enough that they would have done something previously — assuming they’ve got good capabilities on the shelf that they could pull off the shelf if they had to.
The non-falsifiable part is about to what extent they could ramp up attacks, to what extent there would be defenses against those attacks, and to what extent those non-kinetic strikes would actually meaningfully delay the development of a technology. Another way of saying this — my prior is that there’s lots of espionage happening all the time. I want to see more specificity in this argument about what exactly folks mean when they talk about escalation.
Jordan Schneider: One of the things that has been remarkable about China, at least in how it deals with foreigners, is that you haven’t seen what Russia did with all these targeted assassinations. The sharpest it’s gotten, at least in dealing with white people, has been the handful of Canadians who were grabbed and ultimately let go after a few years in captivity following the Meng Wanzhou arrest.
People are very focused on China starting World War III out of the blue. But there are also possible worlds in which China becomes much more unpleasant without necessarily kicking off World War III.
Michael Horowitz: I’m a definitive skeptic on the “China starts World War III over AGI” point. I buy the notion that China could become more unpleasant as we approach some sort of AGI scenario — including non-kinetic activities, espionage, etc. I just tend not to view those as being as decisive as some others do.
You’re right that they certainly could become a lot more unpleasant. If the question is why they haven’t already, the answer is probably twofold. One, there’s an attribution question — suppose Chinese espionage involved doing physical harm to AI researchers or something similar. If they were caught doing that, they’ve now potentially started a war with the United States, and they’re back to the reason why they wouldn’t launch a military strike in the first place.
If it’s non-attributable, then the question is exactly how much they would be able to do. I wonder whether there is something about their broader economic ties with the United States that makes some of the worst kinds of these activities less likely, in a way that has never similarly constrained Russia.
Jordan Schneider: This is a decent transition to precise mass in the China-Taiwan context. What can and can’t we infer from military technological innovation in Ukraine about what a war there would look like over the next few years?
Michael Horowitz: It’s not necessarily the specific technologies, but it is the vibes. By that I mean the advances in AI and autonomy, the advances in manufacturing, the push for mass on the battlefield that we already see in publicly available documents and reporting on how the PRC is thinking about Taiwan. We see it already in the US in the context of Admiral Paparo, INDOPACOM, and the Hellscape concept, or in something like the Replicator initiative in the Biden DoD — and full disclosure, I helped drive that, so I certainly have my biases.
We see that if you look at some of the systems that Taiwan has been acquiring over the last couple of years. You essentially have a growing recognition that more autonomous mass, or what I’d call precise mass, will be helpful in the Indo-Pacific. It’s unlikely to be the exact same systems that are on the battlefield in Ukraine, but variants of those scoped to the vastness of the Indo-Pacific.
Shashank Joshi: I have a few thoughts on this. One way to think about what Mike is saying is that for any given capability, you can get more intelligence, defined however you like, whether in terms of autonomy or capacity to do the task at the edge, at a lower price point.
That capability could be a short 15-kilometer range small warhead strike system in the anti-infantry role. It could be a 100-kilometer system to take out armor with bigger warheads, or it could be significantly longer range systems that have to be able to defeat complex defensive threats. Obviously, the third of those things is always going to be more expensive than the first.
What that revolution in precise mass does (if it is a revolution, which we can debate) is push costs down. The capability per dollar is going up and up. That is the essential point.
Michael Horowitz: Just to be clear, that’s the reverse of what we saw for 40 years, when in the context of the precision revolution you were paying more and more for each capability. Now we are seeing the inverse of that in the era of precise mass.
Shashank Joshi: Is the transformative effect comparable across each level of sophistication, capability, or range? Are there things specific to FPV-type systems because they rely on consumer electronics, consumer airframes, and quadcopters — because they can draw on an industrial base that already exists for commercial drones? Is it easier to have that capability revolution for intelligent precise mass at one end of the spectrum than for a jet-powered system that has to travel significantly further, defeat defensive mechanisms, and maybe carry IR thermal imaging? Is the revolution comparable at each end?
Michael Horowitz: I was with you up until the end about defeating all of those systems, because the thing that’s so challenging about this for a military like the United States is that it’s a different way of thinking about fighting. You’re talking about firing salvos, firing en masse, as opposed to “we’re going to fire one thing and it’s going to evade all the adversary air defenses and hit the target.”
Look at Iran’s Shahed 136. That’s a system that can go, depending on the variant, a thousand-plus kilometers. It can carry a reasonably-sized warhead that, in theory, could have greater or lesser levels of autonomy depending on the brain essentially that you plug into it. That’s not going to be as sophisticated a system as an advanced American cruise missile that costs $3 million or something. But it doesn’t have to be because the idea is that these are complements where you’re firing en masse to attrit enemy air defenses. Your more sophisticated weapon then has an easier path to get through. It’s just a different way of thinking about operating, and that creates all sorts of challenges beyond just developing the system or buying the system.

Shashank Joshi: That’s a really interesting point. This gets us to a phrase, Mike, that is very popular in our world and that you and I have talked about: the mix of forces you have, and specifically the concept of a high-low mix. It’s not that small drones will replace everything. In a high-low mix, you have some very expensive, high-end capabilities, albeit fewer of them, that can perform extremely exquisite, difficult tasks or operate at exceptionally long ranges. Then you have far more numerous lower-end systems that are cheaper and not as capable — they can’t do the things a Storm Shadow cruise missile or an ATACMS missile can do, but they can operate at a scale the Storm Shadow or ATACMS can’t.
The most difficult concept when I’m writing about this for ordinary people who are maybe not into defense is explaining the mix, the interaction of those two ends of the spectrum. Here’s the difficult bit — pinning down what is the right mix. Do we know it yet? Will we know it? How will we know? Does it differ for countries? That’s where I’m struggling to understand all of this.
Michael Horowitz: It probably differs by country, and even within a country it differs by contingency. For example, if you’re back in a forever-war kind of situation and you’re the United States, you might want a different mix of forces than if you’re very focused on the Indo-Pacific and on China in particular. What comprises your high-low mix probably changes.
The way that I’ve tended to think about this is you have essentially trucks, which are the things that get you there. Then you have brains, which is the software that we’re plugging in. And then you have either the sensor or the weapon or the payload piece. In some ways, what I think we’re learning from Ukraine that is applicable in the Taiwan context is that sometimes it matters a lot less what the truck is than what the brain is.
Shashank Joshi: The other thing that I struggle to get across is the relationship between the “precise” bit and the “mass” bit, particularly the role of legacy capabilities in this. The great conflict I see today is in the artillery domain. Do strike drones replace or supplant artillery? Strike drones, not artillery, now inflict the majority of casualties in Ukraine, the reverse of the early phase of the conflict.
There is this really interesting line in the British Army Review published in 2023 — “There is a danger that the enemy will be able to generate more combat echelons than we have sensors or high-end long-range weaponry to service.”
You can have these remarkable AI kill chains that can spot soldiers moving and feed that data back to your weapons — but if you don’t have the firepower to prosecute those targets and then keep prosecuting them week after week in a protracted conflict, you don’t have deterrence.
That is what we are waking up to now.
Michael Horowitz: The important thing here is that the notion that you didn’t need a high-low mix, that you could just go high, presumes short wars where you can use your high-end assets to shock and awe the adversary into submission. Whether we’re talking about a forever war or the Indo-Pacific, if you’re fighting in a world of protraction, then you need much deeper magazines in all ways, frankly including in your platforms.
Maybe in that world AI is actually helping you with what you’re manufacturing and how you’re manufacturing and can deliver a bunch of other benefits on the battlefield. A big challenge here is that I don’t think these capabilities necessarily mean there’s no role for traditional artillery. Although if they can do the same job better at lower cost, then they will eventually displace those capabilities, or militaries that don’t adopt them will fall behind. We’re more at a complement than a substitute stage right now for those capabilities. But things could change.
The challenge right now for a military like the US is that you have all these legacy capabilities, and maybe you wish to invest less in them to be able to invest more in precise mass capabilities, which is something I advocate. But then the question becomes what you do with those legacy capabilities, and across what timeframes.
This does tell you some things that are really important from a force planning perspective. Take the “one ring to rule them all” approach to air combat that led to the F-35, where you’re going to have one fighter that will operate forever and presumably be useful for every single military contingency. It turns out that means it’s optimized for none of them, and that generates a bunch of risk. One of the things the new administration is probably going to be doing is figuring out how to address those risks. I don’t want to hate on the F-35 and its stubby little wings. Sorry, I’ll stop now.
Shashank Joshi: I’m really glad you raised the F-35, because this gets to another line of my thinking. You said, “If it can do the same job,” right? What are the jobs it can and can’t do?
When you look at a simple mission, take the anti-tank guided missile: it’s looking pretty clear to me that a mid-range, low-cost, one-way attack system can do the job of an ATGM very effectively at significantly lower cost. Therefore, unless ATGMs also get transformed, you’re going to see a supplanting of roles.
However, I’m also acutely conscious that we are thinking about some battlefields that are in some ways uncluttered. We’re looking at a stretch of the Donbas in which anything that moves is going to be the target that you want to hit — it’s going to be a Russian military vehicle. You’re not going to accidentally hit a school bus full of children. In the Taiwan Strait, you are using your object recognition algorithms to target shipping. You’re not accidentally going to hit something in the context of an amphibious invasion of Taiwan.
Michael Horowitz: Yeah, if the balloon goes up, it’s not like there’s lots of commercial fishing just chilling in the Strait.
Shashank Joshi: Well, and if it is, it’s probably MSS operatives. You’re fine. The same applies in the air power situation, right? You may be doing stuff like this there too. However, urban warfare is not going away. I’m imagining fights over Tallinn, over Taipei, over dense, cluttered, complex, multi-layered subterranean environments like Gaza, like Beirut, places like that. I worry a lot more about the timeline over which autonomy will suffice there.
To end on one last point, let me spin off to the RAF. The Royal Air Force believes that an autonomous fighter aircraft will not be viable before 2040. Now maybe that’s ultra-conservative, but it’s based on assumptions about the tasks they think such an aircraft will need to do. I know you have your debate over NGAD and long-run capabilities. So are there limits to this process?
Michael Horowitz: The limits are how good the technology is. Frankly, I’ve argued for years in the context of autonomous weapon systems that the last place you would ever see them is urban warfare. Setting aside whatever ethical and moral issues might surround that, it’s just way harder than figuring out whether something is an adversary ship or an adversary plane or an adversary tanker in a relatively uncluttered battlefield.
Shashank Joshi: The second part was to do with the air picture, right? Countries are now having to decide what their air power is going to look like in 2040 and how much they can rely on the technology being good enough by then. You’re right, that is the question — will it be good enough? But you have to make the bet now. You have to make the judgment now because of the timelines of building these things.
Michael Horowitz: There are two questions there. One is, how good is the AI technology? Frankly, even if Dario is overestimating how quickly we get to something universally recognized as artificial general intelligence, there’s real uncertainty here — and just to be clear, dude’s way smarter than me, and I’m not saying he’s necessarily wrong. He’s super smart and thoughtful.
Side note, the fact that CEOs of today’s leading companies post their thoughts on the internet and come on shows like this is super useful. Now that I’ve taken off my Defense Department hat and I’m back in academia, it’s excellent for understanding how a lot of the people designing cutting-edge technology are thinking about it and interacting with government policy and militaries.
If you’re talking about what the airframe of a next-gen aircraft would look like or what a collaborative combat aircraft could do, my bet is that militaries are underestimating how quickly AI will advance and what it will enable. What they might be accurately assessing is, given the way the process of designing new capabilities works today and given today’s manufacturing capabilities, how long it would take to actually design and roll out a new system.
From a parochial American perspective, shortening that timeline is way more ambitious than what we were attempting to do with Replicator. I expect the new team actually will continue to push forward a lot of those things, whether they call it Replicator or not. You could have a scenario in which your AI is at a level that you believe you could have more autonomous operations of a collaborative combat aircraft, and you have some advances in manufacturing that mean you could produce maybe more of them at a slightly lower price point, but still be unable to do so before 2040 for various bureaucratic and budgetary reasons.
Jordan Schneider: Looking back over the past four years, where are the places that the Biden administration made progress in defense innovation?
Michael Horowitz: We accomplished a lot in getting the ball moving, specifically toward greater investments in lots of important next-generation capabilities like collaborative combat aircraft and varieties of precise mass through the Replicator initiative. But there is a lot more to be done.
It was a journey in two ways. One was bringing everybody along on that defense innovation journey to get to the point where folks bought into the importance of some of these emerging capabilities for the future of warfare, specifically in the Indo-Pacific. Then there’s what we could accomplish before the clock ran out.
To start with the first piece, taking everybody on the journey — nothing happens in the Pentagon without getting lots of people on board. It took a little while for there to be consensus on the state of the defense industrial base. If you look at varieties of think tank reports about what DoD should do differently, there are often all these suggestions like, “DoD needs to scale more of this system, scale more of that system, build more of that.” All of that is great, except that even if you fully fund really exquisite munitions like the Long Range Anti-Ship Missile or JASSM, you’re going to be really capacity constrained because those facilities just have limits. You can change those limits, but you can’t change them quickly.
What can you then do to scale capability relevant for the Indo-Pacific in the short term? The answer is precise mass capabilities — more attritable systems, more AI-enabled and autonomous systems, systems that maybe sometimes, but not always, are built by non-traditional companies. There’s a coalition of the willing that pushed through all of those in a bunch of different contexts, including Replicator, which DIU I think did a brilliant job of leading implementation of, and that generated some growing momentum.
But it also highlighted some real limits. Reprogramming 0.05% of the defense budget to fund multiple thousands of attritable autonomous systems for the Indo-Pacific under the first bet of the Replicator initiative required over 40 briefings to Congress, including a ton by the Deputy Secretary of Defense who’s really busy. Congressional oversight is really important, but the degree of effort required to reprogram essentially less than a billion dollars demonstrates a budget system unable to operate at the speed and scale necessary given the rate of technological change and given the threat that the US is under from Chinese military advances in the Indo-Pacific.
We did some good things. We moved the ball forward. I’m proud of some of the things that we did, but there’s a lot more work to be done.
Jordan Schneider: Ray Dalio recently said that “America can’t produce things, any manufactured goods, cost-effectively.” Shashank, if Ukraine can figure out how to build a million and a half drones a year, couldn’t the US do this too if we were more determined?
Shashank Joshi: Let’s unpack this a little bit. First of all, Ukraine is doing this under wartime conditions. It’s ripping up rules — not all of them, but a lot of them. It doesn’t have to worry about pesky things like health and safety standards, like “Can I build this explosive warhead facility next to this town?” You try doing that in the US or the UK or in Europe. The Ukrainians can get around that because they’re at war, it’s fine.
Secondly, what are they building these things with? The Ukrainian supply chain for UAVs and for lots of other things, including electronic warfare systems, is still full of Chinese stuff. They haven’t got it all out. Yes, there are companies beginning to find alternative supply chains, sourcing from Taiwan and other countries, but the US government can’t do that. There’s the pesky little matter of the NDAA, which prohibits you from just sticking Chinese components into all of your drones. So you have to get supply chains right. That’s problem number two.
Problem number three is to what standard you are building these things. Are you demanding that they can cope with a given level of electronic warfare, radiation hardening, and cybersecurity standards? Those are the requirements being placed on UAS by many Western governments. I won’t speak directly to the US, because Mike knows it better than I do, but certainly in my country I’m aware of this. The Ukrainians can build things to a satisfactory standard that would never stand up in the same way to the scrutiny of an accounting or auditing official in a Western country.
The final point is that the level of mobilization of the defense industrial base is quite different. Ukraine has nationalized its IT sector. It had a brilliant IT sector before the war, with great technicians and tech-minded, software-minded people. That is just not the case in most of our countries. We can’t nationalize our tech industries to build software-defined weapons at low cost, effectively and quickly. Those are some of the reasons I can see for the discrepancy. Mike, do you disagree?
Michael Horowitz: I think that is all correct. I’m probably more optimistic than you about the ability to build at low cost, although it depends on what exactly you define as low cost. If we think of attritable as meaning replaceable kinds of systems, then what’s attritable probably varies depending on how wealthy you are and how big your defense budget is. What’s attritable for the United States might be different than what is attritable for Ukraine.
There’s a lot of possibility to get lower cost systems. One example: last year there was an Air Force DIU solicitation for a low-cost, long-range cruise missile that would be about $150,000 to $300,000 a pop. That’s way more expensive than an FPV drone in Ukraine, but it’s also a lot more capability than an FPV drone in Ukraine. Given that some existing systems cost a million dollars or more, if you can deliver that at a fraction of the cost, you’re buying back a lot of mass in a useful way.
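The mass-buyback arithmetic here is simple division. A minimal sketch, using the unit prices Horowitz mentions and a purely hypothetical procurement budget:

```python
# How many rounds a fixed budget buys at different unit costs.
# Unit prices are the rough figures quoted above; the budget is illustrative.
budget = 10_000_000  # USD (hypothetical)

for label, unit_cost in [
    ("exquisite cruise missile", 1_000_000),
    ("low-cost cruise missile, high estimate", 300_000),
    ("low-cost cruise missile, low estimate", 150_000),
]:
    print(f"{label}: {budget // unit_cost} rounds")

# exquisite cruise missile: 10 rounds
# low-cost cruise missile, high estimate: 33 rounds
# low-cost cruise missile, low estimate: 66 rounds
```

Ten exquisite rounds versus 33 to 66 cheaper ones from the same budget is the “buying back mass” point, with the caveat that these systems are complements rather than one-for-one substitutes, as discussed above.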
The thing that I will be really interested to see, shifting actually to Europe for a second — if you look at what Task Force KINDRED has done for Ukraine, my question is, when is the UK going to buy those capabilities for the UK? If these are militarily useful capabilities for fighting Russia, that seems like something that might be useful for the United Kingdom’s military.
Or say the German company Helsing — they produced 6,000 drones that were going to get sold to Ukraine. Obviously that’s in the context of millions being produced, but presumably those 6,000 are pretty good from a quality perspective. Why isn’t Germany buying those drones?
Shashank Joshi: Helsing is a really good example. In Ukraine, distributed across Ukrainian manufacturing facilities, they are building UAVs where the hardware is not completely standardized. The software has to adapt to the different airframes being built by different producers. But they’re good enough, they’re doing a good job, they’re making a difference, producing Lancet-like capability at considerably lower cost, as far as I understand it.
But Helsing is also building drones for NATO countries in its own factories in southern Germany. The advantage is that it controls the supply chains. It can standardize the process and build software-defined weapons where the hardware is optimized to take that on — in the way that Tesla once put its code in cars built by others but, after that initial phase, built its own hardware, because it’s easier to build a software-defined car if you do it yourself. But it’s going to be more expensive, and it’s not going to be as cheap and quick and easy as the Ukrainian manufacturing. There are trade-offs that you see even within the same company.
Michael Horowitz: Absolutely. That also is why you need more open architecture all around and why in some ways the government needs to own more of the IP. If you think about what I said before about trucks and brains — if you’re buying a truck that only can have a brain from the same company, then you’re locked into a manufacturing relationship that’s almost necessarily going to generate higher costs over time than if you can swap out the brain over time with something that might be more advanced. Frankly, whether it’s lower cost or not, it’s probably a better idea to be able to swap it out.
This reflects differences in not just the defense industrial base, but in how the US and western militaries have thought about requirements for capabilities over time in ways that now require challenge.
Shashank Joshi: There was a really interesting speech. The Chief of the Defence Staff in the UK, the head of our armed forces, currently Admiral Sir Tony Radakin, gives a speech every year at Christmas at the Royal United Services Institute, the think tank in London.
Michael Horowitz: Is there port?
Shashank Joshi: There is wine after the lecture, but we’re not drinking port during the lecture.
Michael Horowitz: But I’m imagining like brandy and cigars and port and like a wood-paneled room.
Shashank Joshi: You’re not far off on the room itself.
Jordan Schneider: Are there oil portraits? Do we have, like, Haig and stuff?
Shashank Joshi: There are oil portraits. I must declare I’m on the advisory board at RUSI, so I’m very fond of it. But anyway, I’ve gone down this rabbit hole. The reason I brought it up is because I wanted to quote a line from that speech he gave at Christmas where he said:
[W]e have only been able to demonstrate pockets of innovation rather than the wholesale transformation we need.
Where we have got it right is because we used an entirely different set of permissions which elevated speed and embraced risk so that we could help Ukraine.
But when we try to bring this into the mainstream our system tends to suffocate the opportunities.
He then proposed a “duality of systems… Whereby major projects and core capabilities are still delivered in a way that is ‘fail-safe’ – clearly the case for nuclear; but an increasing proportion of projects are delivered under a different system which is ‘safe to fail’”.
Mike, this is pretty much what you told me when we spoke a few weeks ago, right? A willingness to embrace failure — not for your Ohio-class SSBNs or your bombers, but for your smaller systems, where the cost of failure is not terrible and you need to fail to innovate. That is so much of what it’s going to take for us to be more Ukrainian in our own systems.

Jordan Schneider: Let’s contrast that with a quote from the US side of the pond, from Bill LaPlante, who was the DoD’s top acquisition executive during the Biden administration:
“The Tech Bros are not helping us much… If somebody gives you a really cool liquored-up story about a DIU or OTA, ask them when it’s going into production, ask them how many numbers, ask them unit costs, all those questions, because that’s what matters… Don’t tell me it’s got AI and Quantum in it. I don’t care.”
Michael Horowitz: Bill LaPlante thinks that it’s still 1995. The thing that was remarkable about that quote is that even at the time, it was so profoundly incorrect in the way it described the ability to scale emerging capabilities. Shortly before we left office, there was a speech, or maybe it was in response to a question, where he said that he had learned from the Houthis and what they’d done in the Red Sea that it was possible to produce low-cost munitions. It’s unclear why what was happening in the US had not triggered that revelation. But he got there.
That is the mindset that requires challenge. If you view the only things worth using as an extremely small number of exquisite platforms, then that takes you down a road where emerging capabilities, even those you can scale, might seem less useful because they might require using force differently — if you’re operating at mass rather than just operating those exquisite capabilities.
The bigger challenge, though, is that every single major defense acquisition program in the US military is either behind timeline, over budget, or both. Most are both. The current system is designed to buy down risk and produce these great capabilities — and it does, but slower than it’s supposed to and at higher prices than it’s supposed to. That suggests the current system is not succeeding.
Risk is the right way to think about this. The scale and scope of the challenge posed by the Chinese military is unlike anything I have seen in my lifetime from an American perspective. That means that the assumption that undergirded the 90s, frankly, about the inevitability of American conventional military superiority, is just no longer the case. It’s not just that we can’t sit on our laurels, which is something that I think I wrote maybe a decade and a half ago. It’s that we are being actively pushed and challenged across almost all domains.
What that requires is accepting more risk in the capability development process, which I feel comfortable doing not only because I’m generally bullish on the ability of emerging technologies to deliver, but also because the status quo system just isn’t working.
Shashank Joshi: What we can’t ignore, then, is what is stopping us — or you, in the US case — from taking that risk. Often it’s the politics. You talked about how shifting 0.05% of the budget requires this Herculean bureaucratic and political effort on the Hill, pleading with Senators and Congresspeople: “Please let me move this $15 million here and there.” That’s not sustainable if you’re trying to have a systemic effect.
You have to have appropriators who are willing to say, “Actually, I trust you with this money. I trust you to spend it in a way that’s flexible, that won’t lock you into a single spending path, and that won’t waste it. Let’s test you on that in six months,” rather than micromanaging everything.
I don’t know how you’re going to be allowed to have the failure you need for the innovation you need if Congress doesn’t trust people to innovate at scale, not just in these little “pockets of innovation,” as Radakin called them.
Michael Horowitz: Every single rule that the appropriations committees have exists because of something that happened in the Department of Defense in the past. To be clear, we are in a different era now with a different set of risks and a China challenge that’s unlike anything we have faced before. We need a new bargain in some ways with the appropriations committee to be able to innovate at the speed and scale we need.
Keep in mind the Pentagon’s budgeting process was invented by Robert McNamara during the Vietnam era and has not changed since then. In the best of times, it is a two-year cycle between when one of the military services decides it wants to invest in a technology and when it gets the money to invest in that technology.
We’ve spent several years of the last decade and a half under continuing resolutions, which happen when Congress can’t pass and appropriate a budget. Under a continuing resolution you can’t start new programs, which delays adopting new capabilities even further. Something has to give there.
Jordan Schneider: I want to recommend the podcast series Programmed to Fail: The Rise of Central Planning in Defense Acquisition by Eric Lofgren, who’s now working in the Senate. He used to run Acquisition Talk and do shows with me about this stuff. You know, it wasn’t just McNamara — it was McNamara trying to learn from the Soviets.
Michael Horowitz: The assumption was we’ve got the technology we need. We think that our basic tech development system works. What we need to be able to do is produce this stuff and produce good stuff, and then we’ll beat the Soviets. It worked, but we’re in a different period now.
Shashank Joshi: There’s an interesting book I wrote a review essay on for Foreign Affairs about a year ago, by Edward Luttwak and Eitan Shamir, head of an Israeli think tank, called How Israel Fights. I don’t find all of it persuasive, but it raises the question of how this country in the 1960s — an agrarian society poorer in GDP per capita than many parts of Southern Europe — produced anti-ship missiles able to defeat the Soviet weapons carried by the Egyptians and Syrians of that era.
What did they do right then, and what have they done right since? There are many things they’ve done wrong, and there are many cases in which tech innovation did not help them strategically or even contributed to complacency. But there’s something about that innovation, including innovation under conditions of peacetime or semi-peacetime, that I think we should be thinking about.
Jordan Schneider: I have a five-hour Ed Luttwak episode in the can that I’ve been dreading editing. But we did get into it, and there is something about this topic. It comes back to some of the Ukraine stuff — Israel is a semi-mobilized society. It’s playing at a smaller scale.
There’s this great anecdote where someone walks into an office and says, “You should arrange the tank this way instead of that way.” Then they do it, because somebody thought it was a good idea. You take all his stories with a grain of salt, but conceptually the point stands: this all happens among friends, in a small network of, by the way, the best minds in the country.
Michael Horowitz: Whereas in our system, fourteen different people can say no and stop a capability, but no one person can say yes and move it forward.
Jordan Schneider: One of the many shames of the Trump imperial presidency is that despite having enough control of Congress to do this well, it’s Pete Hegseth who gets to lead it, just one of these unfortunate timelines we’re in, because the President couldn’t give two shits about this stuff. Maybe there are enough tech people around him, though.
Michael Horowitz: Let me muster a point of optimism here, frankly. In the brilliant article that Shashank wrote in The Economist on some of these questions, I sounded a similar note. If you look at Hegseth’s testimony, his discussion of defense innovation is very coherent. He makes points that are not structurally dissimilar to the ones we have been making for years now.
If you look at Stephen Feinberg’s testimony yesterday to be Deputy Secretary of Defense, he actually makes some very similar points, and you hear some of those echoed by various tech-sector folks who look to be entering either the White House or the Defense Department.
There is a potential opportunity here for the Trump administration to push harder and faster on precise mass capabilities, on AI integration, and frankly, on acquisition reform in the defense sector, because the president right now seems to have a strong hand with regard to Congress. Whether the president is willing to use political capital for those purposes, and how the politics will play out, is unclear. But if the Trump administration does all the things it says it wants to do from a defense innovation perspective, that may not be a bad thing. A lot of what they want to do is very consistent with things many of us have advocated for over the years.
Shashank Joshi: I really admire your dispassionate assessment of that and the willingness to think about it apart from the politics. My concern is that you have people who are good at radically disrupting many businesses, sectors, and fields of life. But the skills required to do that are different from the skills needed in a bureaucracy like this.
Just because you were able to navigate the car sector and the rocket sector doesn’t mean you know how to cajole, persuade, and massage the ego of a know-nothing congressman from — I’m not going to name a state, because that’ll end up being rude — who simply cares that you build the attritable mass in his state, however stupid an idea that is, and who wants you to sign off on the $20 million.
I worry that they will either break everything — and what I’m seeing DOGE do right now, with a level of recklessness and abandon, is worrying to me as someone from a country that is an ally of the United States — or that they will just not have the political nous to navigate these things and make them happen. Just because Trump has sway over Congress doesn’t mean the pork-barrel politics of this at the granular level fundamentally change. You need operatives, Congressional political operatives, and a tech bro may have many virtues and skills, but that isn’t necessarily one of them.
Michael Horowitz: No argument. There’s a huge gap between being willing to lean further forward on defense innovation and transformation and the ability to, as a friend of mine is fond of saying, bureaucratize: to get the job done and deliver. The Pentagon is the world’s largest bureaucracy, and it will continue to be the world’s largest bureaucracy whatever happens. Delivering results requires a lot of bureaucratic and political acumen. It is a very open question whether this administration will be able to deliver on that. Frankly, there are early signs that are concerning. But again, it’s still early days.
Jordan Schneider: I want to reflect a little bit about the role of inside knowledge and outside knowledge when it comes to understanding what’s happening in Ukraine as well as the future of war. What does stuff like Shashank’s reporting get? What can it not get? How does all of the open source analysis that’s happening today about the war in Ukraine filter into discussions about budgets in Congress and R&D?
Michael Horowitz: Systematically drawing insights from open-source material, both Shashank’s and others’, in ways that inform what we do in the Defense Department is important. On Ukraine lessons learned, there are actually a number of different efforts, both classified and unclassified, that try to dive into those things. In the case of Ukraine specifically, there’s a heroic effort, both inside and outside the Ukraine desk at the Pentagon and the Joint Staff, to do that.
The challenge sometimes is making some of those insights more visible and then connecting them to the change that you wish to see. Part of the issue is that folks are really busy in ways that are sometimes even difficult to comprehend on the outside. Your read time, even to read things that you really want to read, is just extremely limited.
The role of an influential columnist like Shashank that lots of people trust is invaluable because for a lot of senior folks, that might be the only outside thing they read about defense in a given week. This points to the importance of networks. I think about this a lot from the perspective of how academics should bridge the gap between academic research and policy — networks play a huge role there.
In the case of Ukraine, I think there actually has been really good pickup inside, at least in the US, on what lessons learned look like because of a lot of the great reporting out there. Sometimes people would say, “Why haven’t you purchased this drone that Ukraine uses?” It’s a great question, but it’s really challenging to consume all of the information out there that you should consume. A lot of it ends up getting mediated through staffs.
Jordan Schneider: Here’s a deep cut for you guys. Ian Hamilton, who ran the Gallipoli campaign, had covered the war in Manchuria between Imperial Russia and Japan as an observer; he saw the future of war and wrote about it really clearly. But even though this guy was there, and then was on the battlefield running it in Turkey, he was not able to apply the lessons he had seen firsthand; he ended up killing a few hundred thousand people who probably didn’t need to die if he had made smarter decisions. None of this shit is easy.
Michael Horowitz: As part of research for a future book, I was at the National Archives in College Park last week looking for information on US military procurement decisions in the early 20th century surrounding general-purpose technologies. I found some really interesting back-and-forth between the War Department and the Wright brothers about the airplane that sounded a lot like modern debates. The Wright brothers were saying, “Well, send us the cash and we’ll send you the airplane.” And the War Department was responding, “Prove it works and meets these metrics, then we’ll pay you, and then you deliver the airplane.” It was like, “Oh dear God, maybe nothing has changed” in some of these debates.
Jordan Schneider: Shashank, any reflections on the role of popular writing in all this?
Shashank Joshi: I’m amazed by the cut-through we can sometimes get. People will say, “I can’t take my classified system on a plane to read, but I can take a copy of the Economist.” So you suddenly have this responsibility. I’ve had deep experts on something like armored warfare and tanks say, “I’ve been screaming this message into the ether for years, but it was only when you quoted me that this general read me.”
We’re sometimes in the strange position of being — I don’t want to say conduits, because we would never wish to be uncritical conduits for anything — channels that can short-circuit these networks and cut across them in strange and amusing ways. I have the grave responsibility of not only telling my readers about the big stuff, like what Trump is going to do next or where Ukraine is headed, but also — if this is not too condescending — feeding them their vegetables, making them think about Mike’s essay on precise mass.
Maybe I have to bury it in a piece about the future of drones, but I can make them think about budgeting. Mike tells me you have to understand budgets to understand innovation. Then I think, “Okay, now there’s a challenge. My editors may not like me talking about budgets for a page, but it’s my job to get it across, to make people read it and listen to it.”
I’m fortunate that I have access to expertise like that of Mike and others to be able to translate that. Fundamentally, Jordan, I see my job as not giving people the answers. It’s just giving them a sense of the debates that the knowledgeable people are having. That’s not to say one person’s right, one person’s wrong, but to say, “Here’s the lay of the land. Here are the arguments on each side. Here are the debates,” and give them a flavor. Let them peer through that window into the world of the conversation that Mike may be having with his colleague on what they disagree on.
Jordan Schneider: What you do, what Michael Kofman does, what Mick Ryan does, what Rob Lee can do — which I imagine would be harder if you are sitting on the Ukraine desk and your job is to cover what’s happening in electronic warfare — is to think about all this synthetically and across the different domains. In your case, Shashank, even across regions.
This is what I feel like I do in some sense with ChinaTalk as well. Shashank, you have an editor who it seems like you can just bowl through at this point, and I’m very glad you can basically write budget articles. Picking what you want your readers to read and think about is the game. Doing that in a really thoughtful way when there is so much new happening, so many battles occurring in every moment, and so many tactical innovations and counter-responses is essential. This is particularly important for the public at large and also for senior leaders who only have about 3% of their time to really sit down and absorb this stuff — or God forbid, the president.
Shashank Joshi: Or the vice president who has lots of it.
Jordan Schneider: Mike, continuing on, why outside writing matters.
Michael Horowitz: Now that I’ve left the government again, I think about how, as an outsider and an academic, to try to influence policy. One way to think about it is this: if you want the US Government and the national security arena to be doing something and they’re not doing it, there’s usually one of two reasons.
First, you might be wrong. There could be classified information or some other information you don’t have access to that shows you’re wrong.
Second, you’re right, but your bureaucratic allies are losing. It is hubris, given the size of the national security agencies, including the Pentagon, to think that you have some idea that literally nobody in the entirety of the Pentagon, the intelligence community, and the State Department has thought of. Generally, somebody wants the same thing that you want, but they’re losing. The question is how you give them ammunition to help make the case to move that policy forward.
In the kind of writing that involves advocating for policies, I think it makes sense to think about how you’re providing support to your bureaucratic allies, even if you don’t know who they are and even if they don’t know you until they see something you wrote show up in the Early Bird at the Defense Department in the morning, or through some sort of press clippings. That’s how I think about the role of outside writing and how you can try to influence policy.
Jordan Schneider: It’s always weird for me when I write something in ChinaTalk and get an email from someone I’ve never met saying, “Thanks for this. This was helpful.” Just putting your arguments with some good, thoughtful analysis out into the ether sometimes works in mysterious ways.
Michael Horowitz: There’s this fiction that you write an op-ed and it somehow ends up on the desk of the president, and then all of a sudden US foreign policy changes. That’s just not how the real world generally works, especially if what you’re trying to influence is policy within a bureaucracy.
I’m encouraged to hear that people within the government are listening to ChinaTalk and getting insights from it. That’s terrific. That’s evidence for the idea that you have allies, and you’re trying to help give them ammunition to make the case in whatever fora they’re engaged in.
Jordan Schneider: I’ve got one last question for you guys, if you don’t mind. All of this is so much more interesting than reading, writing, and doing podcasts about the PLA itself. I’ve read a lot of PLA books and really thought about doing more shows on them, but it’s so difficult to talk in hypotheticals when you’re reading doctrine and doing the OSINT and whatever. It’s just so hard to actually learn things and talk about them in an interesting way. I’m curious whether you guys have any advice for me on what better PLA coverage, in outlets like ChinaTalk or more broadly, could look like to get people thinking more seriously about all this.
Michael Horowitz: This is super hard. We think about this a lot actually in the context of PhD students, junior faculty, and what the academic China-watching community is doing in this space. Especially given the way that Xi has consolidated control in China, there’s still a lot available, but there’s so much less available frankly than there was 20 years ago or even 10 years ago. That creates analytical challenges because what you can get raises questions like, “Well, why could I get this?” There are essentially, to be really nerdy, selection effects that govern what you’re able to access.
The truth is, there’s probably no military in the world where we have a greater uncertainty parameter about its potential performance in a conflict than China’s military — the PLA — because it’s just been decades since it fought. We know what weapons they have. We know what their doctrine says. But the ability to put all that together, as we know from the Russia-Ukraine context or any war in history, is very different from what it looks like on paper.
I do not envy the task. All you can do in some ways is acknowledge that irreducible uncertainty and do your best to give folks the information that is available.
Shashank Joshi: I would just add to that: let’s learn from our analytical errors in the past and think about how they might apply. I’ve really enjoyed some of the writing done by people like Sam Bresnick at the Center for Security and Emerging Technology at Georgetown on the way the PLA thinks about AI. He doesn’t say, “Oh, they’re miles behind, that’s useless.” But he does give a flavor of Chinese debate, saying they worry about many of the same things that we do. They worry about explainability, about control, command, oversight. They even worry about ethics — that’s not completely absent. They’ve got issues around compute capacity and all these other things.
It’s just helpful to be reminded that, for all they say on the page about intelligentized warfare and this and that, they’re grappling with some of the same challenges that we all are. I admire the work of Sam, his colleagues, and many others who think about these questions from a very fieldwork-based, empirical perspective: getting their hands dirty, reading the stuff, talking to people, looking through journals. That’s great work.
Jordan Schneider: Well, as Sam’s peer advisor in high school when I was a senior and he was a sophomore, I am going to take complete credit for all the brilliant work that he’s done both in Beijing and now in Washington. All right, last thing. Each of you give one book for everyone to read.
Michael Horowitz: My junior colleague at the University of Pennsylvania, Fiona Cunningham, is the most talented academic scholar of the Chinese military of her generation. She has a book that just came out with Princeton University Press titled Under the Nuclear Shadow: China’s Information-Age Weapons in International Security. I would highly recommend checking out Fiona’s book. [We recorded a pod already!]
Shashank Joshi: The orthodox choice is someone who came up earlier — Paul Scharre, whose book Army of None I still think is fantastic on the issue of autonomous weapons. It’s brilliant on the history and thinks about the question historically, a great read for anyone still thinking about autonomous weapons.
But the slightly left-field choice I want to put out there is The Billion Dollar Spy by David Hoffman, the story of one of the CIA’s most difficult operations in Moscow: running Adolf Tolkachev, a Soviet engineer. The reason it’s relevant to this conversation is that it’s about the application of technology to operations — in this case, intelligence operations, running an agent in Moscow: communications technology, miniaturization, the way the emerging plastics and transistor industries affected the CIA’s choices in the ’50s, and the way that changed with satellites. I love the idea of thinking about this in a completely different field, intelligence and espionage, and the parallels and ideas it may spark for us in thinking about the defense world.