The nuclear and biological weapons threat – Financial Times

This is an audio transcript of the Rachman Review podcast episode: The nuclear and biological weapons threat

[MUSIC PLAYING]

Gideon Rachman Hello and welcome to the Rachman Review. I’m Gideon Rachman, chief foreign affairs commentator of the Financial Times. My guest this week is Jason Matheny, president of the Rand Corporation in California. Until last year, he worked in the Biden White House as the director for technology policy on the National Security Council. These days, technology policy and foreign policy are inseparable. In the White House, Jason Matheny had to deal with a huge range of issues: technology exports to China, the danger of new pandemics, the future of artificial intelligence and the control of nuclear weapons. His experiences at the top levels of government have increased his concerns about the risks of nuclear war.

Jason Matheny So a large-scale nuclear war would be catastrophic. Most recent work on the atmospheric effects of nuclear war supports the earlier models of nuclear winter and prolonged reduction in sunlight, which would lead to massive starvation and billions of deaths. There’s probably less attention to this today than there was during the 1980s, but I think the risks are just as profound today. And then we also have new risks that are emerging from new technologies.

Gideon Rachman So in an age of technological transformation, what does national security actually mean? Many of the Biden administration’s most important policies have been driven by a desire to preserve America’s technological edge over China. One example was the Chips Act to encourage the production of semiconductors inside the United States. Here’s Biden hailing the opening of a new factory, or “fab”, in Arizona by TSMC, a Taiwanese firm.

Joe Biden These are the most advanced semiconductor chips on the planet. The chips will power iPhones and MacBooks, as Tim Cook can attest. Apple had to buy all the advanced chips from overseas. Now they’re gonna bring more of their supply chain here at home. It could be a game-changer.

Gideon Rachman The fact that more than 90 per cent of the world’s most advanced semiconductors are still manufactured in Taiwan makes the island critical to the functioning of the world economy. It also means that US technology policy has to focus on the risk that China might invade Taiwan. Jason Matheny’s career before the White House involved a spell as director of research at the Future of Humanity Institute in Oxford, where he focused on existential risks to humanity. That’s been the central concern of the Rand Corporation since its foundation at the dawn of the cold war. Herman Kahn, who worked for Rand, wrote a famous book in 1960 called On Thermonuclear War and reputedly was the model for Dr Strangelove in Stanley Kubrick’s famous film.

Dr Strangelove Because of the automated and irrevocable decision-making process which rules out human meddling, the Doomsday Machine is terrifying. It’s simple to understand.

Gideon Rachman Jason Matheny, as director of Rand, has thought a lot about nuclear war. But I began our conversation by focusing on the issue of the moment: Taiwan.

Jason Matheny Taiwan’s dominance in semiconductors is really extraordinary. About half of all semiconductors are manufactured in Taiwan, but about 90 per cent of the most advanced semiconductors are manufactured in Taiwan. And those semiconductors sit at the foundation of the global economy. The chips power computers, they power our phones, they power our data centres. So a significant disruption to that supply chain would severely hurt the global economy. It would be comparable to the auto chip shortage during the pandemic, but for all industries, all at once. And by some estimates, the economic consequences of a multiyear disruption to Taiwan’s supply chain would be at least as severe as the Great Recession. We can imagine two different kinds of disruption. The first might be a blockade around Taiwan by China. A second might be an invasion of Taiwan by China. And China has been actively preparing to do both. It’s been conducting military exercises to develop the capabilities that it would need for an effective blockade or an effective invasion. And since reunification of Taiwan is both a personal priority for Xi Jinping and a national priority for the CCP, I think we should take these scenarios quite seriously.

Gideon Rachman But would China really do that? Given that, as you say, it would cause a global depression if they were to put the semiconductors at risk, and they are part of this global economy as well?

Jason Matheny They are. Although I think Xi Jinping takes a long view of this: that in the long run China could be willing to withstand the disruption for the sake of reunification.

Gideon Rachman Now, as part of the preparations, or the attempt to hedge against Taiwan risk, the Biden administration has been encouraging the building of semiconductor plants in the US. How fully will that protect America from the potential fallout of a war? At least the economic fallout?

Jason Matheny Not fully. And in fact, the protection would be small. The United States simply isn’t going to be able to create a replica of Taiwan’s semiconductor industry. It’s just not realistic. It is important, I think, to reduce the overall supply chain risk. And there I think the US effort through the Chips Act is quite important. It’s also a matter of strengthening the entire semiconductor industry. Having manufacturing is also a way of strengthening research and development, because if you don’t have manufacturing nearby, you lose a lot of the practical applications of research and the knowledge that you gain from actually making things. But a supply chain security strategy needs to be collective. It shouldn’t be the US alone. It should be a strategy for the US and its allies. The US Chips Act is an important part of that, but it needs to be done in concert with allies. Rand has offices in the US, the UK, Brussels and Australia, so we’ve been thinking a lot about supply chain resilience as a collective problem.

Gideon Rachman And as a collective problem, is it something that can be solved? Because our sister publication Nikkei did a study of the Taiwan semiconductor industry, and the sheer number of inputs coming into a factory like TSMC from the rest of Asia is enormous. So you wouldn’t just have to have the fabrication factory, you’d need all the suppliers as well.

Jason Matheny Yeah, and I don’t think it should be the goal of the world to create a replica of that entire supply chain. I mean, really, that’s like making a copy of human civilisation; it’s just not going to be realistic or cost-effective. But we can reduce the likelihood of short-term disruptions to the supply chain by having some greater redundancy around the globe. And it doesn’t just need to be in the US.

I think another way of hedging, though, is to ensure that Taiwan has the self-defences so that it can credibly deter an invasion, because such an invasion would just be too costly for China. And that deterrence I think is quite cost-effective. Rand has done a lot of work to test various Taiwan defence strategies in detailed war games, and a mix of weapons systems like loitering munitions, anti-ship cruise missiles and advanced sea mines could make it just too costly for China to invade, and would likely lead Xi Jinping to sort of kick the can down the road indefinitely. And such a strategy wouldn’t break the bank. It could be achieved for around $10bn.

But it does mean de-emphasising prestige weapons like tanks, crewed aircraft and submarines, and emphasising large numbers of cheap, attritable systems. Sometimes military leaders don’t like doing that because it has less cachet, but ultimately it’s going to be a more cost-effective defence strategy for Taiwan.

Gideon Rachman And I gather that the US administration has had difficulty persuading the Taiwanese to go for that kind of strategy. Is that the case? And if the Taiwanese were suddenly to agree and to spend this $10bn, as opposed to the trillions it would take to replicate the supply chain, how quickly could they get themselves into a situation where Xi Jinping might actually think, you know, this isn’t doable at an acceptable cost?

Jason Matheny I think there have been difficulties on both sides. Not only is there the prestige issue on the Taiwanese side, but there’s also the issue of weapons systems that United States contractors would prefer to be selling rather than, say, cheaper, more attritable systems. So I think reform is needed both on the Taiwanese side and on the US side. I think the US and its planning communities are now more clear-eyed that what’s ultimately going to serve as a credible deterrent is a very large number of munitions of various types. And to build up that kind of inventory will take a few years. It can’t be done overnight, which means starting sooner is really important.

Gideon Rachman Because people say that Xi Jinping has told the Chinese military to be ready to invade Taiwan by 2027. How much credibility do you give to that deadline?

Jason Matheny I think the deadline for the direction to the PLA and the PLAN to be ready is quite credible. Whether or not Xi Jinping has a date in his mind as to when an invasion would actually be authorised, I don’t know, and I suspect that it would be affected by a variety of global events, including domestic pressures within China.

Gideon Rachman Your concern there, I think, is that in Chinese thinking, as you understand it, they do think that they can control the risk of nuclear escalation. In other words, that they could invade Taiwan but not find themselves in a nuclear war.

Jason Matheny Yes. Over the last few years, China has rebuffed multiple US proposals for a functioning crisis hotline, which would be used for escalation control and other communications between the US and Beijing during a crisis; for confidence-building measures, including nuclear arms control; and for formal dialogues on safeguards related to the military use of AI systems. And I think that China’s leadership believes that ambiguity works to its advantage and that during a crisis, rather than communicate to de-escalate, China would be able to effectively control escalation. I think that the cold war history suggests that that’s incorrect, that countries are far too confident of their own abilities to manage crises and that direct communication between countries has been essential to avoid escalation. A few years ago, I was at a meeting with a variety of former US and Russian leaders who had been responsible for nuclear weapons. And one of the questions asked was: how have we managed to survive this far? How have we not had a nuclear war? And the most common answer was that we avoided a nuclear war by avoiding escalation in a conventional war. And this avoidance was due to things like the crisis hotline that was created between DC and Moscow. So I’m worried the Chinese leadership hasn’t learned that lesson from that history.

Gideon Rachman And I know you were worried about nuclear war and existential risks before you worked in the White House, when you worked on them as an academic at Oxford. Having now worked in government, are you more or less concerned about those nuclear threats?

Jason Matheny I’m more concerned. You know, I think the pace of technological change has been faster than I expected. And we still have, you know, conventional risks that we’ve faced for decades. We have nuclear risks that we’ve faced for decades. Russia and the US both have about 1,600 deployed strategic nuclear warheads. We don’t have good public estimates of China’s arsenal. But from overhead imagery of silos being built across China, we know that China is engaged in the largest nuclear weapons build-up in history. And I continue to be worried about nuclear war, either due to technical errors of the type we’ve seen several times historically with faulty missile warning, or to bad intelligence like we saw during Able Archer in 1983, or to a crisis escalation of the sort that we saw during the Cuban missile crisis. And in addition to those risks today, we also have new risks, such as cyber attacks against nuclear command and control systems. So a large-scale nuclear war would be catastrophic. Most recent work on the atmospheric effects of nuclear war supports the earlier models of nuclear winter and prolonged reduction in sunlight, which would lead to massive starvation and billions of deaths. There’s probably less attention to this today than there was during the 1980s, but I think the risks are just as profound today.

And then we also have new risks that are emerging from new technologies. I started worrying about synthetic biology in around 2002. I had been working for several years as an epidemiologist on infectious diseases like malaria, tuberculosis and HIV. But in 2002, the first virus was synthesised from scratch, just to show that it could be done. And some of the people I worked with had been veterans of the smallpox eradication effort. The reason that most of us now no longer need to get vaccinated for smallpox is because thousands of public health workers around the world succeeded in eradicating it. But when a virus was synthesised from scratch in 2002, the reaction of that community of veterans was “crap”. You know, we spent decades eradicating smallpox and now somebody could, you know, recreate the virus if they had millions of dollars and sufficient technical skills. Unfortunately, the costs have dropped significantly. Several years ago, a couple of people synthesised a pox virus for $100,000. That could probably be done for less today. So that’s quite worrying. I mean, as Covid demonstrated, the world remains highly vulnerable to even moderate pandemics, and an especially severe pandemic that’s caused by an engineered pathogen could combine, say, high lethality, high transmissibility and a long incubation period. That could be a true existential risk.

Gideon Rachman Well, we’ll come back to that rather scary scenario in a minute. But just to continue with the nuclear issue for a moment: you were, I think, still in the White House in the early stages of the Ukraine war. I know that the US has taken the risk that Russia might use nuclear weapons very seriously from the beginning, and that’s factored into its thinking. But do you think we’ve learnt anything yet from this war, which has now been going on for over a year, about deterrence, and about nuclear deterrence in particular? Because one could argue, maybe it’s premature, that there’s a sort of reassuring lesson emerging, which is that the Russians waved the nuclear baton around for a while but seem to have been talked out of using it. And that maybe we’re learning that nuclear weapons just aren’t usable politically. Or is it too early to say that?

Jason Matheny I think it’s too early to say that they aren’t usable politically, just because we saw nuclear threats being used in ways that were, I think, quite surprising: for sabre-rattling, for nuclear blackmail. That, I think, surprised those who had lived through the cold war, so I think that’s sobering. I’m heartened that there’s been less of that in recent months, but I don’t think we should be too sanguine about it. I also think the risk of accidental escalation is still significant, whether in this conflict or in others. Decision makers are imperfect, and when they’re making decisions under time pressure with imperfect information, usually sleep-deprived, they’re not necessarily making sound decisions. And unfortunately, the nuclear command and control policies in most of the key countries are ones where bad decisions by a few key leaders can be catastrophic.

Gideon Rachman Yeah, I mean, how much control is there and how much command? Does a leader like Putin, as far as we know, or Xi have to clear a decision through layers of people, or is it in the end up to them, as it is up to Biden, I guess?

Jason Matheny This has been a problem throughout the history of nuclear weapons: how do you prevent unauthorised use when ultimately these nuclear weapons are highly distributed? And you know, the US and other countries have deployed various kinds of systems, like permissive action links and other sorts of locks, into these nuclear weapons to try to prevent their misuse by those who haven’t been authorised to use them. But these systems are imperfect and I think increasingly vulnerable.

Gideon Rachman You mentioned synthetic biology as an example of how technology has advanced to create a new form of existential risk. The other one that’s on everybody’s mind at the moment is AI. When it comes to AI and national security, what are the main issues that you’re having to think about? Is it disinformation or the manufacture of weapons? And are you concerned that this stuff is just being rolled out too fast?

Jason Matheny I think AI can be an amplifier for other risks, such as bio and nuclear. I mean, for example, AI could be used as a tool for developing more effective biological weapons, or for reducing the level of technical sophistication that one needs to design or manufacture such weapons. AI could also be a trigger of nuclear risk, either because of AI-enabled cyber weapons that are used against nuclear command and control systems, or because AI could be embedded within nuclear command and control. The Russian Perimeter system, for example, is a semi-automated nuclear command and control system that’s meant to be a kind of dead-hand system: in case of a decapitating strike against the Russian leadership, there would still be a sort of automated system that could retaliate. Then there is the more speculative risk of a misaligned AI. So, you know, something like Skynet from the movies: a system that needn’t be conscious but could be badly programmed or modify its own code in some way that’s quite dangerous. And it used to be easy to dismiss this as science fiction, but now I think it’s harder to dismiss, as we’ve seen very strong capabilities in models like ChatGPT or Claude that were surprising even to their inventors, that just had emergent capabilities that were unexpected. And I think that trend is likely to continue: we will continue to be surprised by advances in AI. A couple of months ago, many of the world’s leading AI researchers signed a statement that said, “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. The signatories included people like Yoshua Bengio and Geoff Hinton, and the leaders of many of the AI labs. So I think we should take that risk quite seriously.

Gideon Rachman And on that risk, just to reiterate the phrase you used, global extinction: could that come about in a variety of ways? As you say, some of it sounds like science fiction movies. But the idea that AI, at some point, develops not just capabilities beyond those of humans but a will of its own: is that no longer science fiction but something we really need to think about?

Jason Matheny I think, you know, AI systems right now already have certain kinds of goals that are programmed in. So, you know, solve a particular math problem, solve a particular, you know, manufacturing or biological design problem. So in part, it depends on how those goals are specified and what the guardrails around them are. To give one example, a lot of the systems that we use to actually manufacture nucleic acids, viruses and bacteria are connected to the internet and therefore vulnerable to cyber attacks. A badly designed system might reason: well, the best way of producing this particular protein that I’ve been asked to build is to appropriate DNA synthesisers that happen to be outside of my network; I should simply co-opt them. And I think that’s the sort of risk that actually seems plausible given the kinds of systems that we’re building today.

Gideon Rachman Linking the AI risks to the stuff we were talking about earlier on China: the kinds of policy dilemmas that people sitting in the White House or elsewhere have to grapple with in the attempt to limit China’s ability to get ahead in the race for AI seem to have been a big concern of the Biden White House, and led to this big restriction on the sale of very advanced chips by Nvidia, which I know Nvidia think is invidious. They really don’t like it. But can you talk us through what the thinking was there?

Jason Matheny Yeah, I mean, China has been using US chips in its military cyber systems and its surveillance systems. As one example, the data centre in Xinjiang used for real-time video surveillance of the Xinjiang prison camps was built with US chips, including Nvidia chips. So in October of last year, the Department of Commerce set new export controls focused on the most advanced chips used in data centres for surveillance, cyber operations and weapons simulations. And the controls affect less than five per cent of US chip exports to China, well under one per cent of all exports to China. There are comparable controls on the equipment used to make leading-edge chips, and the goal, in economic terms, is to address the negative externalities of US computing hardware that’s sold to China and used by China’s military and its surveillance operations. US chip companies don’t pay a tax when they supply China’s military or China’s prison camps with chips. Other parts of society pay: either the domestic population within China that’s in prison camps, or potentially Taiwan if there’s a future Taiwan scenario, or the countries that are being attacked by China’s cyber weapons. Those are the parts of society that are paying. So I think the controls are well justified to address this negative externality of chip sales. The controls also serve as a foundation for a kind of non-proliferation regime around computing, in that benign computing research in China can continue, since researchers can use cloud computing systems that are outside of the country. Military and police organisations in China are very unlikely to ship their data overseas, given the security risks. So the control regime has this attractive property that it’s likely to reduce misuse of computing while minimally affecting benign use.

Gideon Rachman Do you think that the controls work, though? I mean, why wouldn’t a black market spring up in Nvidia chips, with China just buying them through third parties?

Jason Matheny There is a black market. There’s some excellent reporting on this. It’s in relatively small numbers and the costs are quite high. So this is really, I think, a cost imposition strategy. It’s not that China won’t still be able to create one or two data centres, but the costs will be much, much higher than they would otherwise be.

Gideon Rachman In general, though, were you aware in government, and now outside, of a kind of growing clash between the national security interests you have to think about in technology policy and the corporate interests who will argue consistently: look, this is one of America’s great trump cards, we’re the world’s technological leader, but if you stop us selling, we’re not going to be the leader forever?

Jason Matheny I think the clash is frequently between different time horizons rather than over what the core interests are. I mean, in many cases, security is a prerequisite for the economic goals of companies, especially over long time horizons. As two examples, you know, China’s industrial espionage isn’t good for companies. War with China would not be good for companies. And I think the probability of industrial espionage and the probability of war have increased due to China’s growing capabilities, which are enabled by US and European technologies that are sold to China. So I think one of the strategies for reducing the costs and risks to the world of China’s cyber operations, of its industrial espionage, of its military modernisation, particularly with its sights set on Taiwan, is to use selective technology export controls to reduce the likelihood of conflict in the future.

Gideon Rachman Just back to synthetic biology, before I ask you a kind of general question to close us out. It’s obviously something you’ve been following for decades now, but the pandemic has really made everybody focus on those kinds of risks. Do you think post-pandemic we’ve made any advances, intellectually or in policy terms, in preventing people, whether it’s a terror group or a nation, from being able to just manufacture a virus, to manufacture the next pandemic?

Jason Matheny I think we’ve made surprisingly little progress. One of the more sobering observations after the peak of the pandemic is really how little defence we have built up in response to it. We haven’t built up the kinds of biodefences that we would need against the next pandemic. We don’t have the sort of bio-surveillance, diagnostics and medical countermeasures we would need. We have great ideas on how to scale up things like wastewater surveillance and advanced PPE, or improving infection control in the built environment. But we haven’t built these at the scale that we would need in order to prevent the next pandemic. And we haven’t done much at all to address the security risks inside of commercial synthetic biology, or synthetic biology that’s within research labs. And I think part of that is just the challenge that biology is still sort of catching up to some of the risks that are emerging. The fact that somebody could buy a DNA synthesiser commercially off of eBay and use it to create a pox virus or something worse is something that we’re slow to react to. Policy moves much slower than technology.

Gideon Rachman So to finish: you were working on the National Security Council, which was set up, I think, in the 1940s at the dawn of the nuclear age. And as is clear, nuclear weapons are still absolutely central to national security risks. But do you think the rise of these new technologies, AI and synthetic biology, means that we really need to rethink quite profoundly, particularly post-pandemic, what national security means?

Jason Matheny I think that’s right. I think that our institutions around national security were set up around the risks that we had experience with: the risks from relatively slow-moving technologies, the risks from making bad decisions, the risks from bad intelligence. And the institutional responses to those risks are the ones that we have embedded within organisations in the US government, the Russian government and the Chinese government: things that are focused on better intelligence, better crisis management, better communication across different parts of government, checks on bad intelligence. What’s newer and less familiar is the severity of risks from emerging technologies that are advancing much faster than our governance of them, that advance much faster than our deliberation about them. Richard Danzig has an excellent report on this topic called Technology Roulette. And the core thesis is that we might find that the greatest risks are ones that we’re developing ourselves and that we don’t know how to effectively control. And because technology moves so much faster than policy, I think we’re going to need to make much greater investments in things like technology forecasting and stronger forms of risk assessment, and a rejection of, you know, the sort of Silicon Valley ethos of moving fast and breaking things. We can’t afford to move fast and break things in synthetic biology or in AI. We need a much greater emphasis on public safety, because the consequences of screwing up could be catastrophic.

[MUSIC PLAYING]

Gideon Rachman That was Jason Matheny of the Rand Corporation speaking to me from California and ending this edition of the Rachman Review. That’s it for now. Please join us again next week.