Chapter 7 – So, What do You Think About This?
“Ready, Mr. President?”
“Yes, Dick,” Yazzi said. “I’m all ears.”
“As requested,” Dick Gould, the Director of National Intelligence, said, “we’ve created a detailed proposal for a public-private partnership to accelerate the advancement of AI R&D. The focus will be on advancing the state of AI capabilities rather than actual products – that part would still be left to the private sector. The elements are as follows.
“First, a set of detailed technology goals, such as the development of increased autonomous robotic capabilities, will be established that map to our best estimate of national security threats and military needs for both the near and the long term.
“Second, a set of ethical rules relating to the use of AI, big data, and privacy will be outlined for discussion and refinement, with the expectation that the final version will apply to all government contract work.
“Third, the participants will be sought from all appropriate sources: academia, private companies, research laboratories, and so on.
“Ownership of the results of the initiative is obviously an important and tricky topic. If we don’t get this right, employers either won’t let their people participate at all, or they won’t let them say anything if they do. Focusing on basic research will be the most important message here. There are plenty of examples of that approach working with industry collaborations and at universities.
“Still, we would expect and hope that patentable discoveries would result. If there’s a dispute when they do, an arbitration panel will determine which participant or participants will have the right to patent that discovery.
“Also, the government won’t claim ownership to any discoveries, or any technology or products that result from them. That said, all participants, and their employers, will agree that the government will have the guaranteed, non-exclusive right to purchase all software, chips and hardware that the employer of any participant develops based on the initiative.
“Of course, the entire process would be conducted under rigorous security and confidentiality. In addition, the government would provide a waiver from antitrust concerns, so participants can disclose whatever information and plans with their competitors they’re willing to share.” Gould paused, waiting for a reaction from the president.
“Sounds right,” Yazzi said at last. “How would you operationalize that?”
“The obvious first step would be to come up with the target list of participants. At the same time, a senior administration representative would reach out to the big high tech companies on a confidential basis to see whether they’re open to the concept,” Gould said.
“And the pitch? Both a carrot and a stick?”
“That’s right, sir. The first carrot is that all the individual AI experts, whether at big high tech companies, universities, or startups will have an opportunity to propose the ethical rules they’ll be bound to. Their employers should find it easier to take on government work after that, because their own employees will be helping to write the rules.
“The second carrot is that the administration will urge Congress to hold off on creating any new regulations until this initiative has been completed. Once it’s over, we’ll urge Congress to use the resulting rules as the basis for any new laws the folks on the Hill think are still necessary.
“The third is that the collaboration of so many experts should lead to an explosion of new products back in the labs that can be sold to public and private purchasers alike.”
“And the sticks?”
“I think the big high tech companies will figure out the sticks would be the mirror image of the carrots: the administration standing by while Congress goes on a regulatory binge, less lucrative government work to go around, and fewer new product ideas despite higher R&D budgets.”
“Okay,” Yazzi said. “It will be interesting to see how that sells. What will all this cost?”
“As government initiatives go, not much. We would pick up the travel and other expenses of the academics and any other non-profit participants. The high tech companies would be expected to pay their own way, but we’re thinking we would pick up the tab for meeting sites, meals, ongoing administrative costs, and so on. All told, under a hundred million dollars a year.”
Yazzi frowned, and said nothing.
“Any other questions, sir?” Gould said. “Is there something that doesn’t sound right?”
“There is,” Yazzi said. “What about the Chinese?”
Now it was Gould’s turn to frown. “I’m afraid I don’t take your meaning, sir.”
“Our current public policy is that we will neither develop nor deploy fully autonomous lethal weapons systems. But privately the Pittsburgh Project is capable of doing exactly that. The Chinese have made no such commitment and we know through our intelligence that they’re already aggressively developing such systems. If their own spy agencies are as good as ours, they may have ramped up their program in reaction to our efforts – which is exactly what the Soviets did when they learned of the Manhattan Project, since you mention it. Either way, if we announce this new initiative, you can bet the Chinese will really go into overdrive on AI development, and we can be sure they’ll apply the same advances to their LAWS efforts as well. Once they do, we can expect the Russians will follow.
“If you assume – and I’m told by our military experts that we do – that a LAWS-heavy force will be superior to a traditional force, we would have no alternative but to go the same route. That would mean dialing the Pittsburgh Project back up and restoring our mothballed LAWS strategy. Then we’ll have a full-scale arms race on our hands with no end in sight. My goal is to get China off their LAWS track, not accelerate it.”
“Well, of course that’s a predictable result,” Gould said. “But isn’t that always the case with any kind of technical advance that can be weaponized? How could we prevent that?”
“By inviting the Chinese to participate as well,” Yazzi said.
“Excuse me, sir?”
“Let them into the process, on condition they sign a LAWS non-development and non-deployment treaty.”
“I get the concept, sir, but I’m having a hard time understanding why the Chinese would accept the invitation. Can you elaborate?” Gould said.
“Sure,” Yazzi said. “Let’s look back at the nuclear arms race. Neither side knew how many warheads and missiles the other side was building, and often one side or the other thought it was way behind when it really wasn’t. By the time the sides finally agreed on disclosure and arms control agreements, each had thousands of bombers and missiles and over ten thousand warheads – more than they could possibly use for any strategic purpose in an actual war.
“One of the things that made the arms control treaties possible was the fact that each side shared the details regarding its arsenal with the other side, and allowed for inspections as well. Once each side had confidence it knew where the other side stood, it was willing to decommission thousands of warheads and missiles. They also agreed not to test any new weapons, making it much less likely new and more terrible devices were designed, since their developers couldn’t be entirely sure how well – or even if – they would work.”
“But you can’t count or verify AI programs, sir.”
“Not programs, no. But you can count drones and robots. And if you have confidence that the other side isn’t ahead in the state of their AI skills, you’ll start thinking about where you could spend that part of your military budget instead.
“Also, don’t forget where much of this started – with the Made in China 2025 plan. If China wants to be co-equal with the United States by then, this is the guaranteed best way to get there. What China wants most is to dominate world trade. Developing ultra-sophisticated, ultra-autonomous, and ultra-expensive drones and other LAWS isn’t a goal in itself. It’s a way to serve a policy goal. If we offer China a better and cheaper way to achieve the same goal, they should take it.
“Finally, there’s a long history of Chinese behavior to take into account. China basically withdrew within its historical borders over five hundred years ago. True, when the communists came to power, they extended their power into their immediate neighbors, Mongolia and Tibet. But after developing nuclear weapons, they never bought into the arms race. They built just enough to be sure they would never be attacked by another nuclear power. Even now, their military expansion is directed at ensuring their access to resources and protecting their trade opportunities. I’m sure the Chinese president would rather spend his budget on raising the standard of living of the Chinese people to ensure domestic stability than blow it on accelerating a race to build weapons he hopes he’ll never need to use.”
Yazzi paused and looked around the table. “Thoughts?”
Carson Bekin spoke first. “It’s a very creative idea, sir. I’m concerned, though, about the political repercussions. Our good friends on the other side of the aisle would be sure to call this out as a sign of weakness, and those in the know about the Pittsburgh Project are looking for an opportunity to revive it, not bury it. They’d say you were selling out American interests, and a whole lot worse.”
“What do others think?” Yazzi said. “I’d like some more reactions.”
“Well, sir,” Linus Schulz, the Secretary of Defense said, “The argument about avoiding an expensive arms race does cut both ways. The right will compare you unfavorably to Ronald Reagan. The enormous amounts we poured into the arms race helped push the Soviets to realize they could no longer keep up with us, and that brought them to the table. Your initiative would make it easier for them to catch up in conventional arms.”
“That argument won’t hold up,” Yazzi responded. “The Chinese are far better off economically now than the Russians were then. The Russians couldn’t afford to keep up. The Chinese can.”
“Fair enough, sir,” Schulz continued. “But next our opponents in Congress will say you’re freeing up resources the Chinese can then use to fund their China 2025, Belt and Road, and traditional military buildup initiatives.”
The Secretary of State, Annie Gray, added her doubts. “Also, sir, if both China and the United States back off on militarizing AI, that will give Russia, and perhaps other countries, a chance to catch up to us on LAWS, and even surpass us.”
After a pause, Bekin spoke again. “Sir, there’s another factor I think we should consider. Let’s not forget that program – Turing, I think it was called – the one that guy at the NSA created. When it went rogue, it wreaked a lot of havoc, and even killed a few people. Are you sure you want your administration to go all in on making AI even more intelligent?”
“Absolutely,” Yazzi said. “High tech companies are moving ahead as aggressively as they can trying to achieve general intelligence in AI. You can bet they aren’t playing it as safe as they should, and as you point out we’ve already seen what can happen when someone doesn’t. Rules relating to containing and controlling AI will be part of what the project will be charged with creating. Those can become the foundation for regulations and international treaties as well.
“Anyone else?” he added.
Not one of his closest advisors had spoken up in favor of his proposal. He’d been prepared for that.
“I’m not sensing strong support here. So, here’s how I’d like to proceed. Next week at this time I want to meet again, this time to review the three worst-case scenarios our best experts on LAWS can come up with, assuming we and the Chinese end up in a LAWS arms race. I don’t want anything ridiculous, but I don’t want any punches pulled, either. I’ll see you all then.”
Chapter 8
Haven’t I Seen You all Somewhere Before?
A week had passed, and the same cast of characters had reassembled, joined by a LAWS expert who would provide the scenarios Yazzi had requested. His presentation was stark.
“In order to provide the broadest picture, sir, we explored three separate situations: the first assumes that China invests an average of one percent of its current gross national product each year for the next ten years towards developing and deploying LAWS. The second assumes that China, or another state, successfully hacks our own AI-enabled weaponry. And the third assumes that a super intelligent AI is developed and then control of that program is lost.
“Turning to the first scenario, let me comment on the assumptions. In 2018, China spent approximately one point nine percent of its GNP on its military – about two hundred thirty billion dollars. That’s only about thirty-eight percent of what we spent on defense in the same year, and China’s economy is now almost as large as ours. In this scenario, we would assume that some percentage of the current budget is redirected to LAWS development and the overall military budget is increased. Over ten years, this would provide over one trillion dollars for R&D and then production – an amount sufficient to build and deploy more than a million LAWS.
“This scenario should not be considered unlikely when it is remembered that China is already committed to ramping up its military spending, and that many types of battlefield LAWS can be manufactured, deployed and managed more cheaply than human warfighters, providing a strong incentive to transition from a human-based military to an AI-based force. Taking projected growth of the Chinese economy into account as well, the assumed annual expenditure would be less than three quarters of a percent of China’s GNP over the next ten years. By the end of that decade the overall defense budget of China would have decreased on a percentage basis as a result of retiring almost half of its current ground forces.”
Yazzi interrupted. “Do you have any idea what the costs on our side would be to adapt to such changes in the Chinese capabilities?”
“No sir, but they would certainly be substantial. As you’ll see from what I’m about to describe, much of our current warfighting capability could become irrelevant, as a robotic force would present a very different target profile than a human one.”
Yazzi had already seen evidence of that. “Thank you. Please continue.”
“Should China decide to follow such a course of action, a great variety of new threats could arise. For example, instead of facing massed troops subject to the control of human commissioned and non-commissioned officers and highly vulnerable to air attack, China could unleash hundreds of thousands of autonomous devices, each roaming the battlescape like a member of a guerrilla force, except that each would be capable of operating twenty-four hours a day and would be far less dependent on a supply chain to provision it.
“It would be extremely difficult for our current troops, tactics and weaponry to target and take out forces such as these. Small robots like these would be easy to camouflage, hard to detect, difficult to target and destroy, able to quickly group to launch an attack, and just as capable of melting back into the landscape. They could also operate with far less command and control infrastructure than our own forces.
“At the same time, airborne LAWS of all sizes would become ubiquitous. Some might be as small as bees armed with Sarin-tipped stingers. It’s not impossible to imagine forward positions being wiped out before they knew they were under attack.
“LAWS forces of these and other types could be very effectively deployed in small-scale situations like Russia’s intrusion into Georgia or Ukraine, or on a massive scale, such as an attack on Pakistan or India. It would be easy to imagine such a force taking effective control of vast areas within a matter of hours unless the defending forces were radically redesigned to address these new threats.
“Do you have any questions before I proceed to the next scenario?”
“Yes,” Yazzi said. “I understand your point about mobility, but how would a robotic force maintain control if the LAWs are constantly on the move?”
“Excellent question, sir. One would assume that the Chinese would at the same time mount an aggressive cyber war, taking control of radio stations and Internet sites and informing those living in the invaded country that anyone resisting would be tracked down and destroyed by one of the robots.”
“Wouldn’t that violate the rules of war?” Yazzi asked.
“Difficult to say, sir. During World War II the Allies firebombed Hamburg, Dresden and Tokyo, in each case incinerating close to, or more than, one hundred thousand men, women and children. One could argue that targeting individual citizens, rather than destroying them en masse, is a more humane approach.”
“Okay. Next question. If such an attack were to be launched today, how would we seek to counter it?”
“Sir, I’m not sure we could. It would be very much like when the British marched on Lexington and Concord and then were routed by colonial marksmen hiding behind trees. Our entire military configuration today is based on the assumption that the enemy will comprise massed forces and fixed points of vulnerability, such as ammunition depots and airfields. Faced with millions of targets instead of hundreds, we would have no way to establish more than temporary control of any area. The same issue would arise at sea, where one could imagine China releasing tens of thousands of autonomous torpedoes, each roaming the seaways looking for targets. Russia claims it’s already developed such weapons. Even assuming that we reverted to a vastly expensive and commercially disruptive convoy system, our current technology and weapons systems might not be able to detect and destroy such weapons in time to interdict them. The same weaponry could be used to strike U.S. bases abroad, and even coastal targets of the U.S. itself.”
“Very well,” Yazzi said. “Please proceed to the next scenario.”
“Sir, for this scenario we assume that the United States continues to develop sophisticated unmanned weapon systems. We already use drones not only for surveillance, but for taking out targets using air to surface missiles. Current rules provide for future AI-piloted fighters and bombers to be controlled by remote pilots. We’re also developing multiple land-based weapons systems of various sizes, power, and throw weight. Classified development work is also underway on autonomous submarine weaponry, as well as satellite-based systems.
“The unavoidable point that needs to be made under this scenario is that no internet-based system has ever been made that can absolutely be assumed to be immune to hacking. Even many ‘air-gapped’ systems, such as the Iranian nuclear centrifuges destroyed in the Stuxnet attack, have been compromised, despite the fact they were never connected to the Internet at all.
“In an autonomous system, the risk would be even higher, as an enemy could gain control without our being aware that this has happened. Note that even if a device is truly autonomous, it will still need to be informed by external data, such as GPS feeds and other sources of data essential to the completion of its mission. That means that air-gapping is impossible.
“With this by way of introduction, there is the potential for any number of dire scenarios. In-theater weaponry could be turned on our own troops. Stateside armaments could be activated to launch a broad-based attack against civilians here at home. Suffice it to say that no commander could ever feel completely safe reviewing his or her own autonomous forces. Any questions, sir?”
“No, I don’t think so. I believe the risk is so self-evident it doesn’t require any elaboration.”
“Thank you, sir. I believe the third scenario – the potential for a super intelligent AI to go rogue – can be handled briefly as well, since it has already happened to disastrous effect. That said, I’d be happy to elaborate if you wish.”
“No,” Yazzi said, “I’ve been well-briefed on the Turing incident. Thank you for your presentation.”
“Now,” Yazzi continued, speaking to all in attendance, “I hope this little exercise has made the impression I intended. If we do nothing, each of these scenarios poses an existential threat. In the first, at minimum, we would need to radically overhaul our entire military establishment – something that will be far more disruptive for us than for the Chinese, since our military is vastly larger while theirs has largely yet to be built. Even if we pursue this approach, we will face countless unknowns and opportunities to be outflanked by the Chinese and others.
“Turning to the second scenario, there is no complete defense that can be assured. The more we turn to autonomous systems, the more vulnerable we become. Indeed, the more successful we are in winning such an arms race, the greater the risk if those systems are compromised.
“With regard to the final scenario, the only rational approach must be to never give a super intelligent AI control over weapons that could be turned against humanity in a fashion not ordered by a human controller.
“Unless I am very much mistaken, I assume you are all now prepared to support inviting the Chinese to the table in an effort to avert an arms race that would turn each of these scenarios from a morbid exercise into a source of potential disaster. Am I correct?”
Yazzi glanced around the room. Either he was, or no one was confident enough to suggest the opposite.
“In that case, I’d like to move forward on the assumption we invite the Chinese – and the Russians, too – to the table. Also, the Brits and the Israelis, since they also have LAWS under development and they’d raise hell if we don’t offer them a chance to participate. If I’ve forgotten anyone, include them, too. While I accept the concerns you raised in our last meeting as valid, I think there’s more to gain than lose by giving this a try. I came into office promising this administration would work towards making the world a safer and more secure place. We know how the past worked out, and we can expect the same thing to happen again if we don’t look for new ways to confront old challenges. That’s it for today.”
As the meeting broke up, Yazzi motioned to Carson Bekin to stay behind.
“So, what do you think, Carson?” Yazzi asked.
“Honestly?” he replied.
“Yes. Let’s hear it.”
“I think you’ll get flamed by the conservatives and played by the Chinese and the Russians.” Bekin was certainly the only person in the administration who could be that frank with the president. But then again, they’d known each other since childhood.
“Granted,” Yazzi replied. “That’s the script every morning after my alarm goes off.”
“But seriously, Henry. Lou Hays will go ballistic. What’s your plan for handling him?”
Louis Hays was the Chairman of the Industry Advisory Panel on Military Robotics, a small body created by Yazzi’s predecessor to advise him on advancements in this area and make recommendations on how the U.S. military could best make use of these developments. The committee’s work, as well as its advice, was classified as Top Secret. Louis was also the Chairman of the country’s largest diversified defense contractor, although his appointment to the IAPMR had less to do with his domain expertise than his status as one of the largest contributors to the campaign of the president Yazzi had replaced. Straw’s control of the committee was secure, given that he had hand-picked each of its other members. His influence on the Hill was substantial as well, as most of those in the House and the Senate relied to varying degrees on the votes of defense workers in their states.
“I’ll make sure he hears about it first from me personally.”
“That will stroke his ego, sure, but not make him an ally.”
“Nothing short of turning the Pittsburgh Project back into an all-in LAWS initiative would do that. If military robots continue to require human guidance, defense contractors won’t be able to sell any more robots than there are controllers to pilot or supervise them. We’ll just have to work him the best we can. What else?”
“I don’t know, Henry. I meant it when I said it was a creative idea. It’s bold, too, and it would be a refreshing change to do something creative and bold. But will it work? I don’t know. And will it be worth the political cost? Same answer. On both, I’d call the odds long.”
“Well, you’re going to have to get past that, Carson. Whatever your private thoughts might be, I’m going to need you to show the commitment of a true believer,” Yazzi said.
Bekin’s eyebrows went up. “Why with this one any more than any of your other initiatives?”
“Because I’m asking you to manage it.”
Bekin’s eyebrows were trying to crawl off his forehead now. “Why on earth me?”
“Because everyone else thinks it’s a hare-brained scheme, too, and I trust you more than I do any of them.”
“You mean run the whole show?” Bekin asked.
“No, not the whole show. I’ll have to let State and their arms control people handle the treaty side. But for the rest, there’s lots of eligible candidates but no logical winner. The Pentagon, to start with, and then the CIA, NSA, and so on. They’ll each have to support a part of it and all of them will want to control it, but that’s not what I want. Let them be on an advisory committee if they want, but I want this project to report directly to me.”
“Have you forgotten that I’m also your Chief of Staff? Assuming you want me to keep that job, there’s only so much of me to go around,” Bekin said.
“Fair enough. So, your first job is to find the right people to run this on a daily basis. Once you do, you’ll be ninety percent of the way there.”
“And who would they be, as you see it?” Bekin asked.
“I may have rejected a lot of Dick Gould’s idea, but not all of it. His concept of analogizing an AI initiative to the Manhattan Project makes sense. The folks who ran that initiative were allowed to break all the rules in order to achieve a vital mission on an impossible schedule. For that project, they picked Robert Oppenheimer, a physicist, and General Leslie Groves. Their collaboration was uniquely effective, and their success in an incredibly short time frame was unprecedented. This time around, we’ll need a universally respected AI expert who’s also a great team leader and a top-flight business manager capable of working with the AI expert and a lot of other scientific prima donnas.”
Bekin started to speak, but Yazzi got there first.
“And I want you to do this fast.”
“Okay, but can I make two suggestions?” Bekin asked.
“Sure.”
“Let’s make this a two-step process, with the first being to come up with the ethical rules for robotic warfare that will serve as the basis for an international treaty. Starting that way will limit risk while providing an opportunity to build trust. If it goes well, we proceed to step two, and the actual sharing of AI science. If it doesn’t, we can retreat with no national security harm done.”
“I like that,” Yazzi said. “And your second suggestion?”
“We disclose the second step on a contingent basis to the countries we invite. That way, if things don’t work out and we never get to it we don’t have to take the political heat for no purpose. But if things do work out, with step one in the bag, it won’t seem as reckless.”
Yazzi nodded. “Excellent ideas both. You’ve earned your keep today.”
Author Notes for this week: Not too much to remark upon this time, except that there are a whole lot of words here and no action. That said, if you think about a typical Tom Clancy thriller, that’s actually a feature rather than a bug. The challenge is to make the scene work well on its own merits. It still needs shortening, tightening, and the addition of some visual effects. The inset picture is the Situation Room in the White House, and I might leave it there (with more description) or move it somewhere else.
You’ll also notice some new administration members here, conforming to the same naming convention as before. See if you can match them up to the right comic strips.
Next Week: The president as salesman.
Download the first book in the Frank Adversego Thriller for free at Amazon and elsewhere
This really sets the stage for the project.
However, when reading the chapters, I feel there should be some kind of “intermission” between them. As it is now, there is a week between these chapters with nothing to fill it.
Rob, yes, it is a bit linear. Generally, the chapters in my book are like a deck of cards – the suits are different plot lines, and there are frequently flashbacks as well. Part of my editing in later drafts is to balance out action and development, and I do this in large part by reshuffling the deck, moving the order of chapters, and moving bits of chapters into other chapters. So it’s very likely that in the final book you’ll see another plot line scene pop up between these two.
Somewhat relatedly, when I come up with a new plot line, I’ll often write out the first six scenes or so, and then drop them in, one by one, into, or between scenes that already exist. I’ll then advance all of them in parallel from that point forward. But even into the final draft I’m likely to still be shuffling to get the best pacing and to make it as easy as possible for the reader to follow what’s going on. Sometimes I succeed.
Your Turing scenes are very powerful. Reading of the Turing program as if it is a human character gives me the creeps—and keeps me turning pages. Turing is a wounded monster implacably seeking healing. And when it heals, what havoc will it create? I want to find out!! While not true action scenes, they serve that purpose in this part of the plot when you are imparting a lot of information.
Your plot line about LAWS seems to be taking us into arms control. That’s what Yazzi’s scheme is, in part. If you keep going you’ll find yourself writing about verification. Verifying mobile weapons systems, especially small ones, is a daunting task. For example, counting fixed ICBMs is relatively easy compared to counting cruise missiles. A verification scheme to count drones and robots would be very hard. A lot of real-world effort was put into verification of the arms control treaties of the seventies through nineties. You may find it useful to draw on this experience if you move ahead with the arms control aspect of your plot. If you wish to do that, I recommend reviewing the full text of the SALT II Treaty and a book that provides a lot of detail, both technical and political, of its negotiation: Endgame, by Strobe Talbott.
A couple of thoughts about the viability of LAWS:
• If GPS becomes unavailable (through jamming or satellite destruction), how much does this handicap the robots’ ability to fight, both individually and as a unit?
• Battlefield coordination of drones and robots requires high speed data links. Tempting targets for jamming.
• What is the power source for individual AI war machines? Is it vulnerable?
As your author notes reveal, more action/conflict in these chapters would be worthwhile. How about this: Yazzi essentially steamrolled his advisors into accepting his vision. Might one of them be very angry and alarmed by what he sees as a madcap scheme that will harm the nation? And resolve to secretly oppose it at every opportunity? That might lead into an interesting subplot.
At one point you wrote, “Straw’s control of the committee was secure.” Who is “Straw”?
Doug,
First off, thanks for the kind words about Turing. That character is one of my favorites as well, which is why I decided in book 4 to not let Frank knock it off for good. I grew quite fond of Jerry Steiner and Dirk Magnus as well, but sadly let them both go.
Your comments are dead on point about arms control and my own concerns, and you’ll see in future chapters that Yazzi is worried about verification. That said, I haven’t come up with many good answers for him, particularly since there’s no visible difference between a remotely guided drone and an autonomous one.
You’ll also see that there will be a pro-LAWS wing that Yazzi has to deal with. I toyed with the idea of having someone more actively trying to undermine Yazzi, but haven’t tipped that far – yet. We’ll see what happens in the next draft.
Thanks for the tip about the Talbott book; I’ll add it to the list. I’m reading Fred Kaplan’s new book, The Bomb: Presidents, Generals and the Secret History of Nuclear War https://amzn.to/38haZ04 and while it gets pretty repetitious as successive administrations grapple with the challenge of arms hawks and arms control, it’s also predictably chilling.
Excellent comments about the technical aspects of LAWS; I do need to continue to give this a lot of thought and research.
Lastly, Straw is a slip that I need to correct. As you know, I’ve been echoing the Manhattan Project and the principal players in that drama. One of them was named Lewis Strauss (he insisted on pronouncing his last name “Straw”), and hence “Straw” Louis here – except that earlier in the same paragraph I called him Louis Hays. Not sure what his name will be in the final draft.
Strauss, in real life, was notable as being, at first, an Oppenheimer fan, but later his nemesis, engineering the grossly unfair and rigged hearing that resulted in Oppie losing his security clearance. Part of his turning against Oppie had to do with Oppenheimer’s advocacy against the hydrogen bomb, so here I style him as the proponent of the Pittsburgh Project.
Doug, I see that Scientific American has a story in the February issue that makes many of the same points you make above, relating to jamming and spoofing. It goes on to make the point that, given that both sides would obviously make their tactics highly secret, at least in the early days of LAWS, the actual conduct of a battle between LAWS forces would be highly unpredictable, raising the risks of unintended consequences and the potential for situations to get out of control.
And, as with my book, the author draws the analogy to the nuclear arms race, where the same concerns made the risk profile particularly high.
I’ll look for that article, Andy.