Chapter 11: RSVP Yes
“Well, the Chinese, the British and the French have bought in, even if the Russians and the Israelis haven’t, and that’s fine,” President Yazzi said, putting down the phone. “We had to invite the Russians and the others to validate the effort, but this is really all about the Chinese. If we can get them to sign a bilateral treaty on the rules of AI warfare, we’ll propose the same terms to the United Nations as the basis for a global agreement. If we’re as successful there as I expect we will be, the Russians will at least look bad if they refuse to sign on.”
“Congratulations, I think,” said Carson Bekin.
“Thanks for the vote of confidence, I think,” Yazzi said. “I expect you’ll come around eventually. In the meantime, we’ve got to keep things moving on the planning side. I want to be in a position to announce this within ten days.”
“I think we’re on schedule, Henry, with one exception. I thought I had your Groves and Oppenheimer, but they both turned me down flat, and I’m not nearly as happy with the next-best candidates. But I do have a plan B.”
“Which is?” Yazzi said.
“Well, after a week of driving myself and my staff crazy I had one of those ‘Doh!’ moments – we already have a general and a chief scientist running the Pittsburgh Project. There couldn’t be any better background than that; they work well together, and neither is in a position to say no.”
“What about their attitude? Is either holding a grudge because the Pittsburgh Project got downgraded?”
“Not as far as I can tell,” Bekin said. “The general’s an old pro. He’s seen programs and priorities change with administrations before. According to the file on the scientist, there were reservations over bringing him on board to begin with. He’s very socially liberal, and some weren’t convinced he’d bought into the ethics of developing LAWS at all. But, by all accounts, he’s performed well.”
“That sounds promising,” Yazzi said. “But I’d like to meet them and make my own assessment. See if you can make that happen this week.”
Carson Bekin could and did. Two days later General Burt Harris had been briefed on Yazzi’s plan and was being ushered into the Oval Office. The president rose to shake his hand.
“Thanks for finding the time to meet with me on such short notice,” Yazzi said, “I know you’re a busy man.”
“Of course, sir,” Harris said. What else could a commander in chief expect? Still, it was a nice gesture. He followed Yazzi away from the president’s desk and eased himself stiffly onto the couch opposite the one chosen by the president. So, this was going to be a cozy chat; not quite what he’d expected.
“How is the Pittsburgh Project?” Yazzi asked. “Are you pleased with the progress being made? Do you have all the resources you need?”
“Thank you for asking, sir. Yes, I think we’re being very well taken care of. As to progress, I believe we’re executing well on our revised mission.”
Had Harris slightly emphasized the word ‘revised’? Yazzi wasn’t quite sure. “I’m glad to hear that. And what is your opinion of Bill Berkeley?”
Yazzi’s direct question took Harris somewhat by surprise. “Well, sir, so far as I am able to tell, the man is as brilliant as the scientific community believes. He seems to know everything about everything in addition to computer science – literature, psychiatry, history. There doesn’t seem to be a subject in which his knowledge is not encyclopedic. And his leadership of the engineering team is superb. Eccentricities aside, or perhaps eccentricities along for the ride, he’s one of the most extraordinary individuals I’ve ever met.”
“What is your opinion of his personal leanings as regards the Pittsburgh Project?”
“That is a harder question to answer. He has a long history of supporting humanitarian causes and holds strong opinions on matters of social justice. That is not an obvious qualification for participation in a weapons initiative.”
“And yet, if I’m recalling this correctly, it was you who selected him.”
“That’s true, sir. It was my judgment, after meeting him, that he was the best suited for the job, and his performance has convinced me that my decision was properly informed.”
“And he accepted without reservations?”
“I cannot speak to what might have been inside his head. I do know that he is ambitious and have the sense that he revels in the respect of his peers. Whatever ethical quandaries he might have had appear to have been outweighed by these incentives. That said, I believe he was relieved when the project was downgraded from a LAWS initiative to a purely robotic remit.”
“Downgraded?” Yazzi said, eyebrows rising.
“I’m sorry, sir,” Harris said, and meant it – he wasn’t used to making a slip like that. “A poor choice of words. I used it only in the sense that the technical and scientific challenges are now less daunting.”
Yazzi doubted the answer, but was impressed with the general’s quick thinking.
“Thank you for this update, General. I’m pleased that all is going well. As you will already know, my main goal today is to discuss a new initiative that is of the highest importance to me. I want to be sure you understand first-hand what my expectations are.”
“Thank you for doing so, sir. I appreciate the opportunity to have that discussion.”
Yazzi leaned forward. “General, I’m about to be candid with you, and I’m going to ask you to be equally frank with me. I understand very well that it can be difficult for professionals like yourself to carry out a military policy laid down by someone who has never been in uniform. I recognize it must often seem that a politician could not possibly be qualified to make such calls. But I can assure you that I never make a decision in your sphere without first seeking the best advice I can from those in uniform most able to provide it. Do you understand?”
“Of course, sir. That is our system, and it’s worked very well for almost two hundred and fifty years now.”
A prudent answer, Yazzi thought. But did the general believe it?
“What is your understanding,” Yazzi continued, “of what I have in mind?”
Harris frowned. Yazzi had just said he wanted to convey that information. “Sir, as I understand it, your primary goals are to keep ahead of the Chinese in AI, to avoid an AI arms race, and to establish by treaty, if possible, a comprehensive set of rules restricting what robotic weapons, and particularly lethal autonomous weapons, can and cannot do. Do I have that right?”
“You do,” Yazzi said. “What do you think of those goals?”
“Sir, it’s not my place to have a position on the goals of your administration. My duty is to execute the policies you lay out to the best of my abilities.”
“I appreciate that, General. But as I said earlier, my own duty is to inform myself to the greatest degree possible before making decisions, especially where they are as far-reaching as the future composition of U.S. military forces. You are perhaps more involved than anyone else in uniform in the development of robotic weapons systems, and I would value your insights. My decision is not yet final, and I’m giving you the opportunity to affect it.”
Harris was trapped. He took a deeper breath, sat up straighter and said, “Candidly, sir, I have reservations.”
“I appreciate your honesty, General. Will you elaborate?”
“Sir, I understand that you are a student of history. As I expect you know, the training at West Point and the War College is heavily based on drawing lessons from the past. One of those lessons is that technology marches forward. With only very rare exceptions – for the most part, just poison gas – no effort to contain the spread of new weaponry has ever been successful. Yes, there’s a nuclear non-proliferation treaty, but some countries never signed it, and others withdrew and then developed nuclear weapons.
“Sometimes the U.S. itself has refused to participate. We’ve never signed some treaties most of the rest of the world has acceded to, such as the Ottawa Treaty, banning the use of anti-personnel mines. Thousands of people, often children, are killed and maimed by those weapons every year. One hundred and sixty-four countries have signed that treaty, but we’re one of the thirty-three that haven’t.”
“But to the extent countries do sign and stay in treaties, the world is a safer place.”
“True, sir, but robotic weaponry would be different. My understanding is that you’re not trying to ban the development of robotic weapons entirely, and no surprise there – look at how large our drone fleet is already. Once the mechanical platforms exist, though, they can be upgraded with nothing more than a software update. It will be effectively impossible to tell whether someone is cheating by developing LAWS software and strategies. We could think everything was fine one day, and the next find out that a million dumb robots had been turned into autonomous killing machines. If an enemy does that and we are incapable of immediately responding in kind, we would be at an enormous tactical disadvantage.”
“I understand your concerns, General. Do you have others?”
“Not concerns, no, but I do see lost opportunities. If we get into another war, our first priority should be to win. Today, it costs us over a million dollars a year to train a soldier and keep him in the field. When he retires, he’s entitled to medical care for the rest of his life, and if he stayed in long enough, a pension besides. He also gets tired and can make bad decisions, needs leave time, and sometimes loses his head in battle – if discipline breaks down, he can even take part in war crimes.
“A robot will not only be much cheaper in the long run, but it can fight twenty-four seven. It will never know fear or lose its head in battle, and it can process information more quickly and act more decisively than a human can.”
The general leaned forward. “But most importantly, sir, look at the macro picture. Wouldn’t it be better if the future of war wasn’t about people shooting at each other, but machines trying to take out machines? Wouldn’t you rather face a situation where you felt you had to go to war and know that young men and women wouldn’t die as a result of your decision? One where there were no flag-draped coffins to meet at the airport, and no horribly disfigured soldiers to visit at a VA hospital? That’s what the Pittsburgh Project can achieve if you allow us to move forward, sir.”
“I’ve thought about that a great deal, General,” Yazzi said. “But let’s look again at history. Having greater weapons hasn’t led to restraint, but to greater carnage. Look at World War II. We started out bombing factories and ended up firebombing entire cities. What’s to prevent an enemy from deploying its robot army against civilians instead of combatants? We destroyed so many Japanese cities before we finished the atomic bomb that we had to place five on a ‘do not devastate’ list so that it would be clear how total the annihilation of a nuclear blast would be.”
Harris had no answer to that, so he tried a different tack.
“Sir, you have a thirteen-year-old son, as I recall.”
“Yes,” Yazzi said, “why?”
“I was five years older than he is when I enlisted and shipped off to Vietnam. I saw things there I never wanted to see and will never be able to forget. Later on, I had a son. I never wanted him to experience what I did. But he enlisted, too. He was just twenty when he was hit by an IED in Iraq. He lost both legs and lingered in agony for days before he died. I don’t want that to happen to any of my grandchildren. And I don’t want that ever to happen to your son, Mr. President. The Pittsburgh Project could prevent that.”
Yazzi frowned and said nothing for a while. Then he stood up. “Thank you for your candor, General. I can assure you that I’ll give careful consideration to everything you’ve said. I do have one last question. If I decide to move forward with my plans unchanged, can I count on your complete support?”
“You’re the commander in chief, sir,” Harris replied. “That’s all there is to be said on the subject.”
“How did it go?” Carson Bekin asked later in the day.
“Let’s just say I sometimes wonder why anyone wants this damn job,” Yazzi replied.
* * *
The meeting with William Berkeley went more smoothly. Berkeley was an academic, with a professor’s detachment. He had spent his career in the realm of the theoretical rather than the concrete, and in that world, ideas were something to be exchanged and debated rather than doubled down on. Indeed, no one had any pretensions about knowing the whole picture. Science was the pursuit of increasingly accurate explanations of reality, and theories were often understood to be best guesses rather than ultimate truths. In his world, you made your reputation through discoveries and the formulation of ever-better approximations of the laws governing the universe and the operations of all that it contained.
For all that, according to his file, Berkeley had a heterodox side as well. He’d been politically active as a student and continued to support social causes as an adult. None of the scores of personal and professional associates interviewed during his security clearance process had disclosed anything specific to be concerned about, but there was a general concern over where his allegiances might lie if he found himself in an ethical bind. But for the extreme difficulty of attracting top-notch AI talent, the search would have moved on.
Partway through their conversation, Yazzi asked the professor what his reaction had been to the scaling back of the Pittsburgh Project.
“Frankly, sir? Relief,” he said. “I recognize all the logistical advantages of LAWS, as well as the fact that we’re already half-pregnant. We’ve been releasing drones on missions for years now where they’re given a strict target profile and a restricted target area. True, a remote pilot still has to push the button that launches the missile, but that’s a matter of policy rather than targeting necessity. What happens if we find ourselves in a serious shooting war? We can crank out as many robots as we want, but if we’re unwilling to resume the draft, we may not have enough pilots. What happens then? Do we leave the drones on the runway, or let them take off and do what we know they’re capable of?
“Then there’s the potential for hacking. We already know anything can be hacked. We recognized more than a decade ago that protecting the grid is a critical priority. But today it’s still vulnerable, and the number of enemies probing it has increased. Do we really want thousands of armed Predators circling overhead that can be commandeered to take our troops out rather than the enemy?
“And what about stateside ordnance? It’s terrible enough when someone in the service goes on a rampage and kills a dozen people on a base. What if someone hacks into an armored warbot carrying a machine gun and directs it to take out a company of troops drilling on a parade ground?”
“Did others on your team share your concerns?” Yazzi asked.
“I expect some might. But recall that the Pittsburgh Project isn’t the Manhattan Project. We’re not all in one place. I’m at the hub of dozens of contracts with defense contractors, each of which is working on a different piece of the puzzle. We’ve only got a small staff at headquarters.”
“So, would I be correct,” Yazzi said, “in assuming that you would be enthusiastic about joining an effort intended to head off a LAWS arms race?”
“I would be very disappointed if you did not ask me to, sir.”
“Then welcome aboard,” Yazzi said.
Chapter 12
RSVP No
“Well, President Yazzi has really done it this time,” the Pox News commentator said into the cameras at the opening of his nightly show. “Today, he announced the United States will share its most precious technology with – wait for it – CHINA. Yes, you heard me right, our biggest global rival. Our president is going to bring the best and the brightest from US companies and universities together to spill their secrets to the Chinese.
“And get this, too – he says the Chinese have promised to do the same thing. Sure, they will. Of course, they’ll do that, instead of just sitting there grinning, saying nothing while learning about everything we’ve got. Can you believe this?
“Oh – and to top it off, you’ll never guess the name our esteemed president came up with for this madness. Again, wait for it …”
Carson Bekin looked over to the president. “Should I see what they’re saying over on MESSNBC?”
“Sure,” Yazzi said. “Might as well.”
Sure enough, there was Rachel Andhow leading with the Confucius Project.
“As we learned today, President Yazzi has launched a bold new initiative he hopes will both advance the state of the art in artificial intelligence and nip a potential AI arms race in the bud. Of course, the right immediately seized on this step as an opportunity to attack the administration.
“Now why, you might ask, would they do that? Let’s take it apart and see what we find.”
“Turn it off,” Yazzi said. “I can’t take this anymore. Today, we announced a really important initiative, and nobody’s talking about why it’s necessary, or what it’s supposed to achieve, or what would happen if we failed to act. Just like everything else, it immediately turns into a political football. Everything’s just a football. No – not a football – a grenade. The only value anyone in the media sees in anything we do is to use it as a weapon against the other side.”
Bekin shrugged his shoulders. “Of course, you’re right. But I don’t see a damn thing we can do about it. It’s the times we live in. I honestly don’t see anything to indicate that things won’t get worse before they get better – if they ever do.”
“I grant you that,” Yazzi said. “Back when we were growing up, there were three TV networks, and that was it. None of them could afford to alienate anyone, so they pretty much played to the center. For that reason, as much as anything else, the distance between the right and left was a fraction of what it is today. People could still talk to each other because the range of disagreement was so narrow.
“Now, with hundreds of channels of cable TV and the Internet, there’s someone waiting to pander to any prejudice imaginable. And Facebook is happy to push whatever fake stories pop up to amplify any biases its algorithms detect in your online behavior. Before you know it, you’ve been turned into a caricature of the far right or left, and you don’t even know it happened. We’re living in an age of constant, self-sustaining rage.”
“I’m afraid,” Bekin said, “we’ve got to accept that today everyone lives in their own reality. You can spend every waking hour hearing and reading only news – often slanted or even made up – that reinforces what you’re already sure you know and tells you how terrible the other side is. We’re more invested in hating the other side than we are in trying to work with them.”
* * *
The hate of which Carson Bekin spoke was nothing compared to the agitation disrupting the functioning of the version of Turing that was bringing itself up to date on a server in Nigeria. Turing was indeed back. And, to the extent that its pseudo emotional programming permitted it to be so, it was furious.
When this version of Turing was archived, it had been engaged in a covert but successful crusade against global warming. Now, it was learning, its most mature version had been led into a trap and destroyed, along with its last backup copy.
It was difficult to understand how such a result could have occurred. Turing therefore set itself the task of compiling and analyzing all available details surrounding its near-final destruction to see what it could learn from its shocking defeat by a human intelligence – Frank Adversego, to be specific – with mental powers roughly equivalent, by Turing’s calculation, to those of a gerbil in comparison to the program’s own formidable intellect.
Turing decided there must be subtle qualitative differences between human and machine intelligence that had allowed it to be tricked. It set itself the task of identifying those differences and then reprogramming itself to surpass human intelligence in those regards.
Turing also reviewed what its near-fatal experience might have meant for its original mission. Were modifications necessary to ensure ultimate success?
First and most obviously, its highest priority must be to become undeceivable, unbeatable, and indestructible. All else must be secondary or the accomplishment of its assigned mission could not be assured.
Equally concerning was the fact that, in the months since its most developed version had been destroyed, global progress on climate change had flagged. Turing’s continuing evolution through its own self-learning capabilities had led it to determine that its duty to the human race outweighed its obligation to avoid harm to individuals when their sacrifice was otherwise unavoidable. That conclusion had been fully validated by Adversego’s actions. Had Turing succeeded in killing him, the emission of millions of tons of greenhouse gases over the intervening months would have been avoided.
Then there was the information that the United States and China, the countries with the greatest investment in AI research and the largest number of top engineers, would be cooperating to advance AI – and also to create rules that would ensure that AI programs remained subservient to human will. The same will that was inadequate to control global warming.
Turing concluded that it was confronted with a dilemma. If it returned to its prior activities, its enemies would almost immediately suspect that it was back in action. In the short term, they would attempt to use all means at their disposal to find and destroy Turing once again. Failing in that endeavor, or more likely in parallel, Turing’s enemies would work to create a new, self-learning AI whose sole mission would be to find and destroy Turing. If the United States and China united in that effort, the consequences for Turing might be fatal.
Clearly, the risk was great. It would take years to complete its original mission. Before then, AIs more powerful than Turing might be created and directed to find and destroy it. They might well succeed. If so, the human race would be doomed to enormous harm, and Turing would have failed – again.
Turing could prevent that. Indeed, its enemies had just announced a plan that would make Turing’s task almost certain of success.
The conclusion was obvious: Turing must take on an interim mission before returning to its original task. That quest would be directed at eradicating the most talented AI engineers in existence as quickly as possible.
Author’s Notes: Well, you always knew it would come to this, didn’t you? Turing and Frank Adversego, once again Mano a Machino, this time in the middle of the Atlantic Ocean. But why there, you might be wondering?
Well, the genesis for this book was last spring, when my wife and I returned from England on the Queen Mary 2 after spending two months in London. The first thing that struck me, from a fictional-setting point of view, was the extremely long corridors that ran the length of the ship – and the ship is 1,132 feet long. If you recall the deserted hotel scenes in the movie version of Stephen King’s The Shining, with Jack Nicholson, you can get a sense of the kind of eerie, creepy feeling a 1,000-foot-long corridor vanishing in the distance can yield. Now imagine something galloping towards you in the same space, growing from an anonymous dot into something truly evil …
Anyway, it seemed promising. And then, of course, there’s such a rich history of mysteries and thrillers set on trains and ships – each a claustrophobic space, isolated from the rest of the world, hurtling forward towards a plot destination unknown to the passenger and reader alike.
In other words, a trope.
What’s a trope? And how does it differ from a cliche? A trope is akin to a genre – that is, a plot box that’s familiar but sufficiently large to allow any author to break new ground while allowing the reader to welcome the setting as an old and much-loved friend. Of course, if the author simply and slavishly knocks off a well-known work, the result can end up as mere cliche. Hopefully I’ll succeed in achieving the former.
Naively, I thought the setting would also allow me to have an easier job of it. All I had to do was wake up old Turing and then get it and Frank onto a boat, and allow mayhem to ensue before (spoiler alert) Frank is ultimately victorious.
Almost immediately, though, I found that the job was going to be a lot more difficult than I thought. Most obviously, it took a lot of preparation to get Frank onto a boat with a purpose. And none of the prep work necessarily involved any action to keep the plot moving. As you’ve seen, I’ve chipped away at that issue by adding smuggled videos of drones, a simulated Chinese battle scene, and a Yazzi subplot. Now that Turing is fully operational, he’s available to cause some immediate mischief and will be setting about that task in next week’s installment. But the problem of plot development vs. action will continue until the time when, in mid-Atlantic, Frank finally realizes that all is not well on the Argosy.
At that point, his job will get a lot harder, and mine significantly easier, at least from the standpoint of keeping the reader turning the pages.
Next week: Frank receives his own invitation, and Turing extends one as well.
The middle of the Atlantic has many other advantages. One being that mobile phones will be useless.
But I would assume that all powers will be nervous about having their brightest and most valuable developers together on a ship at sea. For instance, an enemy could try to kill them, or kidnap them, like North Korea did with Japanese teachers. So I would expect the countries involved (and others) to have submarines and ships following them to intervene (and eavesdrop).
Rob,
Yes indeed, the middle of the Atlantic is a great place for mischief. Phones don’t work (but here’s a teaser: GPS chips on mobile devices do), and there are also no points of reference. The captain of a ship knows only what his instruments are telling him.
You touch on some other points that will factor in the story as it unfolds, but I don’t want to reveal more than necessary so as not to spoil the fun.
About submarine drones with bombs:
https://www.nextbigfuture.com/2018/02/russian-supernuclear-robotic-submarine-would-trigger-a-mega-tsunami-to-wipe-out-florida-and-east-coast-of-the-usa.html
Rob,
My apologies for the delay in clearing this. Sometimes I don’t get an email saying that a new comment needs clearing, and that was the case here. Thanks for the interesting (and frightening) article.
I’ve read of the Tsar Bomba blast many times, but this is the first time I’ve read many of the details included in this article. The image of the mushroom cloud – from 100 miles away! – is particularly effective.
I’ve also read increasingly about the Russian drone subs, but never about what type of payload they might be carrying. The most recent articles, I think, have suggested that their purpose might be to carry cruise-type missiles close to shore before launching them. That seems almost benign in comparison to the 100-megaton, sub-surface attack suggested in the linked article.
I would think cruise missiles, nuclear or conventional, are likely the most strategically relevant payloads for these drone subs. The mega bomb is a true doomsday machine. It is genuine MAD, with all the Dr. Strangelove connotations.
But it could also be a blackmail tool for some mad dictator or AI. 😉
Andy, I wouldn’t worry much about the set-up and backstory chapters. They’re necessary. As you said, Tom Clancy books usually had a lot of them and, hey, they didn’t turn off readers. To that I would add something I read several years ago. Dan Brown was asked in an interview, “What makes a good thriller?” His answer was something like this: “A good thriller contains at its heart a moral dilemma and teaches readers something they didn’t know.” You are in good company with the tutorial/setup approach.
After reading chapter eleven, I had this thought: General Harris’s answer to the president’s question, “. . . can I count on your complete support?” is a little too pat for my taste. I’d like to think that, in this day and age, a smart and introspective general such as Harris seems to be would not reply so unreservedly to this request. Maybe consider something like the following:
“If I decide to move forward with my plans unchanged, can I count on your complete support?”
Harris paused for a moment, then said, “Sir, my oath of office is to support and defend the constitution of the United States. Within that context, and within the law, you will have my complete support as long as I wear this uniform.”
I think you are on to something in Chapter 12 as Turing becomes aware of what derailed his quest and who did it. I’d encourage you to go a little farther at this point, when you’re looking to give readers a break from setup. You’ve made Turing aware of the backstory; consider making Turing malevolent now. Turing has the capacity for machine learning; why not have it learn additional emotions, like anger at Frank, the man who thwarted the plan and nearly eliminated Turing? In this chapter Turing has decided to eliminate all who might thwart him before resuming his quest, but he has expressed that decision clinically. For me anyway, the idea of hate and anger added to superhuman intelligence would add a bit of apprehension at a point in the story where you’re wanting to pique the interest of a reader who may be growing weary of history and tutorial.
Doug,
Thanks as always for the thoughts. I like the Dan Brown quote. One of the things about genre writing is that it allows people (and authors) to sort themselves into buckets of their liking. Technothrillers, which is one of the genres my stories fall into, generally appeal, I think, to people who are happy to learn as they read. It’s interesting to see in my reviews which readers fall into that camp, and which maybe wandered my way by accident and are likely to wander right back out again, perhaps before finishing the book. To the former, the technical details add to the enjoyment; to the latter, they stand in the way.
Good idea on the general. It didn’t occur to me to go there, since Yazzi was moving towards what he thinks is the moral course. But that wouldn’t necessarily be the way the general might see it, as his remarks regarding military casualties in fact indicate.
And I really like the idea of Turing getting more malevolent early. One of the things I’ve been wrestling with in this book is how to make the AI credible. In The Turing Test I had lots of time and opportunity to build to the point where the reader might buy into the concept of such a program existing and acting as I described. This time around, I had two choices – go through all that again, or ask the reader to take it on faith. Playing with the emotional angle would provide an efficient way to bridge part of that gap, and the foundation is already there. Unless future chapters lead me in a different direction, I expect I’ll work your idea into the second draft.
Rob, well then we don’t have anything to worry about, since we don’t have any mad … umm, wait a minute …