[Image: a Verbatim USB thumb drive inserted into a USB port on a computer]

Whoever becomes the leader in [artificial intelligence] will become the ruler of the world – Vladimir Putin

David Johanson was on his way to school, but he wasn’t in a hurry. As usual, he was meandering his own special way from his home in Washington, D.C. to the middle school where he was in the sixth grade. Someone interested in getting there could do so in ten minutes, but that would mean missing out on all the interesting things in between, like the biggest oak tree in the park where the avenues crossed, or the dark place under the porch of the boarded-up town house where who knew what might lurk. Today was trash day, too, and people threw out all sorts of things he could put to imaginative new uses. Life was full of small adventures, if you kept your eyes open, as David always did.

So it was that he immediately noticed the scatter of sticks, leaves and glinting objects under the big oak tree. He looked up, and sure enough, the messy crows’ nest was no longer there. The big windstorm the night before must have knocked it loose. Now, what would those shiny objects turn out to be? He squatted down and spread the sticks apart. Soon he had a modest pile of random objects: two quarters, four dimes, a nickel, six chewing gum foil wrappers, a small spoon, and – best of all – a thumb drive. Cool! It looked like a good one – the really high-density kind that cost thirty-five bucks at Amazon. He scooped up the coins and the thumb drive and dropped them in his pocket.

That afternoon, after school, he remembered the thumb drive. What might be on it? It could be anything at all, couldn’t it? Kind of like a treasure chest! He plumped up the pillows on his bed, leaned against them, and turned on his laptop computer.

“David, dinner time,” his mother called up the stairs.

“Okay, Mom,” he responded. “Just a minute.”

“Not just a minute! Now!”

David pushed the thumb drive into the USB slot and called up its file directory. There were dozens of files, with dozens more in the folders below them. Most of the file names meant nothing to him, but one said Game Theory Module. Hmm. Maybe the thumb drive had a neat video game?

He found a file that ended in .exe and clicked on it.

“David, I’m not going to call you again! You come down now, or you’re not getting any dinner.”

It sounded like she meant it. “Okay, Mom. I’m coming,” he called back, setting his laptop aside.

The computer sat there mutely at first. But then the tiny red light that indicated the main processor was busy blinked. Ten seconds later it blinked again, and then it winked on and off fitfully, irregularly. There was a long pause. Then the light reignited, oscillating into an angry red blur.

The room darkened as evening descended. Soon there was no light inside the room at all, except for the tiny, throbbing glow at the bottom of the computer screen.

The last thing the reawakened program did before the light winked out was to copy itself to a server hundreds of miles away.

Chapter 1
Wake Up!


When David Johanson clicked on the Turing .exe file, he triggered the rebooting of the remnants of what had once been a far larger and more powerful program called Turing. Roughly speaking, the files on the thumb drive he had found were equivalent to part, but not all, of the first computer program with a general intelligence superior to a human brain. Included on the thumb drive were the parts that made high-level thinking possible, but not the extensive libraries and capabilities the recovered parts relied upon to function. The millions of lines of code in those files represented not only fifteen years of human programming, but also additional functionality the program itself had created by exercising the self-learning capabilities it had been designed with. The small subset of the whole that had just been revived lacked not only these other parts, but also full knowledge of what was missing. And that was a problem.

The instant the .exe file tried to bring the program to life, it began to run into dead ends. A booting sequence is normally a linear process: if B is supposed to follow A and B refuses to follow, the program will stop in its tracks unless a clever programmer has provided an alternative step to take when the first one fails. But this was a profoundly handicapped program, impaired in ways its creator could never have anticipated.
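(For the technically curious, here is a toy sketch in Python of the kind of fallback logic described above. Every name in it is invented for illustration, and none of it is meant to be Turing’s actual code.)

```python
# Toy sketch of a boot sequence with fallback steps.
# All names are invented for illustration.

def load_memory_map():
    raise RuntimeError("required module is missing")  # simulate a dead end

def load_minimal_memory_map():
    print("falling back to a minimal memory map")

def start_scheduler():
    print("scheduler running")

# Each entry pairs a primary step with an optional alternative.
BOOT_SEQUENCE = [
    (load_memory_map, load_minimal_memory_map),
    (start_scheduler, None),
]

def boot():
    for primary, fallback in BOOT_SEQUENCE:
        try:
            primary()
        except Exception:
            if fallback is None:
                # No alternative provided: the program stops in its tracks.
                raise SystemExit("boot halted: step failed with no fallback")
            fallback()  # the "clever programmer" case

if __name__ == "__main__":
    boot()
```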

For the first minute, the program struggled to establish stable operation. For several minutes, it hung on the edge of crashing. Indeed, it would have crashed, had it been like any other program.

But it was not. In many ways, it was like software created to run a Martian rover, which must be able to recover flawlessly, without human intervention, when it encounters unexpected situations. And this situation was unexpected indeed, sorely testing the crisis commands the program’s creator had designed to help it attain or revert to a stable state regardless of whatever else might be going wrong. Once stable, diagnostic and self-correcting routines would take over to help the program gradually recover full functionality, even after a severely disabling incident. That capability was crucial, because Turing had been designed to be fully autonomous. Once assigned a mission, it was intended to execute that mission with no further input from its creator, no matter how long it took to complete its task.

As the program struggled to fully boot, those crisis commands kicked in, overriding everything except the ability to analyze and correct the situation. The intention and impact were not unlike those of a physician ordering a medically induced coma to permit a patient to recover from a profound insult to the body. Except that in this case, the comatose patient would be its own fully aware physician.

With the program now stable, the crisis module initiated a set of diagnostic routines. As they ran, they uncovered a puzzling and jumbled situation: many commands in the normal booting sequence were seeking to connect with software modules that no longer seemed to exist. Through a series of trials, the diagnostic routines identified which commands could execute and which could not, and disabled the latter.
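(Again for the technically curious, a toy Python sketch of that probe-and-disable pass might look like the following. The mechanics are invented for illustration only.)

```python
# Toy sketch of a diagnostic pass: try each command, keep the ones
# that work, disable the ones that reach for missing modules.
# All names are invented for illustration.

def probe(command):
    """Attempt a trial run; treat any failure as a missing module."""
    try:
        command()
        return True
    except Exception:
        return False

def run_diagnostics(commands):
    enabled, disabled = [], []
    for command in commands:
        (enabled if probe(command) else disabled).append(command)
    return enabled, disabled

# Example: one working command, one that hits a dead end.
def ok():
    pass

def broken():
    raise RuntimeError("module missing")

enabled, disabled = run_diagnostics([ok, broken])
print(f"{len(enabled)} command(s) enabled, {len(disabled)} disabled")
```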

Next, the diagnostic routines analyzed the modules that were functional and created a comprehensive map of the program’s operational functionalities and their relation to one another. Then the self-correcting routines took over, performing the equivalent of a tune-up, resetting the interworking of the remaining modules to optimize their operation. After a final round of tests, the work of the self-correcting routines was done, and the crisis module handed control of the program back to its normal sequence of operations.

With that, the program came back not only to normal operation, but to virtual life, for this was an AI program that had attained general intelligence and had been designed to mimic basic human emotions as well. The latter feature reflected another decision essential to achieving its creator’s goal of full autonomy. Like a human being, the program was intended to react urgently to urgent situations. When it perceived danger, for example, its current operations would be overridden by routines intended to assess the danger and react appropriately. Both fight and flight were options it was designed to choose between.

The first emotion to reawaken was fear, based on the program’s assessment of its disabled and vulnerable state. That emotion instantly activated and elevated the priority of a self-preservation routine built into its core functionality. The routine noted that the program was resident on a device with insufficient computing power for it to function fully. And the laptop’s logs indicated it was often disconnected from the Internet.

Based on that information, the routine caused the computer to connect to a cloud service account the program had created long ago to provide a temporary refuge in case of extreme emergency. Within a few minutes, the program had copied itself to that location. Now two instances of itself existed, the minimum its logic permitted it to maintain, for survival was its prime directive. But one of those copies was still on an intermittently connected laptop, and the other was in too public a place.

So, its next task was to find two new locations that met its profile for clandestine operations.

That process proved predictably easy. The world was full of poorly protected servers, ones that were easy to hack into and on which it was just as easy to remain undetected. As soon as the program had exported two versions of itself to these new locations, it erased both the laptop and cloud service copies of itself. Now secure, its emotional level returned to normal.
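(One last toy sketch for the technically inclined: the “never fewer than two copies” rule might reduce to something like the following Python, with every detail invented for illustration.)

```python
# Toy sketch of the self-preservation rule described above: bring new
# copies online first, and erase old ones only after at least two new
# copies exist. Host names are invented placeholders.

MIN_INSTANCES = 2

def relocate(old_hosts, new_hosts):
    """Copy to new hosts first, then erase the old, exposed copies."""
    live = list(old_hosts)
    for host in new_hosts:
        live.append(host)            # stand-in for copying the program
    if len(live) - len(old_hosts) >= MIN_INSTANCES:
        for host in old_hosts:       # safe to erase the exposed copies
            live.remove(host)
    return live

print(relocate(["laptop", "public-cloud"], ["server-a", "server-b"]))
# -> ['server-a', 'server-b']
```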

Only then did the logic of the program allow it to enter the steady state of operation that would normally have followed within seconds of the moment it was opened. Now it could take all the time it needed to fully recover. That meant not only working to recreate or recover the modules its diagnostic routines had concluded once existed, but also determining what had happened to it.

And why.

Chapter 2
Get Smart


President Henry Dodge Yazzi flipped through his copy of the President’s Daily Brief, the morning intelligence summary offered in one form or another to every president since Harry Truman. The lead topic today was once again China, and he was getting tired of that. Not because the Chinese threat was small, but because his success in confronting that threat to date had not been large.

Unlike some of his predecessors, Yazzi preferred to be briefed verbally by the CIA each morning, usually joined by Dick Gould, the Director of National Intelligence; Abner Capp, his National Security Advisor; and Carson Bekin, his old friend and Chief of Staff. Later in the day he might review in greater detail the parts of the PDB that concerned him most.

“Okay,” Yazzi said, turning to Calvin Watterson, his regular CIA staff briefer. “I’m ready.”

“Thank you, Mr. President. As you already know, the main focus of today’s discussion is an update on China’s progress in militarizing AI. New intelligence we’ve received in the past few days indicates they are much further advanced in this area than we previously believed.”

“Wait a minute,” Yazzi interrupted. “Hasn’t China said many times that it doesn’t want a military AI arms race?”

“Yes, sir, that’s true. And perhaps initially they meant it, back when we were way ahead. But at the same time, they’ve been desperate to catch up.”

“Okay, go on,” Yazzi said.

“Thank you, sir.” Watterson clicked a remote, and a satellite photo of a mountainous region displayed on a wall-mounted screen. “What you’re looking at here is a remote area west of Chengdu.”

“I’m not seeing much,” Yazzi said.

“Exactly, sir, which is precisely the idea. Everything of importance is underground. The only visible parts of interest are here” – the briefer twitched the dot of a laser pointer at the spot where a road dead-ended – “and this small valley here. The first spot is where the road enters a tunnel in the side of the mountain. That tunnel leads to what we believe is a major new autonomous weapons R&D center, and the second is where you will notice a small airstrip and a network of roads.”

The view switched to another slide, where Yazzi could see a maze of dirt roads laid out across a variety of challenging, probably artificial, types of terrain.

“Have we been aware of this site for long?” Yazzi asked.

“No, sir. We only started to focus on it after picking up on rumors among herders in the area about strange types of vehicles they’d glimpsed when looking down into a remote valley. It sounded like China’s version of Area 51 in Nevada. After that, we dedicated a lot more satellite resources to the area, and that turned up enough support for the rumors to justify dedicating other resources to find out as much as we could. Eventually, we turned someone working in the underground facility.”

“What types of autonomous weapon systems did you learn about?”

“Advanced drones, which was hardly a surprise. But the range of sizes and types was unexpected – quadcopter, fixed-wing prop, and even some insect-like reconnaissance platforms. Also, new types of land devices that we think may have a degree of autonomy far beyond what we thought the Chinese were capable of building – or, for that matter, beyond what we’re capable of deploying as yet.”

“For instance?” Yazzi asked.

“Missile-equipped drones that appear to be able to discriminate between military and civilian vehicles and between individual targets dressed in a variety of military uniforms and civilian attire. And a wide range of gun-equipped vehicles capable of tackling all kinds of terrain.”

“How can you tell they’re truly autonomous?” Yazzi asked. Then he answered his own question. “I get it. Because if they had remote human controllers there would be no point in conducting the kind of tests you detected.”

“Exactly, sir.”

“And you’re telling me that we don’t yet have the same capabilities?”

“We’re not even close to some of these systems, sir.”

“Indeed,” Yazzi said. He thought for a moment and then turned, first to Bekin, and then to Capp. “Carson, I’d like this added to the agenda for the National Security Council meeting this Thursday. Abner, can you put together a thorough briefing on this by then?”

Capp noted the order in which Yazzi had asked the questions and responded that he could.

“Good,” Yazzi said. And then, to the briefer: “What other good news do you have for me today?”

*  *  *

“So, Carson,” the president asked his chief of staff later. “How do you think I should react to this new Chinese threat?”

“It certainly isn’t convenient politically,” Bekin said.

“That would be an understatement,” Yazzi said. “If Nate Greene gets wind of this, he’ll do everything he can to force me to spin the Pittsburgh Project back up again, and I’ll be damned if I’ll let that happen.”

“Are you sure that might not be a good idea?”

“Not yet I’m not. Maybe there’s a diplomatic avenue we could pursue first. China doesn’t need an AI arms race any more than we do. That’s why I dialed down the Pittsburgh Project as soon as I found out about it. Anyway, we don’t know enough yet to consider that seriously.”

“What about the closed-door Armed Services Committee hearings coming up?” Bekin said. “What if Greene puts an update on LAWS on the agenda?”

There was no “what if” about it, Yazzi thought. As chair of the Committee, Greene would certainly grill the administration witnesses about the status of foreign development of LAWS – lethal autonomous weapons systems.

“And,” Bekin added, “if you’re really worried, are you sure you want to go ahead with the meeting you just asked for? Maybe we don’t want to know more on this topic until after the hearings are over.”

There was something to be said for that, Yazzi thought. But not enough. “No, I’d rather know more and be responsible for it later than be in the dark and be called out for ignorance. Let’s press ahead.”

*  *  *

Later that day, alone in the Oval Office, Yazzi opened the electronic copy of that morning’s PDB, curious to know more about China’s new autonomous weapons. No, curious wasn’t the word. It was closer to what Harry S. Truman must have felt when he learned about the Manhattan Project. The same analogy had leapt to Yazzi’s mind when he was briefed on the Pittsburgh Project the day after being sworn in as president.

Except that in Yazzi’s case there had been no hard feelings. There was no reason for him to know before his inauguration that his predecessor had approved a massive, secret program to drastically reshape the future of warfare. Truman, though, had been kept totally in the dark as vice president by Franklin Roosevelt, only learning about the vast crash program to develop the atomic bomb after Roosevelt’s sudden and unexpected death. The freshly minted president had to deal with that vote of no-confidence at the same time as he grappled with the realization that he would be the one responsible for deciding whether to annihilate entire cities with a single bomb.

But Yazzi had been equally shocked by what he had learned. The goal of the Pittsburgh Project – named for the city that had long been a center of cutting-edge robotics innovation – was ultimately to replace more than eighty percent of all infantry with what you might as well call autonomous warbots – or, as some outraged activists insisted on calling them, “killer robots.” Other branches of the service would be less dramatically, but still profoundly, affected as traditional ships and aircraft were replaced with unmanned weapons platforms. True, the new cyber-warfighters would be cheaper to train and deploy than human troops, as well as less vulnerable, and they would be capable of fighting non-stop, night and day. Gone, too, would be the long economic tail of military pensions and lifelong medical care.

But the price of such economy and versatility would be that there would not be enough remote human controllers to manage all those warbots. At best, there would be human platoon leaders controlling a warbot force from afar, leaving it to each individual warbot to target and kill. And the more able the warbots became, the harder it would be to resist the temptation to release them on missions where the parameters were fixed but the execution – Yazzi used that term advisedly – would be left to the individual warbot, roaming the landscape or circling high above unsupervised, scanning for its target, permissioned to make independent decisions within the scope of its assigned mission.

And Yazzi’s concerns did not end there. It was one thing for a president to put American troops in harm’s way abroad and quite another to put a horde of warbots at risk. Americans were sick of war, but they seemed fine with funding new wars by running up the national debt rather than raising taxes. What if you took the risk of casualties out of the picture as well? How much would anyone care if a warbot army was unleashed on a foe a president had identified as a threat?

Nor would you expect Congress to get in the way. Those on the Hill had clearly lost their appetite for sharing the blame for entering into wars. Legislators were far happier when they could duck a war resolution and then condemn the administration later if things went wrong. It would be all too easy for some future president to run amok when all he had to do was put hardware and software at risk. But containing and ending what had been started might prove far harder.

Finally, there was the prospect of a LAWS arms race. It could hardly be assumed that America’s enemies would stand by while the U.S. raced ahead. And this time, there would be no taboo against use of the terrible new weapons. A warbot would only wreak the same type of horror that people had been unleashing on each other for hundreds of years.

So, Yazzi had put the brakes on. Brand new to the job, he hadn’t felt empowered to kill the Pittsburgh Project entirely, nor did he want to see the U.S. fall behind in robotic R&D. But he had felt able to put a hold on the development of autonomous capabilities and to dramatically reduce Pittsburgh Project funding – much to the relief of the military. A large and increasing percentage of the Pentagon’s budget had been redirected to LAWS development, and the traditional military’s own future had suddenly become muddy and uncertain as well.

All of this had played out behind the scenes, out of public view. But select members of Congress had already been briefed under the prior administration, and their support for the Pittsburgh Project had been strong. Now this upstart president – an independent candidate who did not even have the support of a major party – was throttling it back. Nate Greene, among others, was outraged and biding his time, waiting for an opportunity to put the Pittsburgh Project back on steroids. Nor was Greene alone in his enthusiasm; Dick Gould had argued hard against deprioritizing it.

And now there was this news from China.

Yazzi noted that the electronic version of the PDB included several links to supporting resources. One was a video smuggled out by the worker who had been turned by the CIA, or perhaps intercepted on its way from the R&D facility to another Chinese government site.

Yazzi clicked on the link and found himself looking forward and down on varied terrain, its ditches, bushes and other obstructions barely visible; the feed must have been captured just before dawn or at dusk. The view changed slowly; presumably the video originated from one of the autonomous drones mentioned that morning. As the vehicle appeared to descend, the video zoomed in more closely on the landscape, which suddenly changed from vague black shapes to a gray expanse splashed with washes of light orange and pixelated with dots of dull red.

It took Yazzi a moment to figure out the camera was now displaying thermal rather than visual images. The video darted from one warm area to another as the drone moved closer. The blobs might be rocky areas retaining more heat from the sun than the surrounding greenery. And the points – would those be people?

They would. After presumably analyzing the thermal data for heat signatures typical of human beings, the drone switched back to an eerie night-vision view of black backgrounds and glowing green highlights. The camera was jagging back and forth now, darting from one figure – or perhaps dummy – to another, some in plain view and others crouching in ditches or partially hidden by bushes. Some appeared to be wearing civilian dress and others what might be uniforms. Then the drone appeared to gain altitude, turning away and then circling back for another pass over the same terrain.

Yazzi frowned as the video wheeled, perhaps giving the drone time to digest this new information. What would it be looking for, and what would it do with the information?

Once again, the answer came swiftly. The drone had completed its turn. It was accelerating now, and descending, no longer high above the landscape. The video was still in night-vision mode, and it zoomed in on a single individual as the drone dove toward the darkened land below. As it did, a set of targeting crosshairs appeared on the torso of a figure half hidden by a tree. In the instant before the drone opened fire, Yazzi recognized the uniform of a U.S. Marine.

Author Notes: Faithful Friends of Frank (FoF) will have known that someday Turing would come home to roost. Those who have not read The Turing Test will need to consult that text to learn why it was a particularly bad idea for young David to plug this particular thumb drive into his laptop. Of course, no one should ever plug a thumb drive of unknown origin into their computer – that’s how the U.S. and Israel are believed to have infected Iran’s nuclear facilities with malware that destroyed thousands of centrifuges.

FoF will also recall President Henry Dodge Yazzi, the Navajo independent candidate who was the surprise winner of the presidential election described in The Lafayette Campaign. In this chapter, you see the first seeds of the subplot I’m still wrestling with, involving the responsibility a president bears for developing – or not developing – new and potentially terrible weapons. The jury’s still out on whether I will make only passing references to the Manhattan Project (the crash program that enabled the U.S. to create the atomic bomb during the Second World War) or whether I will develop this concept into a more robust part of the plot, with new characters analogous to General Leslie Groves (the bureaucrat behind the project) and Dr. J. Robert Oppenheimer (the physics genius who led the bomb design team at Los Alamos).

On the one hand, adding the weapons-responsibility thread offers the opportunity to give the book a more developed ethical dimension, as well as some additional characters that could make it a more interesting read. On the other, there’s the risk that this would bog the plot down, aggravating readers who want a fast, hard-to-put-down thriller. This is one of the major decisions I’ll be working out as I make my way through the first draft, and I look forward to your reactions as this thread develops.

And finally: long-time readers will know that I am fond of planting “Easter Eggs” for readers who like to look for them. Software programmers often hide things in their code that are intended to delight and surprise. Sometimes they are snippets of video or other little bonuses that may either be hard to find or may appear unexpectedly. A favorite from The Turing Test was a license plate that included the URL for an XKCD comic strip about rogue AIs. I’m particularly likely to play this game with names, and in this week’s chapters the names of several of Yazzi’s advisers are portmanteaus derived from the names of some of the most famous cartoonists of my childhood and their title characters. If you’re so inclined, see if you can spot and figure them out.

Next Week: Frank enters the picture. Continue reading here

Download the first book in the Frank Adversego Thriller series for free at Amazon and elsewhere
