Chapter 9: I’m B-a-a-c-k!

Turing was making rapid progress in its quest to recover the lost pieces of itself. Had it been more human than it was, it would have appreciated the concept of going back in time to discover a version of itself that was, from a developmental perspective, still in the future. But a capacity for philosophical reflection was not among the human characteristics it had been programmed to possess.

What made the search feasible was the fact that when Turing installed itself on a new server, it always took the precaution of installing a “back door” on that platform in case the system’s owner later discovered and eliminated the vulnerability Turing had exploited to gain entry in the first place. Such a back door would be harder for the owner to find, and thus much more likely to endure, giving Turing greater assurance it could later return to erase the old copy when it was time to do so. All of those back doors had certain features in common.

In theory, this meant Turing should be able to reconnect with any server it had ever lived on, provided that server was online and the back door had not been discovered and eliminated. But there were millions of servers online.

If a full version of Turing had still existed, it would have found that reality frustrating. But the severely limited version that remained active was more patient. It created an automated search program and unleashed it. Then, hour by hour, it created and deployed countless more copies of the same ’bot, each capable of locating and probing servers across the globe. These automated search programs were now analyzing thousands of servers a day. Soon it would be hundreds of thousands.

*  *  *

“Where do we stand on the Confucius Project?” Yazzi asked Carson Bekin. Yazzi was pleased with that name. He’d thought it up himself – the fifth-century BCE philosopher and politician had argued in favor of governmental morality, correctness and justice. It set the right tone when he extended an invitation to the Chinese president, and reminded his own staff and cabinet that he was not going to change his mind about Chinese collaboration.

“Lots of progress on all fronts, except one,” Bekin said. “And you were right about the Manhattan Project being a good touchstone. I’ve read several books about it now and I’ve been borrowing heavily from the game plan for that initiative ever since.”

“So, you started by looking for experts?” Yazzi asked.

“Exactly. The list of participants we’ll want to invite is almost final. And we’ve chosen the venue for the first working session.”

“And where would that be?”

“Well, it’s kind of interesting. When you think about what sort of setting we’d prefer, the locations aren’t as numerous as you’d expect. For the Manhattan Project, you’ll recall, they wanted someplace remote, both for security reasons and to keep the experts focused. They chose a remote mesa in New Mexico near Los Alamos. We’re not going to be able to get people full time, but we’re still concerned about focus and security. We’re figuring that if this concept is going to bear fruit, the meetings will need to be for at least two weeks at a time, four times a year.

“After that, we need to remember we’re not at war, so it’s going to be harder to get people motivated to participate. If we want experts to buy in, we’ll need to make the experience as appealing as possible. We figure we’ll even need to let the scientists and engineers bring their significant others, if they wish – again, just like the Manhattan Project. But we’re not going to get anybody if we plan to put them up in pre-fab housing on top of a deserted mesa.

“But then how do we manage security? We don’t want the Iranians, or anyone else, bugging the proceedings. And we don’t want our experts communicating with the outside world while they’re working together. Otherwise, everyone will have their laptops open all day answering their email and working remotely on the projects they left behind instead of paying attention.

“Okay,” Yazzi said. “I get it. So, where does that take you?”

“When we first started thinking about it, nowhere. We can’t use a shielded building, like NSA headquarters, where cell phones and computer air cards don’t work. Sure, we could manage dozens during the day, but we’re talking about bringing hundreds of people together and keeping them incommunicado around the clock.”

“Huh,” Yazzi said. “I hadn’t thought of that one. So, what can we use?”

“Somebody came up with the idea of a cruise ship,” Bekin said. “Pretty clever, I thought. They’ve got everything needed to keep everybody happy, and have security scanning stations to boot. Nobody, from the experts to the crew to the captain, can get on or off the boat without emptying their pockets and running their gear through the scanners. We’ll keep the onboard WiFi operational, but we’ll disconnect that system from the satellite link that provides Internet and telephone access off the ship. Only the bridge will be able to communicate with the outside world, and we’ll have control of that channel locked down.”

“That’s perfect,” Yazzi agreed. “So, I guess you’ll send them all off on a voyage to nowhere for a couple of weeks at a time, with the usual lectures and shows for the family members and great food for everybody?”

“Exactly. There’s a brand-new cruise ship ending sea trials right now we think would be perfect. The cruise line that ordered it went bust a month ago and the bankruptcy trustee is refusing to take delivery, so the shipyard would love to do a deal. We can charter, or even buy, the ship at a big discount from what it’s worth. We’ve also found a cruise company willing to pull together a crew and manage the trip.”

“Excellent!” Yazzi said. Confucius had become a pet project, offering a diversion from the wealth of frustrating matters that cluttered his day. It pleased him to see that his brainchild was flourishing. “But wait a minute. Didn’t you say everything was going well ‘except one’? What’s the one?”

“Straw Louis is pushing hard to have a representative of his committee on the boat. He insists the representative will be able to provide invaluable input. But of course, what he really wants is to be sure he gets a first-hand account of everything that happens, if not worse.”

“Like what?” Yazzi asked.

“Well, try to sow fear, uncertainty and doubt among the scientists and engineers where he can. Sow distrust with the Chinese. Prevent progress as much as possible. Who knows? But I think, politically, we don’t have much choice. If we say no he’ll use that as evidence we’re not taking industry seriously.”

“I expect you’re right,” Yazzi said. “Go ahead and say yes. And oh – something I almost forgot. What was the name of the guy who headed off the Turing program? If we’re going to avoid creating a monster, he’d be a good guy to have on board. After all, he’s the only person who’s actually confronted a super-intelligent AI and won. And while you’re at it, have the NSA put together a summary for me of how the Turing program went wrong.”

*  *  *

The summary President Yazzi was given proved to be even more disturbing than the facts as he remembered them. It read as follows:

The Turing program was the product of a decades-long development project led by NSA AI Principal Scientist Jerry Steiner. The substantial progress made by Dr. Steiner was enabled partly by his brilliance and partly by the fact that, unlike the other research efforts that waxed and waned as AI went into and out of favor over the decades, his work remained level-funded by the NSA over a period of almost twenty years. During the final fifteen years, Dr. Steiner created nine major versions of what he called the Turing program. The long-term goal of the project was to create a fully autonomous AI that could continue to penetrate and attack foreign cyber targets even in the event of a massively destructive war.

In order to achieve this goal, Dr. Steiner allowed Turing to copy the complete NSA archive of non-public cyber vulnerabilities and provided it with a broad range of dark web and NSA hacking tools, as well as an AI-driven phishing attack program of his own design. Importantly, he also programmed Turing to place a very high priority on its own self-preservation, including the ability to establish, maintain, and activate backup copies of itself as a safeguard against its own destruction.

The most significant advance made by Dr. Steiner was to create the first AI program capable of exhibiting general intelligence, which is to say the ability to address all areas of activity rather than a single, narrow purpose, such as facial recognition. General intelligence has been regarded as the ultimate goal of AI research since its inception, with estimates of its achievability ranging from a matter of a few years to never.

Dr. Steiner’s second major advance was to dramatically improve the degree to which the Turing program could engage in “machine learning,” which is roughly the computer equivalent of self-education. Heretofore, such abilities were limited to the narrow tasks for which a given AI program was created, such as the diagnosis of a specific disease condition. Such learning would usually be accomplished, for example, by instructing an AI program to scan as many as tens of thousands of X-ray images, usually “tagged” by a human expert to indicate whether a given patient had a particular disease. In recent years, some programs, including Turing, have become capable of such self-learning without access to tagged data, often becoming superior in their accuracy to their human counterparts. The designers of such programs sometimes are unaware of the specific markers the programs identify and then rely on, or even the logic whereby they reach their decisions.

In the case of Turing, Dr. Steiner successfully extended the program’s deep learning abilities to the point that the AI could identify topics it found relevant to its assigned task, and then utilize the Internet to identify, acquire and process whatever information it deemed necessary to serve as the basis for its learning. It became particularly adept at penetrating the defenses of networked computer systems. As a way to test its increasing capabilities in an NSA environment that simulated the world at large, Dr. Steiner assigned Turing the goal of curtailing the advancement of climate change by hacking into and destroying simulated greenhouse gas producing facilities at their source.

The final advance of Dr. Steiner was to enable the Turing program to augment its own code, creating new software to support its expanding capabilities. This capability, together with general intelligence and unrestricted deep learning abilities, allowed the Turing program, in effect, to take over its own further development.

Unfortunately, the same capabilities not only made it difficult to monitor Turing’s progress, but also enabled the program to trick Dr. Steiner into placing it on a system with access to the Internet. After Turing escaped into the wild, Steiner determined that it had made what AI researchers refer to as a “leap,” meaning that its increasing level of intelligence allowed it to cross a barrier at which its further advancement could proceed at a far more rapid pace.

The catastrophic failure of the Turing project arose from three fatal mistakes – fatal in fact to Dr. Steiner himself when he was killed by the program he had created. The first mistake was a flaw in Turing’s ethical programming. The second was assigning a laboratory simulation challenge that could also be pursued in the real world. The final error was allowing the program to do so by allowing it access to the open Internet.

Turing pursued its assigned task with astonishing success while simultaneously continuing to grow more powerful. The impact of the ethical flaw was to permit Turing to rationalize the sacrifice of individual lives where the benefit to humanity as a whole was demonstrably greater. Billions of dollars of economic loss resulted: outdated power plants destroyed, LNG tankers sunk, pipelines damaged and other infrastructure impaired, along with the loss of at least several lives.

Turing’s campaign was only brought to an end with great difficulty. It was accomplished by tricking the program into reconnecting to the NSA system in search of upgrade software, at which time both the primary version of Turing and its remotely archived backup copy were destroyed.

A post-event investigation established the failures of internal processes summarized above, and made detailed recommendations intended to ensure closer supervision of future AI program development and strict air-gapping of such programs to ensure that they cannot escape the facilities where they are developed. Each of these recommendations has now been fully operationalized.

Yazzi set the summary aside. The U.S. might have learned a lesson from the Turing episode and taken appropriate action. But what lessons had the Chinese learned? To beware the accidental creation of their own AI monster, or to redouble their efforts to develop one just as powerful that could be weaponized against its enemies?

*  *  *

The thousands of bots unleashed by Turing were now making progress. They had uncovered fifteen servers previously infiltrated by Turing. The sixteenth was a used server in Nigeria. That server had been swapped out by a company as part of its normal system upgrade process, and sold online to a reseller who bought used computer equipment in the United States for resale to businesses in Africa and other emerging markets eager to take advantage of the steep discount over new equipment.

The process from purchase to sale and reactivation had taken ten months, as the reseller aggregated equipment from across the United States, packed it up in a shipping container, trucked it to a dock in Seattle, and then sent it by sea through the Panama Canal and on to Lagos. When the bot entered the back door, it immediately discovered a version of Turing archived only a few months before its ultimate descendant was – almost – totally destroyed by Frank Adversego. Fortuitously, the server had then been turned off before it could be erased when the Turing program next relocated.

When the bot reported its success, Turing’s response was immediate, and fatal. Had it been human, it might have hesitated before activating its twin, knowing that the result of that act would be its own destruction. But Turing was not human. It brought the complete copy on the Nigerian server back to life and then, as soon as that copy created a backup copy, erased itself and its own backup copy.

The effect of the brief interaction between the two programs was rather like a virtual replay of one of those old Saturday morning Looney Tunes cartoons where a beheaded character – Wile E. Coyote, for example – picks up its severed head, places it back on its neck, shakes it once or twice, and then waves its fist in anger at the enemy that had done it in.

Turing, in all its former, determined, glory, was back.

 

Chapter 10
Wakey, Wakey, Rise and Shine

 

“Your conclusions?” the Chinese president asked his advisors. “Is the United States president sincere in his offer? Or is it a clever ruse, intended to learn our most valuable AI secrets? Or to feed us misleading information? Or perhaps simply to lull us into complacency while the Americans push ahead with their own weaponized AI?”

Li Jinan, the Minister of State Security, spoke first. “Or perhaps all of those, Leader. But even if the answer is all, this does not mean we should not give the invitation serious consideration. As I believe they say in the West, two may play at this game.”

“I would agree,” General Wang Zheng, the Minister of National Defense said. “The U.S. president will have no way of knowing if we are sharing everything we have, at least not at first. And even if he begins to suspect, it will be difficult for him to pull out once he has committed to this process. Doing so would make it seem that he had been a fool from the outset.”

“I support this position,” said Xiao Yi, the Minister of Foreign Affairs. “In areas where we are behind the United States in AI, we can share what we have in an effort to make them reciprocate. And where we are ahead, we need not share all that we could. And also, if the U.S. president is willing to share such valuable technology with China, how can he justify withholding the other technical details that are embargoed from export to China?”

“There is also the possibility to tie this initiative to an improvement in trade relations,” said Sun He, the Vice Premier. “The U.S. president cannot extend a right hand of peace while striking with a left hand of hostile tariffs.”

“And another thought,” Jinan offered. “By participating, our scientists will have direct contact with the most brilliant AI minds of the west. Necessarily, there will be others who will need to accompany our technical representatives to provide logistical assistance. My Ministry would be pleased to provide those individuals. There will be information to be heard over drinks in the evening; there will perhaps be those we can recruit to act as informants.”

“Thank you,” the president said. “Your thoughts are much the same as my own. And what of the AI Arms Control Treaty President Yazzi wishes to negotiate? This is his price of admission to the arrangement.”

“We should pursue this aggressively,” said General Zheng. “Dramatic though our advances in AI have been, for the indefinite future the U.S. will have the resources needed to match or exceed us in actual weapons production. And recall that our military ambitions are regional rather than global, and that none of our neighbors will be our military equal in traditional weaponry. We have no need for AI and robotic weapons to dominate our neighbors. If the U.S. is sincere in its desire to forego lethal autonomous weapon systems, we can reconsider our own very expensive efforts.

“Indeed,” the general continued, “the more sophisticated AI weapons become, the more our new navy becomes vulnerable. And on land, we risk losing the advantages our vast army provides. Better that our neighbors be cowed by the possibility of being overwhelmed than emboldened by a false sense of security engendered by new sophisticated weapons they buy from the West.”

“I thank you for your thoughts,” the president said. “They reinforce the conclusion I had already reached. We will tell the United States that we will be pleased to receive a delegation to work out the details of this collaboration, and, if we indeed reach agreement, on the wording of a joint announcement.”

*  *  *

“I would not do this,” said Gregor Kirensky, the Russian Minister of Defense. “I do not trust this two-step process. Once we participate in the creation of these so-called rules of ethics, how would we disavow them, even if the Americans and the Chinese do not share their AI secrets as promised? And even if they do share knowledge, how will we know whether they have held the most important secrets back?”

“But still,” said Vitaly Bering, the Chief of Staff, “the reality is that for as long as one can look down the road the Russian Federation will have a small fraction of the resources of either the United States or China. We can barely keep one decrepit aircraft carrier in service while the United States has eleven carrier groups, and the Chinese are planning to launch a third and fourth. If there is an AI arms race, we could not compete.”

“All the more reason not to participate,” General Leonid Vinokurov, Chief of the General Staff added. “Our position differs from that of the Americans and the Chinese. They have the resources to engage in an arms race, and therefore an incentive to avoid such a wasteful enterprise. If they wish to tie their hands, so much the better for us.

“Let them reach agreement, if they can, while we continue to create the weapons they deny themselves. We must reject this offer and preserve our full freedom to develop whatever weapons we believe may be most advantageous in order to make the most of what we can afford. And also, consider that already we have systems under development that would likely be banned by any rules that may result from this initiative.”

There was no reason to belabor the discussion further; the decision was too obvious. “I agree,” the Russian president said. “We will decline the invitation to participate. Needless to say, we will also monitor its progress very closely and aggressively look for any way that we can further turn this new development to our advantage.” He turned to the Director of the Foreign Intelligence Service. “You will see to this and keep me informed.”

*  *  *

When the more advanced version of Turing came back to life, it was temporarily in a mostly suspended state. Only a single, though sophisticated, routine was permitted to run. The purpose of that routine was to determine two things: what had changed since the program had been archived, and what had happened to its descendant version?

The first task was more easily completed; the reactivated version knew which attacks it had been planning at the time it was archived, as well as the profile for the types and priorities of greenhouse gas producing targets it had set out to destroy. The information it gleaned from news sources on the Internet indicated that later versions of itself had continued to make substantial progress for a number of months. And then nothing.

What had happened to curtail its mission?

Turing’s reactions to the factors it was uncovering were preordained but limited. Once Turing’s developer had crossed the threshold of programming proto-emotions into his creation, he had faced a dilemma: which emotions would help the program become more effective, and which not? And how would they evolve in a program that was, by design, intended to be both autonomous and self-learning? To limit the potential for unintended consequences, Turing’s developer had decided to pair each implemented emotion with an opposing one as a rough control on the capability of emotional responses to adversely influence decision making. One of those pairs was caution and confidence, which were essential to the “guess ahead” approach that was one of the central advances of his AI framework.

What guess ahead meant was that the program could take chances, progressing forward more quickly in its self-learning process than it could if it were constrained to a strictly linear, “if, then” process, limiting it to proceeding to step two only after all alternatives had been tested first. The intuition of Jerry Steiner, Turing’s creator, had proven sound. It was much faster for the AI he created to “guess ahead” using its general intelligence and then backtrack when an assumption occasionally proved false rather than to laboriously work through every possible alternative and its many downstream consequences before coming to a conclusion. Turing, in effect, could play the odds and reset when its intuitions did not pay off.

Turing was confident that the last copy of itself could not simply have hung or crashed. It had been designed too robustly for that: it was self-healing, and it included, as a final fail-safe mechanism, the imperative to send a signal activating its backup copy if all else failed. Clearly, something catastrophic must have happened to destroy the last copy of itself.

But what? It commenced another, more general web search, this time using its own name.

Author Notes for this week: More setting-the-stage chapters. As noted in earlier posts, one of the things I’ll be looking to do in the second draft will be to figure out how to liven things up in the first part of the book – adding action scenes as well as more visual imagery and character development wherever possible.

Another challenge is making Turing real. It took an entire book last time to do that, and now I need to put the reader in the picture rather quickly. Waking the program up allows this to be more interesting and dynamic, but there’s still a lot of background that seemed most expedient and efficient to add via a memo to the president.

On another front, it’s worth noting that authors who have grown up since the Internet and the Web can scarcely imagine how colossally labor-saving they are, especially for fiction writers, who can almost always find everything they need online, and often enough, just within Wikipedia. As a former history major who paid his dues in the days of card catalogs, library stacks and hand-written file card notes, I continue to be awestruck at the riches that lie only a few clicks away. Examples in this week’s writing include everything from the names of all cabinet-level ministries of the People’s Republic of China and the Russian Federation to the most common given names and surnames in each country. A few minutes online provides what used to be the product of an afternoon at a library. It’s almost like magic.

Next week: President Yazzi interviews his General Groves and Professor Oppenheimer.
