I’ve been busy running experiments – well, the algorithms have been busy, that is, once I set them running. Years ago, I wrote about using a genetic algorithm to evolve reusable launch vehicle designs, including the best way to pull it off. I called this “what” meets “how.” So this wasn’t only about a power supply, a new technology, or the features of your rocket engine. It also included how lean your organization ran, how it worked with suppliers, and how you organized. Typically, no one wants to run an experiment that changes more than one variable at a time. I thought – why not? Better yet, let’s change them all – dozens – simultaneously.
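The core of that idea can be sketched in a few lines. This is a minimal, hypothetical toy – the variables, weights, and fitness function are placeholders I made up for illustration, not the actual launch-vehicle model – but it shows a genetic algorithm mutating and recombining dozens of design choices at once rather than one at a time:

```python
import random

N_VARS = 24          # dozens of "what" and "how" choices, encoded as 0/1 genes
POP_SIZE = 40
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Placeholder objective: reward a weighted mix of switched-on choices.
    # A real model would score cost, schedule, risk, reliability, etc.
    weights = [((i * 7) % 5) + 1 for i in range(N_VARS)]
    return sum(w * g for w, g in zip(weights, genome))

def crossover(a, b):
    # Single-point crossover: splice two parent designs together
    point = random.randrange(1, N_VARS)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each gene with small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]   # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

Every generation changes many variables in many candidate designs simultaneously; the selection step, not the experimenter, sorts out which combinations survive.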

“Trust the director” was one of the many memorable phrases in the Netflix series “Travelers.” The premise: time travelers from the usual dystopian future travel back in time to make that future less dreary. A novel element is that they get their orders from an AI, the “director.” They constantly remind themselves to follow those orders, trusting that the quantum AI from the future knows all and sees all possibilities. This proves easier in theory than in practice. It’s easy to lose faith in what we don’t understand (with the obvious metaphors to religion and deities). Not everyone is on board with taking orders from a machine. Sound familiar?
To judge from the news, battles are raging between the travelers who would say “trust the director” and those who couldn’t care less about the correct answer, favoring human choice. Choices – a word I use throughout the paper I just uploaded to academia.edu as a draft. (Comments welcome!) I fall on the side of the director, to say the least, letting the machine do its thing for the simple reason that the possibilities are beyond me. A human can set up a game, some rules, and competing objectives (the tricky part), but it takes an AI to explore every possibility. At one time at NASA, we also challenged the directors – the human ones – to make choices and explore possibilities.
Even before the loss of Columbia, NASA leadership was fond of reminding anyone providing advice that it was all merely for consideration. Costs, schedule, risk, and reliability were all vague possibilities, whereas doing what was appropriated was real. For a time, you could catch a manager (likely recently returned from training) saying they only focused on what they could control. Voilà! How effective. Pick your battles and all that. This sounded great in theory, but in practice, what a manager said they controlled approached zero once a project was approved. The ship was out to sea, the course was set, and even the highest level of leadership shared the wisdom of realizing your job was to keep the engine running. The analysts and advisors reveling in “choice” were, at best, naïve. At worst, they were saboteurs trying to steer westerly instead of north, staging a mutiny, starting with taking control of the bridge and the engine room. (Next time, use an AI to figure out how to take both simultaneously.) Wisdom was in seeing choice as an illusion. If you realize this, you might become a “director,” too.

I am old enough to remember when professors thought spreadsheets were the spawn of Satan. “We will raise a generation of engineers who can’t do math in their heads,” they warned. But to feel really ancient, I remember handing my calculator to a professor during a test as she walked the rows of desks, checking to make sure no one had a “programmable” calculator – the latest thing. (Some did graphs, too!) So I am likely not the best, unbiased source for which side to choose in this debate. In my experience, machines were always valuable tools, allowing humans to focus on understanding rather than mind-numbing number crunching. Oddly, in some instances, we don’t think twice about trusting AI-ish algorithms that hardly anyone understands. The Traveling Salesman problem is one of these: finding the shortest route for hitting every location on a list. FedEx, anyone? Trust lags in other walks of life, and we say, “Not so fast.”
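The routing problem mentioned above is a good example of trusting a machine’s answer without following its reasoning: the exact Traveling Salesman solution is intractable at delivery scale, so planners lean on heuristics. Here is a minimal sketch of one of the simplest, the nearest-neighbor pass – the city names and coordinates are made up for illustration:

```python
import math

# Hypothetical stops on a delivery route (name -> x, y coordinates)
cities = {
    "A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 3),
}

def dist(p, q):
    # Straight-line distance between two points
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_tour(start="A"):
    # Greedy heuristic: from each stop, go to the closest unvisited one.
    # Fast and usually decent, but not guaranteed optimal.
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: dist(last, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour()
print(tour)  # -> ['A', 'B', 'C', 'E', 'D']
```

No one at the depot re-derives the route by hand; the output is simply trusted, which is the point.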
What I do know – though I wish I had seen this ten years ago when I first worked on this with the Department of Defense – is that a machine can deliver an “aha” moment that otherwise escapes detection. Complex problems are often overcome by sheer firepower and complex solutions – running for days on the laptop. I added that “aha” moment to the work, much evolved over the years. Another aha? I will assume we can make choices. Many. There is more latitude to steer the ship than many believe. And I will trust the director – the AI version – for now.