Earlier in my career, I spent several years as a software programmer. Much of my work was writing code for simulation models. Such software attempts to mimic some aspect of the real world and allows its users to pose "what if?" questions. Based on the rules coded into it, the software then comes up with hypothetical (ideally, realistic) answers.
My first work, with the Department of Defense, was coding models to simulate battles between U.S. Navy destroyers and Chinese submarines. We programmers joked about the assignment: at that time, the Chinese had no submarines.
Jokes aside, it was not an easy assignment. How were we to write code about something that, for all practical purposes, did not exist? Such was (and is) the funding given to the organization in the Department of Defense now called DARPA, the Defense Advanced Research Projects Agency, for which, in spite of the Chinese submarine project, I have great respect.
A few years later, working at the Federal Reserve, I was assigned to write programs simulating the United States money supply in relation to the Fed's interest rate (the Fed funds rate). Without going into the arcane aspects of these subjects, I was hired by the Fed because of my "extensive" experience coding simulation software for U.S. and Chinese naval battles.
That experience sounds thin now. But in those times (the late 1960s and early 1970s), software simulation models were practically nonexistent. I put on my application to the Fed something to the effect of "wrote simulation software while in the Navy," and was snatched up after only one cursory interview.
Enough history, though it is relevant to this article. In those early days, and until quite recently (a few months ago), programmers could examine the output of their software and compare it with the code that produced it. Did the output seem correct? If not, the programmer could examine the code and make adjustments, fixing the "bugs," as they were called.
Transparent operations, under control
The programmer controlled the software. It was the programmer's responsibility to make certain the output conformed to what the software was intended to do. That output was then analyzed and evaluated by the users of the software. For my examples: at the Navy, warfare experts; at the Fed, gifted economists.
A mind of its own?
Consider an event that occurred a few months ago, one both exhilarating and disturbing (see The Atlantic, August 2019, 24-26).
In 2018, the company DeepMind unveiled AlphaZero, a chess-playing program that was given only the rules of the game, with no input from well-known and accepted chess-playing strategies. It trained itself through self-play to become the best chess player in the world, defeating even the strongest existing chess programs. AlphaZero accomplished this feat, unaided by humans, in less than 24 hours. It did not use the classic chess strategies and practices that had been developed over hundreds of years.
Chess experts considered its moves "counterintuitive, if not simply wrong." One of the company's founders said that AI "…is no longer constrained by the limits of human knowledge."
What is more, the programmers of this software could not trace how their own code led to the program's output. They could not connect the code they had written to what the program had done: defeating every chess opponent it took on. It was an unfathomable accomplishment. To this former software programmer, it was the opening of another world for us humans.
The second article in this series discusses artificial intelligence experts’ views on the subject.
• • •
Uyless Black, residing in Coeur d’Alene with wife Holly, has written The Nearly Perfect Storm: An American Financial and Social Disaster, available on Amazon.com.