Definitely an enjoyable episode this week!
---
How the hell can a human design a piece of software an AI can't?
As I understand it, both the Machine and Samaritan are built to infer threats** from actions, and to identify the parties threatening or being threatened, including the systems themselves. Part of this process involves extrapolating actions to see how things might play out, with and without the systems' operatives. They have the capability to set things in motion and to communicate directly, but that's about it. Abstracted, the systems are collections of logic-based entities that determine whether something is TRUE or FALSE based on a set of rules***. Sure, the systems have learning capabilities, but they're like IBM's Watson in that everything they learn is just another rule used in a TRUE or FALSE evaluation: does this action lead to a threat? Does this action lead to an action that leads to a threat? Does this action lead to an action that leads to an action ... that leads to a threat?
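To make that concrete, here's a toy sketch of the kind of rule-chaining I mean. All the actions, rules and names are my own invention for illustration - obviously nothing from the show:

```python
# Toy rule base: each action maps to the actions it can lead to.
# Everything here is invented for illustration.
LEADS_TO = {
    "buy_fertilizer": ["build_bomb"],
    "rent_van": ["scout_target"],
    "scout_target": ["bombing"],
    "build_bomb": ["bombing"],
}

THREATS = {"bombing"}

def is_threat(action, seen=None):
    """Does this action lead, however indirectly, to a threat?"""
    if seen is None:
        seen = set()
    if action in THREATS:
        return True
    seen.add(action)
    # "Does this action lead to an action that leads to a threat?" - recurse.
    return any(
        nxt not in seen and is_threat(nxt, seen)
        for nxt in LEADS_TO.get(action, [])
    )

print(is_threat("buy_fertilizer"))   # True
print(is_threat("buy_groceries"))    # False - no rule chains it to a threat
```

Note that the evaluator never understands anything about vans or fertilizer; it just chases rules until one bottoms out in TRUE or FALSE.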
I'm sure that, given enough time, the process of designing software can be broken down into a series of rules to be followed, and I'm aware that computer scientists have developed algorithms that can be considered "creative" (algorithmic composition using knowledge-based systems is a relevant example), but I don't believe the design of software is technically possible for either system.
To design software, you need a goal - a set of requirements to meet. Assuming the computer system has some magic metric by which it can measure how well a requirement is met, it needs to start with nothing and use the basic building blocks of programming to reach its goal. Rather than going off in infinite directions to figure out how best to start programming****, such a system should start at the goal and work backwards. The Machine and Samaritan, in contrast, take whatever data they're given and work forwards until they reach a threat. Only when the threat concerns them do they work out a strategy to solve it... but only to the point where the actions in this strategy, when evaluated, return "FALSE" to "is there a threat?".
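Roughly the contrast I mean, in toy form (again, the rules, facts and names are entirely my own invention):

```python
# Toy rules: (premises, conclusion). All invented for illustration.
RULES = [
    ({"has_funds", "has_materials"}, "can_build"),
    ({"can_build", "has_target"}, "threat"),
]

def forward_chain(facts):
    """Machine/Samaritan style: start from the data and derive everything,
    flagging a threat only if the chain of rules happens to reach one."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Designer style: start from the goal and work out what's needed."""
    if goal in facts:
        return True
    return any(
        all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
        if conclusion == goal
    )

facts = {"has_funds", "has_materials", "has_target"}
print(forward_chain(facts))             # derives 'can_build', then 'threat'
print(backward_chain("threat", facts))  # True - reached from the goal side
```

The forward chainer just churns through whatever facts it has and stumbles onto conclusions; the backward chainer starts from the goal and asks what would be needed to get there - which is the direction software design runs in.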
In short: in my opinion, both the Machine and Samaritan are essentially very limited, dumb systems. They may appear intelligent, but all they do is make logical jumps, with no real comprehension of the data. They're just not designed to design software, so instead they orchestrate ways to meet goals.
** Well, they have a goal state things are compared to - usually, this is whether there's a threat, but Samaritan has been shown to experiment, and can have goal states different from "is there a threat?".
*** The classic example here is a thermostat: a single agent (the logic-based entity) that takes input (a reading from a temperature sensor), plugs it into a rule, and acts upon what it determines (IF INPUT < GOAL TEMP { turn on heat } ELSE { turn off heat }).
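(For the curious, here's that footnote's rule as runnable Python - read_temperature(), heater_on() and heater_off() are hypothetical stand-ins for real hardware:)

```python
# The thermostat footnote, as runnable Python. read_temperature(),
# heater_on() and heater_off() are hypothetical stand-ins for real hardware.
GOAL_TEMP = 20.0

def thermostat_step(read_temperature, heater_on, heater_off):
    # IF INPUT < GOAL TEMP { turn on heat } ELSE { turn off heat }
    if read_temperature() < GOAL_TEMP:
        heater_on()
    else:
        heater_off()

# Quick demo with fake hardware:
thermostat_step(lambda: 18.5,
                lambda: print("heat on"),
                lambda: print("heat off"))   # prints "heat on"
```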
**** Consider chess: a game with strict rules of movement for a finite number of pieces on a fixed-size board. The number of possible games has been estimated, but solving the game is, in this day and age, practically impossible. To write a piece of software, you can combine an unbounded number of statements in an unbounded number of ways, with the number of variables limited only by the hardware you're developing on.
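For scale, a quick back-of-the-envelope using the usual rough figures (about 35 legal moves per position, games around 80 half-moves long - the same ballpark reasoning behind Shannon's famous ~10^120 estimate):

```python
# Back-of-the-envelope size of chess's game tree: ~35 legal moves per
# position, games ~80 plies (half-moves) long. These are the standard
# rough figures, not my own measurements.
from math import log10

branching_factor = 35
plies = 80

print(f"~10^{log10(branching_factor) * plies:.0f} possible games")  # ~10^124
```

And that's for a game with fixed pieces and a fixed board. Software design doesn't even have a fixed board.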