4:19 Search and Destroy

As often as it happens, I never tire of seeing Root “Places to Go, People to Kill” Groves ride to the rescue. John was right when he said he should have let her finish Martine “Terminator” Rousseau. It might have given Root a little closure for Shaw.

I've got to admire Finch's tenacity in sticking to his guns by not sticking to them at all. It's almost as if he knows he's in a fictional world in which heavily armed henchmen can't touch him.

I enjoyed seeing Aasif Mandvi appear in an atypical non-comedic role. I do wonder why Samaritan bothered capturing and hauling him to the “mountain top” only to execute him. I had thought the plan was to recruit him.

The author of the scanning software Samaritan is using in its efforts to find The Machine might have proven useful. Now, how are they going to solve the problem of their arch-AI-enemy not being on a network? Tsk, tsk.
 
As often as it happens, I never tire of seeing Root “Places to Go, People to Kill” Groves ride to the rescue. John was right when he said he should have let her finish Martine “Terminator” Rousseau. It might have given Root a little closure for Shaw.

This was the whole episode for me, I suspect. Groves-centric without making her do anything absolutely implausible in story context. So I'm incapable of evaluating the ep objectively. ;) As near as I can figure, though, it was a very dramatic and interesting ep and I like the way it dealt directly with Samaritan and how all kinds of people are beginning to notice. (Speaking of, Amy Acker's expressions and whole vibe when they were in the safehouse, treading around the edges of cluing the Number into the AI, were fantastic.)

But I feel like it moved quickly over holes so that we didn't fall in them, but they were there. And, while I can see keeping Martine around as an extra-special villain in terms of her performance, the way they've actually set it up, I'm getting tired of the way they drag it out. "You should have let me kill her." "I should have let you kill her." Yep. It's just implausible that these extra-lethal people with perfect marksmanship can't seem to hit the broad side of a barn if one of them is standing by the barn, and that they can't finish it in seconds hand-to-hand vs. the minutes it takes for Reese to pull Samantha off Martine.

On the other hand, while those two are improbably equal, I do find the fight between the AIs to be a little too imbalanced. Yes, Samaritan had the initial advantage with the official status and the things that followed immediately, but the Machine is still a super-AI, too, and should be doing a better job of evening the odds. Ah... there's one of those holes. How the hell can a human design a piece of software an AI can't? Maybe S can't allow M to know it's inhuman software. But it can't fabricate a "human"? And how does executing the human who designed it not tip off M? Obviously, the guy turned up on M's radar as a person of interest. Etc. Just all very odd.

Part of the problem is that this is a show about AIs with human characters. Defining the, um, "relevance" of the humans to the story and dragging out an arc that, really, would be over in seconds or minutes rather than days/weeks/years is where the fiction is likely to get a little tricky.

Ah, sorry - rambling. Great recovery from last week, anyway. I mean, last episode. (Stupid re-runs.) That's important. :)
 
Definitely an enjoyable episode this week!

---

How the hell can a human design a piece of software an AI can't?

As I understand it, both the Machine and Samaritan are built to infer threats** from actions and to identify the parties threatening or being threatened, including their own systems. Part of this process involves extrapolating actions to see how things may play out, with and without the systems' operatives. They have the capability to set things in motion and to communicate directly, but that's about it. Abstracted, the systems are collections of logic-based entities that determine whether something is TRUE or FALSE based on a set of rules***. Sure, the systems have learning capabilities, but they're like IBM's Watson, in that everything they learn is another rule used in a TRUE or FALSE evaluation: does this action lead to a threat? Does this action lead to an action that leads to a threat? Does this action lead to an action that leads to an action ... that leads to a threat?
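If it helps, here's the sort of chain I mean as a very rough Python sketch - the toy "world", the single threat rule, and every name in it are invented for illustration, not anything from the show:

```python
# Forward-chaining sketch of "does this action lead to a threat?"
# The causal model and the threat rule are toy inventions for illustration.

# Each action maps to the actions it could plausibly lead to.
LEADS_TO = {
    "buy_ticket": ["board_train"],
    "board_train": ["arrive_at_station"],
    "arrive_at_station": ["meet_contact"],
    "meet_contact": ["hand_off_package"],
}

# A single rule: this action is itself classified as a threat.
THREAT_ACTIONS = {"hand_off_package"}

def is_threat(action, depth=0, max_depth=5):
    """TRUE if the action, or anything it leads to, matches a threat rule."""
    if action in THREAT_ACTIONS:          # direct hit
        return True
    if depth >= max_depth:                # stop extrapolating past a fixed horizon
        return False
    return any(is_threat(nxt, depth + 1, max_depth)
               for nxt in LEADS_TO.get(action, []))

print(is_threat("buy_ticket"))   # True - the chain eventually reaches a threat
```

Everything the systems "learn" would, on this view, just be more entries in tables like those.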

I'm sure that, given enough time, the process of designing software can be broken down into a series of rules to be followed, and I'm aware that computer scientists have developed algorithms that can be considered "creative" (algorithmic composition using knowledge-based systems is a relevant example), but I don't believe the design of software is technically possible for either system.

To design software, you need a goal - a set of requirements to meet. Assuming the computer system has some magic metric by which it can measure how well a requirement is met, it needs to start with nothing and use the basic building blocks of programming to reach its goal. Rather than going off in infinite directions to figure out how best to start programming****, the systems should start at the goal and work backwards. The Machine and Samaritan, in contrast, take whatever data they're given and work forwards, until such a time as they reach a threat. Only when the threat concerns them do they work out a strategy to solve it... but only to the point where the actions in this strategy, when evaluated, return "FALSE" to "is there a threat?".
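Purely as a sketch again - made-up rules and facts - the difference between the two directions looks something like this, backward chaining from a goal versus the forward chaining in the earlier sketch:

```python
# Backward-chaining sketch: start from the goal and ask what would have to hold.
# All rules, facts and names are invented for illustration.

# Each goal is satisfied if all of its subgoals are satisfied.
RULES = {
    "working_program": ["parses_input", "produces_output"],
    "parses_input":    ["input_format_defined"],
    "produces_output": ["output_format_defined"],
}

# What we already have in hand (the requirements).
FACTS = {"input_format_defined", "output_format_defined"}

def achievable(goal):
    """Work backwards from the goal toward known facts."""
    if goal in FACTS:
        return True
    subgoals = RULES.get(goal)
    if subgoals is None:
        return False              # no rule covers it and it isn't a known fact
    return all(achievable(sub) for sub in subgoals)

print(achievable("working_program"))  # True - every subgoal bottoms out in a fact
```

The earlier sketch works forwards from whatever it's handed; this one only ever asks what would have to be true for the goal to be met.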

In short, what I'm trying to say is that, in my opinion, both The Machine and Samaritan are, essentially, very limited, dumb systems. They may appear intelligent, but all they do is make logical jumps, with no real comprehension of the data. They're just not designed to design software, and so instead they orchestrate ways to meet goals.



** Well, they have a goal state that things are compared to - usually, this is whether there's a threat, but Samaritan has been shown to experiment, and can have goal states other than "is there a threat?".

*** The classic example here is a thermostat: a single agent (the logic-based entity) that takes input (a reading from a temperature sensor), plugs it into a rule, and acts upon what it determines (IF INPUT < GOAL TEMP { turn on heat } ELSE { turn off heat }).
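In runnable form, with an arbitrary goal temperature picked for the example, that single rule is just:

```python
# The footnote's thermostat rule as code; the goal temperature and readings are arbitrary.
GOAL_TEMP = 20.0  # degrees C

def thermostat(sensor_reading):
    """One agent, one rule: heat on below the goal temperature, off otherwise."""
    return "HEAT_ON" if sensor_reading < GOAL_TEMP else "HEAT_OFF"

print(thermostat(18.5))  # HEAT_ON
print(thermostat(21.0))  # HEAT_OFF
```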

**** Consider chess: a game with strict rules of movement for a finite number of pieces on a fixed-size board. The size of its game tree has been estimated, but solving the game is, in this day and age, practically impossible. To write a piece of software, you can use an effectively unlimited combination of an unlimited number of statements, with the number of variables limited only by the hardware you're developing on.
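To put rough numbers on the chess half of that comparison - the branching factor and game length below are the standard back-of-the-envelope estimates, nothing exact:

```python
# Back-of-the-envelope size of the chess game tree (rough, conventional estimates).
from math import log10

branching_factor = 35   # roughly the average number of legal moves per position
game_length = 80        # roughly the number of plies (half-moves) in a typical game

game_tree_size = branching_factor ** game_length
print(f"~10^{log10(game_tree_size):.0f} possible lines of play")  # ~10^124
```

And that's for a game with a fixed board and fixed rules; the space of possible programs has no comparable bound at all.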
 
In short, what I'm trying to say is that, in my opinion, both The Machine and Samaritan are, essentially, very limited, dumb systems. They may appear intelligent, but all they do is make logical jumps, with no real comprehension of the data. They're just not designed to design software, and so instead they orchestrate ways to meet goals.

Okay, I see what you're saying and it naturally makes sense, especially given the state of computer science today. I was just seeing it as being strong AI where these machines had been designed as you say but, given the number of moves they could think ahead and given the number of possible responses, it basically "dropped the nets" on the tennis court - the usual SF "becomes intelligent"-trope in a cascading sense. "At some point a threat will arise which would involve software, to meet that, I need additional software. Humans can be used to do it but I could just copy and vary my own code, if nothing else, and "learn" to program faster and better, so I will." The iterating process where, eventually, you just get the AIs going at each other with robots with nukes or something. A different way to get to the Terminator scenario. :) (Just kidding - as they took out the number this week, it would be surgical and would involve directly eliminating the other entity. Nukes are shotguns here.) And, as I say, this probably can't happen at all but, if it could, it actually wouldn't take long.

But for the sake of my enjoyment of the show, maybe I'll adopt the "they're dumber than Groves et al think they are". :)
 
"At some point a threat will arise which would involve software, to meet that, I need additional software. Humans can be used to do it but I could just copy and vary my own code, if nothing else, and "learn" to program faster and better, so I will."

Just think how screwed everyone will be if Samaritan comes to the conclusion that having modules added to its core system that allow it to learn how to modify code is the optimal strategy to defuse a threat!

But for the sake of my enjoyment of the show, maybe I'll adopt the "they're dumber than Groves et al think they are". :)

I can't decide if the systems are more terrifying as highly intelligent tacticians, or as dumb collectives trundling along making TRUE/FALSE evaluations. I think the show would prefer them to be seen as the former, but I'm kinda leaning towards the latter.
 
Just think how screwed everyone will be if Samaritan comes to the conclusion that having modules added to its core system that allow it to learn how to modify code is the optimal strategy to defuse a threat!

Exactly! I'm wondering why it hasn't. :)

I can't decide if the systems are more terrifying as highly intelligent tacticians, or as dumb collectives trundling along making TRUE/FALSE evaluations. I think the show would prefer them to be seen as the former, but I'm kinda leaning towards the latter.

Good point. I was taking them as the former, myself, but either one is kind of scary. I think the dumb collectives are a little less so because it's just more like a law of nature that the clever monkey still has a chance of being nimble enough to manage. As long as it's not like the laws of nature that control asteroid strikes and tsunamis and such. Those are hard to avoid or handle. ;)
 
Yes, a much more satisfying episode.

How the hell can a human design a piece of software an AI can't?
I was going to say humans are more creative.

I'm aware that computer scientists have developed algorithms that can be considered "creative"
They aren't really creative though, in the sense that we know it. But then, we don't have any machines that you would consider to be AI either.

Okay, I see what you're saying and it naturally makes sense, especially given the state of computer science today. I was just seeing it as being strong AI where these machines had been designed as you say...
Yes, I also see both sides of the argument, because given that we have this fictional world in which AI machines do exist, why not creative machines too?

I did think the whole premise of Samaritan looking for odd pieces of code was a strange one. Surely there are plenty of weird operating systems in many strange pieces of tech. I may be wrong, but is everything really Windows, Linux and iOS? There are all kinds of little strange machines, gadgets and tech connected up to the internet, the Raspberry Pi for instance. I also thought that the Machine had distributed its software between multiple remote servers to make it impossible to shut down - or was that what the interactive wall map was showing with the flashing locations? I expect that, distributed software or not, only a finite number of servers would need to be taken down before the Machine stops working. It might need to be done simultaneously, but Samaritan has the power and ability to do that.

I enjoyed seeing Aasif Mandvi appear in an atypical non-comedic role. I do wonder why Samaritan bothered capturing and hauling him to the “mountain top” only to execute him. I had thought the plan was to recruit him.

The author of the scanning software Samaritan is using in its efforts to find The Machine might have proven useful. Now, how are they going to solve the problem of their arch-AI-enemy not being on a network? Tsk, tsk.

I actually thought that Team Machine were going to recruit him. They had already agreed they need more help, and here is someone who independently worked out there is an AI ruling the country, plus he has nowhere else to go. Instead, after going to a great deal of trouble and risk to rescue him, they allowed him to walk off in the middle of a shoot-out.

And the Groves-Martine thing was not believable to me. I guess they want to keep Martine on as a stock villain heavy, but if so, don't use her. One of them should have died in the shoot-out. Both guns ran out of ammunition? - please! No one else to shoot them instead? They get into fisticuffs - Martine is a trained assassin, Groves is a computer hacker - who's going to win? Then Reese comes along and simply pulls Groves off her and carries her away! But I will forgive that if they stop doing it every week.
 
And the Groves-Martine thing was not believable to me. I guess they want to keep Martine on as a stock villain heavy, but if so, don't use her. One of them should have died in the shoot-out. Both guns ran out of ammunition? - please! No one else to shoot them instead? They get into fisticuffs - Martine is a trained assassin, Groves is a computer hacker - who's going to win? Then Reese comes along and simply pulls Groves off her and carries her away! But I will forgive that if they stop doing it every week.
Root's background, as I recall, also includes time as a contract killer. While that line of work may not make her overly proficient in hand-to-hand combat, she has certainly demonstrated an unbelievable skill with firearms.

You're right, when two Annie Oakleys engage in a firefight, only one (or neither) should walk away.
 
It's always good when the POI ties into Samaritan, so big plus this week from me! His AV software runs on 86% of the Internet? John McAfee would be proud!

I was expecting him to be recruited too, it seems that Samaritan could have used someone with his skills on board, so why not buy his company rather than destroy him personally?

On the nature of the AI... the Machine was designed to learn heuristically, hence in effect it is constantly rewriting itself and "improving" on Harold's source code. Samaritan is doing the same. It's also seeking out unique programs/code bases, like the search engine from earlier in the season and the AV this week. These can be incorporated or utilised more efficiently than "learning" to perform the same function independently.

Martine should have been finished off, yeah. Loose ends biting back, etc.! Root is definitely capable, even without the Machine in her ear...

Any guesses on the briefcase/Fabergé egg?

I also thought that the Machine had distributed its software between multiple remote servers to make it impossible to shut down - or was that what the interactive wall map was showing with the flashing locations? I expect that, distributed software or not, only a finite number of servers would need to be taken down before the Machine stops working. It might need to be done simultaneously, but Samaritan has the power and ability to do that.
I agree the Machine is surely operating in a distributed fashion, but the AV software couldn't find a trace of it on any networked hardware, so how would Samaritan know what to switch off? If it's truly operating as virtual code, then only a total Internet shutdown would kill the Machine - if it wasn't able to back itself up somewhere first? The Machine was printing and retyping its own code to survive before Harold's daily wipe was removed, so we know it's pretty resourceful at preserving itself...
 
Did you spot the reference to Mount Olympus from John Nolan? POI has a lot in common with Greek mythology: two rival gods fighting each other using mortals as pawns. Each helps their own heroes while hindering those on the other side.
 
