1.22: The Return of the Archons

Dave
The first of the "Kirk destroys the computer controlling the population" episodes.

The Enterprise visits a planet to learn the fate of the USS Archon, lost a century earlier, and finds a world ruled by the followers of Landru.

Are Kirk's anti-machine acts violations of the Prime Directive, or is it acceptable because he induces the machine to destroy itself? Kirk says the Directive only applies to developing worlds, not stagnant ones. In TNG's "Symbiosis" the Ornaran world was also stagnant, a whole culture addicted to a drug, but Picard refused to help them.
 
This episode has a few creepy moments at the beginning. The silent, robed and cowled "enforcers" are eerie presences in that incongruously Earth-like setting. The sudden change in people's behavior during the Red Hour (which seems to last all night?) is an effective bit of television. Harry Townes conveys fear rather well. But much of the episode is not very compelling. The dungeon and flambeaux look rather too familiar after "The Squire of Gothos," one of the worst episodes. And the defeat of the Landru computer by (mostly) Kirk and (a little) Spock is, as Dave points out above, something we'd get far too much of in Star Trek (and something The Prisoner also did, in the episode "The General").

Why does a tap, at most, from one of the cowled ones' staffs suffice to capture Sulu's mind, while Kirk and Spock supposedly needed to be worked on by a device operated by one of Landru's more intelligent servants?

There's a trite anti-religion thing going on, it seems to me, in this episode. Of course the Landru cult is a parody; but I'm not sure Roddenberry saw it as such.
 
As Kirk observed, Landru couldn't give the computer his compassion or his wisdom. Being a limited machine, it couldn't comprehend these things or their importance.
 
A true AI could actually do that, though, and there's no reason it couldn't be programmed with Asimov-style Laws to prevent any harm to humans. Why did they use such a limited machine? The problem is that this story treated the whole subject very superficially. There are some real problems if we were to hand over our governance and decision-making to any machine. I suggest reading This Perfect Day by Ira Levin for more on that.
 
