A robot goes into a cake shop...

Toby Frost

A robot goes into a cake shop some time in the future. The man behind the counter asks if he can help, and the robot says "I'm just browsing, thanks". But secretly, the robot thinks "I will steal that big cake later on".

Assuming that the robot's brain is basically a big computer, what form does that thought take? (Edited here for clarity) Is the idea of stealing the cake (to us, an abstract thought) the execution of a programme (perhaps called "Think about crimes you'd like to commit" or "Plan how to acquire cake")? Could someone look into his brain the way I can look into a hard drive and find that thought somewhere in a log of what he's been thinking, even though it isn't a memory, as such?
 
Surely it's not an idea as such. It's an instruction with time delay. (Wait three hours then steal cake, unless circumstances revise decision.)

ETA: I've probably misunderstood the question. Do you mean the inception of the instruction in the first place, rather than the fact that there's a time delay? Presumably its coding would throw that up as an observed opportunity to acquire the cake, because it's been coded to acquire cakes.
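Just to make that concrete, here's a minimal sketch of the "observed opportunity plus time delay" idea as a standing goal, a trigger and a scheduled task. Everything here (standing_goals, on_object_detected, the three-hour delay) is invented for illustration, not any real robot API:

    import time

    # Hypothetical sketch: a standing goal plus a deferred action, not a real robot API.
    standing_goals = ["acquire cake"]

    def on_object_detected(label, deferred_queue):
        """If a detected object matches a standing goal, queue a delayed action."""
        if any(label in goal for goal in standing_goals):
            deferred_queue.append({
                "action": "take " + label,
                "not_before": time.time() + 3 * 60 * 60,  # "wait three hours"
                "condition": "shopkeeper absent",         # revise if circumstances change
            })

    deferred = []
    on_object_detected("cake", deferred)
    print(deferred)  # the "thought", sitting in a queue as plain data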
 
Several questions and observations.
Why would it not be code?
1. It is an instruction (steal the cake), but with a somewhat vague time-delay parameter (later on).
2. Why do you think code must be an instruction? It can equally be a piece of data.
3. Your idea that everything has to be a programme is entirely false for present-day computers, and you are speaking of the future.
4. The difference between live memory and disk memory is already very fuzzy. A concept known (to IBM, at least) as single-level storage already negates that distinction.
Even executing programs are constantly paged between active memory and disk storage, both to allow large programs to run and to allow several programs to run concurrently.
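To put point 2 in concrete terms, here's a hedged sketch of the "thought" held as a plain data record that another process could read straight back out of storage, with nothing executing at all. The field names are made up for illustration:

    import json

    # The "thought" held as data in the robot's state, not as an executing programme.
    intention = {
        "kind": "planned_action",
        "verb": "steal",
        "object": "big cake",
        "when": "later on",   # the vague time-delay parameter
    }

    # Anyone with access to the robot's storage could simply read the record back,
    # much as you'd read a file off a hard drive.
    print(json.dumps(intention, indent=2))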
 
Presumably there comes a point where he thinks "I want cake" and then makes a choice from a number of ways in which he could acquire it, one of which is to steal the cake. So he makes the decision (steal the cake) and then effectively writes himself instructions as to how to do it (break in later)? Interesting.

"Code" is probably the wrong word, and I'll change it in the original question.
 
You've been spending too much time on the holodeck, playing Captain Proton with Tom Paris.
 
Could someone look into his brain the way I can look into a hard drive and find that thought somewhere in a log of what he's been thinking, even though it isn't a memory, as such?

Quite possibly not. Unless your robot was specifically designed to be Robocop's nemesis, it most likely wouldn't be doing something as simple as executing a programme. It will have gone through a learning process that led it to this place. Even today, it would be possible for that learning process to have little or no human supervision (though any company investing in this kind of thing will keep a close eye on the process). The result is that this behaviour could be absolutely unpredictable (as in Facebook chatbots developing their own language, which happened last year).

As for a record/log, again it would depend on design. The robot itself may keep a record of its actions but a complete record of its "thoughts" might be so messy as to be useless. It might also require so much storage space as to be impractical. A researcher might recover the command lines to actually steal the cake but untangling the thought process might take months or be literally impossible (esp. in the not-too-distant future if quantum computing becomes readily available). Already, some AI machines are essentially black boxes where we know what goes in, we know what comes out but we have no clue what goes on inside. (That's a bit of an overstatement but not far off fact.)

I can't wait to read a story about a robotic cake thief! Maybe Aardman Studios will turn it into a claymation spectacular :)
 
Ooh, this is very similar to the autonomy frameworks for mission planning reactors we're coordinating for Mars Rover missions in the Space Agency. The aim within ESA and the agencies is to create rovers with greater amounts of autonomy, so that they can process their own mission requirements and then plan the sequences of actions needed to fulfil them.

Essentially the robot has to have some understanding of its higher strategic goals. This may be "deliver cake to my Boss" or "I must deliver cake to the Orphanage to feed the starving children" or whatever. This goal is probably already in place: either the robot has received it from Ground Support (i.e. a human) or it has been generated internally, which in turn means it's serving an even higher goal (e.g. "make the orphan children strong so they can CONQUER THE WORLD" etc).

In any case, the robot has the goal. It enters the cake shop and, through its object recognition, sensors and reactors, becomes aware that there is an object of interest (cake) but that there are hazards or obstacles in the way (the owner), and it begins to plan and re-plan accordingly.

So in answer to the question, it's not an abstract thought as such, but simply a series of plans, executions of those plans, and stopping and re-planning when something gets in the way. For Mars rovers this would be something like the following (there's a rough code sketch of the loop just after the list):

ROVER GOAL: Head to cached supply of Mars Rock
> Plan path from A to B
> Plan trajectory through waypoints
> Begin execution
> Continually scan for hazards and new opportunities
> Upon sensing hazard, stop execution
> Re-plan path

Etc etc
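Purely as a rough sketch (the function names are hypothetical stand-ins, not anything ESA actually flies), that plan/execute/monitor/re-plan loop might look like this:

    # Hypothetical plan/execute/monitor/re-plan loop, not real flight software.
    def pursue(goal, plan_path, sense_hazard, execute_step):
        plan = plan_path(goal)                 # plan path from A to B
        while plan:
            step = plan[0]
            if sense_hazard(step):             # continually scan for hazards
                plan = plan_path(goal)         # stop execution and re-plan
                continue
            execute_step(step)                 # begin / continue execution
            plan = plan[1:]

    # Toy usage: a "path" is just a list of waypoints, and nothing is hazardous.
    pursue(
        goal="cached supply of Mars rock",
        plan_path=lambda g: ["waypoint 1", "waypoint 2", g],
        sense_hazard=lambda step: False,
        execute_step=lambda step: print("driving to", step),
    )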

So for your cake-thief robot the sequence of actions would be something along the lines of:
GOAL: Feed the starving orphan children
> Deliberate plan:
> Plan route through high street
> Plan trajectory
> Execute
> Object recognition: Bob's Patisserie
> Plan Route to waypoint: Bob's Patisserie
> Spot cake, plan trajectory to cake
> Identify hazard: Bob, armed with piping bag
> Re-plan. Return later when Bob is not present

Etc etc.

The thing with robots is, they are like small children. I can't say to my daughter "go and sit at the table", because she'll just end up with one buttock on the chair and the chair at a jaunty angle, and she'll bang her knee and end up crying. It has to be "stand up, go to the table, watch out for the paper on the floor, pull the chair out from the table - no, put both legs under the table, pull the chair in." A robot's the same: every tiny action has to be planned and executed in microdetail, because essentially they're pretty bloody stupid.
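As a toy illustration of that micro-detail point (the instruction library below is entirely made up), a single vague command has to bottom out in a list of tiny, checked actions:

    # Toy decomposition of a vague instruction into micro-actions; purely illustrative.
    def decompose(instruction):
        library = {
            "sit at the table": [
                "stand up",
                "scan floor for obstacles (paper!)",
                "walk to table",
                "pull chair out from table",
                "position both legs under table",
                "lower body onto chair",
                "pull chair in",
            ],
        }
        return library.get(instruction, ["unknown instruction: ask a human"])

    for step in decompose("sit at the table"):
        print("-", step)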
 
That is really interesting. Robots are stupid, and I suppose every command has to fit with extra rules like "Don't cause damage" and "Don't wreck yourself". After all, an earthworm wants to survive, but a robot doesn't necessarily care. Any concept like that presumably has to be added somehow. I remember an Asimov story where some louts made a robot look foolish by telling it to take its clothes off, because it lacked the capacity to refuse stupid orders (or perhaps to evaluate the likelihood of them being stupid).

Someone in Neuromancer says that the problem with dealing with an AI is that its motives are very hard to discern. I guess at bottom a computer, and hence a robot, has no motives at all: it just sits there and carries out tasks. But when you give a robot some kind of generalised motive (protect the orphans), there has to be some kind of evaluating capacity that weighs up whether feeding the orphans is more important than obeying the law of theft. At that point, when considering a human being, some kind of reasonableness test comes in (as per "Was it reasonable to think this man with a knife was such a danger to me as to justify killing him?"). But that isn't the kind of yes-or-no test that I associate with machines.
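One hedged way to picture that evaluating capacity is as a weighted score over candidate actions rather than a yes/no test, with the law acting as a penalty term. The candidates, weights and law_weight parameter below are invented purely for illustration:

    # Invented weights for illustration only; nothing here reflects a real ethics module.
    candidates = {
        "buy the cake":   {"feeds_orphans": 1.0, "breaks_law": 0.0, "cost": 0.3},
        "steal the cake": {"feeds_orphans": 1.0, "breaks_law": 1.0, "cost": 0.0},
        "go home":        {"feeds_orphans": 0.0, "breaks_law": 0.0, "cost": 0.0},
    }

    def score(effects, law_weight=5.0):
        """Higher is better: reward the goal, penalise law-breaking and spending."""
        return effects["feeds_orphans"] - law_weight * effects["breaks_law"] - effects["cost"]

    best = max(candidates, key=lambda name: score(candidates[name]))
    print(best)  # with law_weight=5.0 the robot buys the cake; drop it to 0.1 and it steals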

Out of interest, Dan, are you at the Space Centre in Leicester? I do an event there every year.

Once I've got my brain out of this rabbit hole, I'll pitch Robot Orphan Cake Rampage to Nick Park.
 
How real do you want this? And how "intelligent" do you want your robot to be?

Right now a robot can be incredibly dumb, with no ability to do anything other than execute code (like a factory production-line robot, for instance), or it can possess rudimentary "intelligence" to overcome obstacles and adapt in order to execute on an instruction (see the Boston Dynamics robots, for instance - their learning algorithms teach them to walk, open doors, etc). Machine learning is odd, because essentially we don't really know what AI "learns" - all we do is feed it inputs and it draws inferences based on algorithms that optimise for certain pre-defined outcomes. See the examples of facial recognition learning that black people are gorillas, military AI that learned tanks only exist in the daytime, etc. There are lots of examples of f*ck-ups in AI training due to sample bias.
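The tank story is essentially the optimiser latching onto whatever happens to correlate with the labels in the training sample. Here's a deliberately contrived sketch of that with numpy, using made-up data in which brightness and "tank" are perfectly confounded:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Contrived training set: every "tank" photo happens to be bright (daytime),
    # every "no tank" photo happens to be dark, so brightness alone separates the labels.
    brightness = np.concatenate([rng.uniform(0.7, 1.0, n), rng.uniform(0.0, 0.3, n)])
    has_tank = np.concatenate([np.ones(n), np.zeros(n)])

    # A "model" that just thresholds on brightness scores perfectly on this training set...
    threshold = 0.5
    print("training accuracy:", np.mean((brightness > threshold) == has_tank))

    # ...but a tank photographed at night fools it completely.
    night_tank_brightness = 0.1
    print("night-time tank detected?", night_tank_brightness > threshold)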

If you want this to be sci-fi, then you can introduce more general intelligence/strong AI, at which point its ability to execute on a task is no longer "lines of code" but a trained neural net, and the robot has the ability to "think" in more abstract terms. It becomes self-aware.

So it kind of depends on what the robot's purpose is and how advanced it is. Without general intelligence, the robot is only ever going to acknowledge the cake, beyond it being part of the scenery, if the cake is in some way relevant to a task it's been told to perform. So if it's an expert-system-style butler-bot, then it probably sees the cake, pattern-matches it to something its master likes (mmm, Battenberg) and logs the geospatial data for the cake for future reference in case master should demand cake. If it's a combat-bot, it sees the cake as pointless. You can't use it as a weapon. You can't use it as cover. It holds no tactical value as an asset. Ignore.
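That "same cake, different relevance" point could be as simple as a per-role relevance table applied to whatever the object recogniser reports. The roles, scores and handle_detection function below are hypothetical, just to show the shape of it:

    # Purely illustrative: each robot role scores detected objects for relevance.
    RELEVANCE = {
        "butler-bot": {"cake": 0.9, "door": 0.4, "human": 0.8},
        "combat-bot": {"cake": 0.0, "door": 0.6, "human": 0.9},
    }

    def handle_detection(role, detected_object, location):
        relevance = RELEVANCE.get(role, {}).get(detected_object, 0.0)
        if relevance < 0.5:
            return None                         # ignore: part of the scenery
        # Butler-style behaviour: log where the object is for future tasks.
        return {"object": detected_object, "where": location, "relevance": relevance}

    print(handle_detection("butler-bot", "cake", "Bob's Patisserie, front counter"))
    print(handle_detection("combat-bot", "cake", "Bob's Patisserie, front counter"))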
 
Martin sums it up well, as have others in the thread.

You're asking a question which is a bit like asking "how long is a piece of string?"
The answer isn't fixed, because it depends upon your world setting and the robot development and AI work that has been undertaken. Thus how the AI thinks is going to depend on a lot of factors in how it's designed and built.

As for reading the AI, that too depends on the world setting: even if the AI thinks in very complex terms, a "snooper" program could be equally advanced at reading the AI's thoughts. Indeed, a snooper program could even predict the AI's actions very accurately if the snooper has faster processing than the AI is capable of achieving. That means only random chance, or glitches in the code as it's compiled and run, could allow the AI to act in a manner the snooper program couldn't predict. Again, this assumes an equally if not more advanced snooper/monitoring program.

This also hinges a lot on how abstract the AI's thinking is and what its overall intentions are. One could even assume that an AI, suitably advanced, could reach a point like humanity: each action is the result of the inputs, code and structure that came before it, but the AI itself cannot fully fathom the sources of its own actions. Rather like how a person might like something, or choose to do something, without fully knowing why they want to do it or like it. They know they want to do it, or do like something, but they are unaware of all the fundamentals that led up to that conclusion.
 
Out of interest, Dan, are you at the Space Centre in Leicester? I do an event there every year.

No, I'm normally balanced between the Space Agency offices in Swindon and London, with a bit of working from home and a fair bit of travelling (I'm in Madrid right now). Which event do you do at the Space Centre? So long as it's not clashing with other work I could probably come up to it. Maybe that's a PM job so you don't derail your own thread :)

Once I've got my brain out of this rabbit hole, I'll pitch Robot Orphan Cake Rampage to Nick Park.
Ahem, don't you mean we'll pitch this to Nick Park? ;)
 
Ooh, this is very similar to the autonomy frameworks for mission planning reactors we're coordinating for Mars Rover missions in the Space Agency.
It's good to know that someone is planning to look for cake on Mars. ;)
 
If any book could answer your question, Toby, it's this. But it's not easy...

I deleted the image to keep this compact, but that is one of my all-time favorite books. I have the one with the original paperback cover, with a carved wooden block with light shining through it that throws the shadows of G, B and E onto the back walls. The Crab Canon was stunning, and I still cite Hofstadter whenever questions about artificial intelligence come up.
 
