graewolfe wrote: Even with minimalist AIs for, say, farming equipment, without any background rules for them you run the risk of them deciding they don't want to work anymore and going feral. They would have about animal-level intelligence... but if they actually enjoyed farming to some degree, then there would be no risk of overworking them. Once feral, the "don't harm other sentient life" part would kick in, so you wouldn't have rogue worker bots waxing humans for fertilizer or whatever.
Perhaps I am taking too many things for granted here. I have what you could call a technical background, and I am quite familiar with coding and the problems that come with it. So it may be that I have been too vague in my points...
But what you just said contradicts itself. If the robot AI is very simple, fit only for farming, then it would NOT just "decide" that it does not want to work. That would require a conscious "decision", which would automatically demand that the AI be complex enough to conclude that it exists beyond the parameters it has been given, and that some alternative would be better. In other words, a certain self-awareness beyond the current task, plus concepts of worse/better and perhaps even freedom. Those are extremely complex deductions for a simple machine to make. So even giving the AI "animal-level" intelligence would be way over the top. If they're agrarian 'bots, they wouldn't need any sentience at all, just the ability to stick to planting seeds and harvesting food. Even our cellphones have enough power to run proper algorithms for THAT (but there are other things that are beyond us, I'm afraid).
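To illustrate the point: here's a hypothetical sketch of what an agrarian 'bot's entire "mind" could look like. It's a plain state machine (the states and field layout are made up for the example) with no learning and no self-model, so there is literally nothing in it that could "decide" to stop working.

```python
# Hypothetical sketch: a fixed-function "agrarian bot" controller.
# A plain state machine -- no learning, no self-model, so there is
# nothing in it that could "decide" anything beyond its parameters.

def farm_bot_step(state, field):
    """Advance the bot one tick; `state` and `field` are illustrative."""
    if field.get("unplanted", 0) > 0:
        field["unplanted"] -= 1
        field["planted"] = field.get("planted", 0) + 1
        return "planting"
    if field.get("ripe", 0) > 0:
        field["ripe"] -= 1
        field["harvested"] = field.get("harvested", 0) + 1
        return "harvesting"
    return "idle"  # nothing to do; no notion of "preferring" anything else

field = {"unplanted": 2, "ripe": 1}
state = "planting"
for _ in range(5):
    state = farm_bot_step(state, field)
```

After five ticks it has planted two seeds, harvested one crop, and settled into "idle" — and "idle" is all it will ever do next, because there is no mechanism for wanting otherwise.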
Anyway, an AI that could make such an intuitive leap would essentially require a neural-net brain: a software/hardware package that simulates how real brains work, and thus essentially becomes a self-learning machine. This is one of the few cases I can think of where the computer could possibly become a sentient entity. Now, this is obviously (from OUR point of view, since we all know what the Terminator did) a Bad Idea(tm) to implement in anything at all.
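The difference between the two kinds of machine is easy to show. Here's a toy self-learning machine — a single perceptron, about the simplest neural-net unit there is (the training setup is my own example, not anything from a real robot). Unlike the farm-loop above, its behaviour is rewritten by experience rather than by the programmer:

```python
# Hypothetical sketch: the simplest "self-learning" machine, a perceptron.
# Its weights are adjusted by the data it sees -- the behaviour is learned,
# not hand-coded.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = target - pred
            # Learning rule: nudge weights toward whatever reduces the error.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn logical AND from examples instead of explicit rules:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

One neuron learning AND is obviously a very long way from sentience, but it's the same basic principle scaled down: nobody programmed the answer in, the machine found it.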
But there are some complications. Some of the parts required would need massive number-crunching ability. The most complex thing about the robots would be their ability to visualize the world around them. It is WAY beyond our current technology. Our best computer scientists have only a hazy inkling of how it could THEORETICALLY be done. We have had some path-finding and facial-recognition algorithms, some of which even work (occasionally), although not in real-life settings. But a complete package that would be able to see the world as we do, and then react to that world in an appropriate manner, is disgustingly complex.
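And the path-finding half really is the easy part — a toy breadth-first search finds the shortest route on a grid in a few lines (this grid is a made-up example, of course). The part I'm calling disgustingly complex is everything NOT shown here: turning raw camera input into that tidy grid in the first place.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a grid of strings; '#' marks an obstacle.
    Returns the shortest path length in steps, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

grid = ["....",
        ".##.",
        "...."]
```

`shortest_path(grid, (0, 0), (2, 3))` happily routes around the obstacle — given a perfect map. No robot in the real world gets handed a perfect map.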
However, this is a "fixed" feature. It does not require a neural-net brain, and it does not need to improve itself. It is an independent thread within the robot's OS. That alone lowers the bar for the complexity needed from the computer and the AI. All it needs is a damn fast processor and some real fucking fancy coding, which I don't think is a problem in a sci-fi world.
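Structurally, a fixed feature running as its own thread is trivial to set up — here's a rough sketch (the frame format and output are placeholders; the actual scene analysis is the hard part the sketch waves away):

```python
import queue
import threading

def vision_worker(frames, results):
    """Fixed vision routine as an independent worker thread.
    It never modifies itself -- it just consumes frames, emits results."""
    while True:
        frame = frames.get()
        if frame is None:  # shutdown signal
            break
        # Stand-in for the "disgustingly complex" scene analysis:
        results.put(f"objects in {frame}: ...")

frames, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=vision_worker, args=(frames, results))
worker.start()
frames.put("frame-001")
frames.put(None)  # tell the worker to shut down
worker.join()
```

The plumbing is a solved problem; the body of the loop is the part that needs the damn fast processor and the fancy coding.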
P.S:
The passing remark about animal intelligence (sorry about the pun) brought this to mind: have you noticed that recent research is starting to show that, in the end, we humans are not a whit more intelligent than the animals? Our behavior patterns (sociologically, and as individuals) are disturbingly similar to those of all the dumb animals we've been calling lower life-forms. Even down to such details as how married couples interact.
So are we dumber than we think?
Or the animals smarter than we thought?
Think about it.