Sunday, October 20, 2013

Robot breaks ankle!

For years we have been promised convenient technology in our homes. We're now flooded with so-called "smart" devices that include cell phones, cars, refrigerators and even watches. While the dictionary defines "smart" as showing intelligence, none of these devices is self-aware or truly smart. The label is added, perhaps, to imply that the buyer is smart for choosing the device, and so shows intelligence in the choice.

Enter the robot. These human-like appliances do have limited intelligence and can learn and adapt in much the way human children do. While we are still years, maybe even decades, away from the day when they will be common household appliances, a recent newspaper article about robots caught my eye:

Bad break as Atlas comes robo-cropper

Things break. Complicated technology fails. So what happens when we come to depend on this technology to perform tasks, simple or even complicated, that, because of our reliance on it, we are unable to do by ourselves? We've heard the expression "smart house," where all functions, including locking and unlocking doors and windows, are computer controlled. What happens if the computer fails and locks all windows and doors during a fire, trapping all residents IN THE HOUSE?

I know, I know. There will be built-in safeguards. This is not a new idea. Isaac Asimov, one of the most visionary science fiction writers of the 20th century, explored robotics in many of his stories and formulated the Three Laws of Robotics. Originally introduced in his short story "Runaround" in 1942, these rules were explored in more detail in his collection "I, Robot," which was recently made into a popular Will Smith movie. The Three Laws are:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Most science fiction writers have respected these "rules," though stories are often made more dramatic by illustrating the consequences of violating them.

But real life is very different. We know from the current use of military drones, employed mostly by the US in surgical strikes to kill enemies or destroy military targets, that flying robots with lethal weapons are being used without any rules at all. Are we now living in a world where we will have two classes of robots? One class, a consumer class, would have the three rules burned into its operating system. The other class, let's call it the military class, would have no rules or restrictions and could be used in whatever way that establishment sees fit, with little or no oversight.

And what happens when one of these devices breaks its ankle? Would a consumer have to purchase a new, replacement robot? More and more of our technology is perceived as disposable and unrepairable. This is true, for example, of almost every "smart" phone today. Would we be obliged to replace a robot that breaks its ankle, or a finger?

Is it possible that a "broken" consumer robot might "forget" the three rules and start causing havoc? Could the programming be hacked to cancel the three rules?

The future of robots is not clear. We need to consider these consequences before we start mass production and end up with monsters we can no longer control.

1 comment:

Rondyn Musings said...

My thoughts about broken robots resurfaced recently when I upgraded from an iPhone 4 to an iPhone 5. Will we treat robots the same way? I mean, there was nothing wrong with my iPhone 4, and I still plan to keep it in case my iPhone 5 is lost or breaks - would we do that with robots too? Where would we put them all?

So iRobot 4 gets replaced by iRobot 5, which has faster processors, is more responsive, and has a larger vocabulary and memory.

Voice recognition is already coming into its own with examples like Apple's Siri, so how about an iRobot that can speak multiple languages? Imagine a travel companion who can order the exact meal you want, or bargain for the best price on that amazing hand-crafted carving you found in a foreign city's flea market.

Or perhaps, flying to your destination, the pilot has a heart attack and the PA announcement asks if there are any pilots on board. In 10 seconds your iRobot downloads a complete manual for the plane into its memory, along with a complete navigation program for the route you're taking. Or maybe all pilots WILL BE robots by then - and why not the flight attendants too?

Would we get to an era where iRobots would replace human companions? I mean, this would be a companion you could abuse to your heart's content, and it would still be as friendly and responsive as ever. Talk about man's best friend!

In an era where we trust everything to advanced technology that enhances, but also controls, our lives, what will be left for humans to do? Perhaps we will become obsolete.