One argument against a real-life Terminator-type scenario ever occurring is that computers, although capable of executing a number of spectacular tasks, cannot adapt on the fly. In other words, they cannot carry on with the assignments they are given when the original plan is interrupted or deviates. That just means more time spent reprogramming the machine.
The image of robots as hulking machines taking their place in a factory, making up an assembly line with other like machines, is a common one shared by most (like myself). In these menial jobs, we can imagine robots that are programmed to complete the same tasks over and over again with the precision of a well-oiled machine.
As would be expected, in the evolving world of robotics this is not the case at all. Many robots are capable of a variety of tasks, like folding laundry and retrieving items, for which they are programmed.
Now, however, scientists are expanding the scope of robotic capabilities.
Scientists in Japan are currently developing robots that can learn from their surroundings, bringing them a step closer to the sentient beings science fiction has warned us about. SOINN, or Self-Organizing Incremental Neural Network, is a new method of programming that allows a robot to adapt to its changing environment. By recognizing how the “plan” has changed, the robot is able to apply its existing skills in whatever way best lets it carry on with the task at hand. This allows the robot to “learn from experience,” to quote msn.com.
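To give a flavor of what “self-organizing” and “incremental” mean here, the toy sketch below grows its own set of learned prototypes: a familiar input refines an existing node, while a novel one becomes a new node. This is a heavily simplified illustration of the general idea, not Hasegawa's actual SOINN implementation (the real network also tracks edges between nodes and adaptive per-node thresholds); the class name and parameters are made up for this example.

```python
import math

class IncrementalLearner:
    """Toy sketch of an incremental, self-organizing learner.

    A novel input becomes a new node; a familiar input nudges
    the nearest existing node toward it. (Heavily simplified --
    illustrative only, not the real SOINN algorithm.)
    """

    def __init__(self, novelty_threshold=1.0, learning_rate=0.1):
        self.nodes = []                      # learned prototypes
        self.threshold = novelty_threshold   # how far counts as "novel"
        self.rate = learning_rate

    def observe(self, point):
        if not self.nodes:
            self.nodes.append(list(point))
            return "new"
        # Find the nearest stored prototype.
        nearest = min(self.nodes, key=lambda n: math.dist(n, point))
        if math.dist(nearest, point) > self.threshold:
            # Unfamiliar input: grow the network instead of failing.
            self.nodes.append(list(point))
            return "new"
        # Familiar input: refine the existing prototype.
        for i in range(len(nearest)):
            nearest[i] += self.rate * (point[i] - nearest[i])
        return "updated"

learner = IncrementalLearner()
print(learner.observe((0.0, 0.0)))   # first input -> "new"
print(learner.observe((0.1, 0.0)))   # close to a known node -> "updated"
print(learner.observe((5.0, 5.0)))   # far from anything known -> "new"
print(len(learner.nodes))            # 2 prototypes learned
```

The key design point is that nothing is fixed in advance: the network decides for itself when its current knowledge does not cover a new situation, which is what lets a SOINN-style robot cope with changes no programmer anticipated.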
Osamu Hasegawa, an associate professor at the Tokyo Institute of Technology, is developing this new technology, which allows robots to solve problems and make decisions without constant human oversight. In the videos posted online to illustrate the robot's capabilities, we can see these problem-solving skills put to the test and made apparent in the robot's actions and its ways of retrieving information.
In the example seen in the videos, the robot has three items placed in front of it: a bottle of “water” (beads are used as a substitute), an empty cup, and a single “ice cube.” The robot is first given the command to fill the empty cup with the water from the bottle. Once the robot successfully completes this task, it is told that ice water would be preferred, presenting the robot with a change from the original task.
This creates a conflict. The robot, by connecting to the internet and searching online, learns that to cool water it should add an ice cube, and it just so happens that the robot has all the necessary materials. It now has to figure out how to do this when both hands are full: one with the bottle, and one with the cup.
After a moment of assessing the situation, the robot places the bottle of water back on the table in order to free up a hand to grab the ice cube and put it in the now-full cup of water.
Voila! Cold water.
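The reasoning in that demo can be captured in a few lines: with both hands full, the robot must first free a hand before it can act on the new goal. The sketch below is purely illustrative, assuming a made-up function and step names, and is not the robot's actual planner.

```python
def plan_ice_in_cup(holding):
    """Toy re-planner for the ice-water demo: if both hands are
    full, put one item down before grabbing the ice cube.
    (Illustrative only -- not the robot's real planning code.)"""
    steps = []
    if len(holding) >= 2:              # no free hand available
        freed = holding.pop()          # set one item down
        steps.append(f"place {freed} on table")
    steps.append("grab ice cube")
    steps.append("drop ice cube into cup")
    return steps

print(plan_ice_in_cup(["cup", "bottle"]))
# ['place bottle on table', 'grab ice cube', 'drop ice cube into cup']
```

The interesting part is not the plan itself but that the robot derived the "free a hand" step on its own, rather than having it scripted in advance.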
And this is not the only news in the world of robotics. This information comes just days after another robot, the aptly named Bakerbot, whipped up a batch of cookies on the spot.
Scientists can now save the time they once spent constantly retuning and reprogramming the machines.
With such advances in robot intelligence, the door to further breakthroughs in this technology is opening wider. Who knows what robots will be able to accomplish, and how soon? It is clear that technology is pushing them down a new road, one where they will be able to think quickly and act on their own.
Photo Credit: designboom.com/weblog/cat/16/view/15978/soinn-robot-mimics-human-reasoning.html