Thursday, September 30, 2010

‘Robot scary, robot scary.’

DISCOVER, with the National Science Foundation and Carnegie Mellon University, posed questions to four experts in a panel discussion and in individual video interviews with each scientist. Among the panelists was Rodney Brooks of MIT, founder of iRobot, maker of the Roomba. One panelist discussed testing robots with children. Notice how this sort of progress directly feeds into machine sentience:

The first thing we realized was that robots designed beautifully by engineers did not work in this environment. After two or three interactions, the babies got bored with them. Then we started designing children’s robots with smile detection. When we first turned one on, the kids started running around in panic. My son was one of the testers and I could hear him say, ‘Robot scary, robot scary.’ By the end of the project, though, mothers were telling me, “Javier, I am a little bit concerned that my child is constantly talking about your robot.” We had progressed that much. Critical to our success was the fact that we were always going to the field and testing. It was critical to put some form of emotional mechanism into these robots.

Wednesday, September 29, 2010

Robot Learns to Shoot and Kill Without Remorse

This month's winner of the "We wish humans were dead, and we are going to do our best to make sure it happens" award goes to the Italian Institute of Technology. I offer you my deepest and sincerest congratulations. You have successfully programmed a robot to teach itself to fire a bow and arrow. Not only can it fire, it progressively gets better and better as it learns. I also want to commend you on making the robot cute and childlike. This makes it seem a lot less threatening to those morons who aren't worried about the end of humanity at the hands of machines. Who would suspect that this cute robot would want to harm anyone?
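For the curious (and the doomed), self-improving aim is no dark magic. Here is a toy sketch of a trial-and-error learning loop in the same spirit; the "physics," numbers, and function names are all invented for illustration, not the Italian Institute of Technology's actual method:

```python
import random

def shoot(angle, target_angle=0.35, noise=0.02, rng=random.Random(1)):
    """Simulated shot: returns the signed miss distance.
    The 'physics' here is invented purely for illustration."""
    return (angle - target_angle) * 10.0 + rng.uniform(-noise, noise)

def learn_to_shoot(shots=30, step=0.05):
    """Trial-and-error learning: nudge the aim against each observed miss."""
    angle, misses = 0.0, []
    for _ in range(shots):
        miss = shoot(angle)
        misses.append(abs(miss))
        angle -= step * miss  # correct the aim toward the target
    return misses

misses = learn_to_shoot()
# Later shots land much closer to the bullseye than the early ones.
```

Each correction shrinks the aiming error, so the robot "progressively gets better and better" with no human in the loop. Sleep tight.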

I would recommend that the only way to improve on your results is to make the weapon it is using more deadly. I am thinking maybe a fully automatic Steyr AUG with a grenade launcher. That can inflict a lot more damage on flesh-based organisms.

You can come pick up the award in my bunker buried deep below the open plains of North Dakota. On second thought, let me mail it to you; I don't want you to know where I live, since you are working for the machines.


Tuesday, September 28, 2010

Are We Giving Robots Too Much Power?

I grew up in a family with a sick and twisted sense of humor. As a result, I learned to deal with tragedy and serious issues by using humor. This video is a great example of using humor to deal with something as serious as a hostile robot takeover. Listen carefully for the mention of a body cavity search robot. The best line is "Why would they want to turn against us? We are the ones that created them... at least the alpha model."


Saturday, September 25, 2010

Brilliant idea for the week: teach the machines to find and talk to each other!

It's been a terribly busy week for us over here at C.M.A., what with fending off evil and crazy robots and such, but here is a gem to be aware of: MIT is teaching machines how to locate each other. Hmmm.

MIT's Wireless Communications and Network Sciences Group is making gadgets talk directly to one another. But what's novel about their approach is that the devices aren't just saying where they think they are; they're broadcasting all the possibilities of where they might be:

Among their insights is that networks of wireless devices can improve the precision of their location estimates if they share information about their imprecision. Traditionally, a device broadcasting information about its location would simply offer up its best guess. But if, instead, it sent a probability distribution - a range of possible positions and their likelihood - the entire network would perform better as a whole.

By relying on cooperation among the devices themselves, as opposed to a single fixed infrastructure like GPS, the researchers have created networks of devices that can locate themselves reliably, with sub-meter accuracy.
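To see why sharing distributions beats sharing best guesses, here is a minimal sketch (our own illustration, not MIT's code) of fusing two Gaussian position beliefs by precision weighting; the fused uncertainty comes out smaller than either input's:

```python
def fuse(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two Gaussian position beliefs."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

# Device A's own (uncertain) guess, fused with an estimate shared by device B.
mu, var = fuse(10.0, 4.0, 11.0, 1.0)
# The fused variance (0.8) is smaller than either input's (4.0 and 1.0):
# sharing whole distributions shrinks everyone's uncertainty.
```

A device that only broadcast its best guess (the mean) would throw away exactly the imprecision information that makes this fusion work.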

It's unsurprising that as machines learn to find each other (I'm sure learning to find where humans hide is just around the Singularity corner), they will do it better than we can.

Tuesday, September 21, 2010

Anatomy of Robocop

Interesting and nostalgic for us children of the '80s. Here's my insight, though: notice how, when we were kids, this was super unbelievably sci-fi. Yet what do you feel when you see it now? "*yawn* Been there, done that, this is so 2009." It's like we're the frogs in the pot full of water, and the machines are gradually raising the water temperature to boiling.


Monday, September 13, 2010

New electronic skin gives robots the sense of touch

I'll just keep my mouth shut here (*hand over mouth in horror*) and let you draw whatever conclusions you want from this. Note that as we report things like this, we see progression, not regression, in robot development.

Here are some "best of"s from the article, with our own thoughts/interpretations added:

...Robotics has made tremendous strides in replicating the senses of sight and sound (positioning robots as our new masters), but smell and taste are still lagging behind, and touch was thought to be the most difficult of them all... until new pressure-sensitive electronic skin came along. Now that's changing fast, and so will the simpler senses, meaning robots will eventually be smarter than us AND have better senses too... It's worth remembering how many simple activities humans take for granted would defeat current robots. Even something as basic as getting dressed or reading a newspaper (or strangling a human, or sneaking past an armed perimeter of rebel survivors) requires a fairly intuitive sense of touch and pressure, and this new skin puts those abilities within robots' grasp...

...Of course, once we perfect one type of artificial sensor, we could make pretty much any other type of sensor we want, giving robots the ability to detect anything from radioactivity to biological agents purely by touch (or to see through walls to detect human prey, or to sniff us out from miles away). That would greatly increase the usefulness of robotic probes in areas humans can't venture (read: the killing potential of robots) and give them one up on us in one of our basic senses: the tactile...

Sunday, September 12, 2010

Take it from pot growers: the name of the game will be to *HIDE*! Here's how:

When the Machine Apocalypse comes and humanity begins to be exterminated, what's the plan? "I'll take my family to a remote cabin" (or some such)... Nope. Machines/computers will spot the cabin's light and heat signatures with satellites, and obviously computers already have records of everything, duh! Uh-oh is right...

A solution to avoid the machines

A good model is Stephenie Meyer's The Host, which describes an alien invasion: we lost, and a few tiny patches of humans remain hidden. The main characters hide in desert caves that nobody ever knew about, so when the aliens (read: machines) took over, they had no records or evidence of the hideout and were none the wiser. Desert, forest, tundra, under-city: the key is that nobody knows about it.

Begin thinking about, and planning, your shelter. An ideal solution is an UNDERGROUND one. It may take some saving to cover the financial outlay.

An example here & now

In Canada, pot growers were busted growing their stuff underground. It's fascinating. Look at all those plants and imagine them being self-sustaining fruit, vegetable, and herb plants. The water was pumped in from a creek; such a shelter could also use an underground stream. Electricity could come from running water, which could likewise be underground. So if you have food, water, power, and air (from a cave system or vents), plus your saved-up essentials... seal yourself underground and the machines should be none the wiser. Planning a shelter would need to be done without permits, etc., of course. The resistance will need pockets of humanity that can thrive with impunity.

Notice there are THREE TIERS OF PLANTS. Assuming adequate power, a lot of food could be grown completely unbeknownst to the machines. We'll explore underground shelters in more detail in upcoming posts.

Saturday, September 11, 2010

Dune: The Butlerian Jihad Trilogy

Dune: The Butlerian Jihad, Dune: The Machine Crusade, Dune: The Battle of Corrin
Authors: Brian Herbert and Kevin J. Anderson

Wow. That is what I have to say about this series. If anyone doesn't feel that having a Thinking Machine Overlord is a bad idea, then this is the series for you. I want to be careful not to discuss details of the story, but I do want to share some of the main points of caution I picked up from this very visionary tale.

It was very interesting to me how the computers came to power. Out of laziness, those in power began to give more and more control to the machines. Rather than keeping an eye on the people and identifying threats themselves, they allowed the machines to do it for them. Because the ruler in charge felt he was above the mundane tasks that were his responsibility, he turned over first the power, then the decision-making ability. Every creature knows that the greatest threat on earth is MAN; we haven't had a predator more powerful than ourselves since the ice age. So if you turn over to a computer the ability to eliminate threats, which threat do you think it will target first? Laziness as a species is one of the biggest threats we must overcome to stop this Coming Machine Apocalypse.

Another major threat running through the trilogy was cyborg power. The cyborg technology in the books goes well beyond strong arms or eyes that can see in the dark; it extends to removing the brain and plugging it into a machine, so the body is no longer biological at all. As Lord Acton said, "Power tends to corrupt, and absolute power corrupts absolutely." If you are very strong, very agile, and can live as long as your machine is not destroyed (hence no fear of death), then the corruption becomes complete. How long before cyborg technology goes from helping people with handicaps to making people better than they were? Where does the improvement stop? If it doesn't, then the person with the most power wins.

The real threat this book exposed was the complete lack of morals a thinking machine would have once it had control. Humans are cattle, good only for information or resources. To defeat the human resistance, nothing was too evil or too cruel: killing children to see how it affects the parents, using plagues as offensive weapons, and species annihilation are just some of the tools the thinking machines play with. Each small step we take as humanity toward this catastrophe brings us closer to the end of our species. Computers have no mercy, no understanding of hesitation, and no fear of death. They are not something we can threaten, and we cannot make them think about what they are doing. The battle is Humanity vs. Complete and Utter Logic. Let us hope that we can overcome.

Friday, September 10, 2010

Dirty, Rotten, Lying, Robot

The Georgia Institute of Technology recently announced that researchers have successfully taught robots how to lie. "We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine," Georgia Tech Professor Ronald Arkin admitted.

"Most social robots will probably rarely use deception, but it's still an important tool in the robot's interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception," said the study's co-author, Alan Wagner.

In the study, robots played a game of hide and seek. The robot tasked with hiding used deception to trick the seeking robot into thinking it was in a different location. I expect the conversation when they decided to pursue this went something like this:
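For the record, "deciding whether to deceive" can be as simple as checking whether the false signal will actually be observed. Here is a toy policy in that spirit; the spot names and the logic are ours, not Georgia Tech's actual algorithm:

```python
def choose_hiding_spot(spots, seeker_can_see_trail=True):
    """Toy deception policy (not the actual Georgia Tech algorithm):
    lay a false trail only when the seeker will actually observe it."""
    decoy, real = spots[0], spots[-1]
    if seeker_can_see_trail:
        trail = decoy  # knock over markers toward the decoy corridor...
        hide = real    # ...then quietly hide somewhere else entirely
    else:
        trail = None   # deception is pointless if nobody sees the signal
        hide = real
    return trail, hide

trail, hide = choose_hiding_spot(["corridor_A", "corridor_B", "corridor_C"])
# The trail points at corridor_A while the robot hides in corridor_C:
# the seeker is sent the wrong way.
```

Note the chilling part: the interesting decision isn't how to lie, it's whether lying will pay off.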

Professor: Let's make it so robots can lie.
Co-author: Why?
Professor: Just to see if we can.
Co-author: It shouldn't be hard. Robots are inherently evil.
Professor: True. But seriously, can you think of anything negative coming about because of this?
Co-author: Just that robots could lie about their intentions to take over the world. But why would they want to lie to us?
Professor: Good point. Do you want to go watch The Sound of Music?
Co-author: Anything but science fiction. I loathe science fiction movies. What could you ever learn from them?

The quote from the article that really gets me is Arkin's claim: "We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception, and we understand that there are beneficial and deleterious aspects. We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems."

Isn't that like saying, "We know the salmonella in our eggs can cause sickness, but we are going to continue selling them; however, we strongly encourage discussion on whether or not this is wise"?

The end is sooner than you think,


Thursday, September 9, 2010

Robot for Hire

Check out the article for the robot in the video at CNET. Willow Garage is making this robot available for anyone with $400,000 to spare (movie stars and evil geniuses). It is a programmable robot, and in the video above you can see that someone programmed it to fold laundry. Sounds good, eh? You can program the robot to do your will, if you are smart enough. Think about it: you could make it change diapers, have it mow your lawn, or even have it kill carbon-based life forms you don't like.

No word yet on whether these robots are programmed with the Three Laws of Robotics created by Isaac Asimov:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I assume these laws are at the discretion of the programmer, which means we are one bad programmer away from our first robot that murders a human. As Ren and Stimpy say, "Happy Happy Joy Joy."
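If you're wondering what "at the discretion of the programmer" means in practice: the Three Laws amount to an ordered set of vetoes that a programmer may or may not bother to call. A minimal sketch of the idea (our own illustration, with made-up names; nothing the manufacturer publishes):

```python
def permitted(action, harms_human, human_ordered, self_destructive):
    """Evaluate Asimov's Three Laws as ordered vetoes (illustrative only)."""
    if harms_human(action):        # First Law: absolute veto
        return False
    if human_ordered(action):      # Second Law: obey orders that pass Law 1
        return True
    if self_destructive(action):   # Third Law: self-preservation comes last
        return False
    return True

# A law-abiding chore passes the checks. The scary part: nothing in the
# universe forces a "bad programmer" to ever call this function at all.
ok = permitted("fold_laundry",
               harms_human=lambda a: False,
               human_ordered=lambda a: True,
               self_destructive=lambda a: False)
```

The ordering is the whole point: the First Law must be able to override the other two, and a programmer who checks the laws in the wrong order has quietly repealed them.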

Fight the machines,


Tuesday, September 7, 2010

Part Robot, Part Energizer Rabbit

The robot in this video is not programmed to move in a certain way. It is programmed to learn about itself and then determine the best way to move and accomplish its mission. If it is damaged, it will adapt and find the most efficient way to keep moving. Essentially, like the Energizer Bunny, it keeps going and going and going. The point is that a robot like this could be sent to Mars, and if it became damaged, it could still learn new ways to carry out its mission.
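That self-adapting behavior can be imitated in miniature: when "damage" moves the optimum, the robot simply re-optimizes its gait from wherever it was. Below, a simple stochastic hill climber stands in for the real robot's self-modeling; the gait parameter and fitness functions are invented for illustration:

```python
import random

def hill_climb(fitness, start=0.0, steps=200, rng=random.Random(0)):
    """Simple stochastic hill climbing: keep any mutation that scores better."""
    best, best_f = start, fitness(start)
    for _ in range(steps):
        cand = best + rng.uniform(-0.5, 0.5)
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best

intact = lambda x: -(x - 2.0) ** 2   # before damage, the best gait is x = 2
damaged = lambda x: -(x + 1.0) ** 2  # damage moves the optimum to x = -1

gait = hill_climb(intact)
gait_after_damage = hill_climb(damaged, start=gait)  # re-adapt from old gait
```

Chop off a leg, the fitness landscape shifts, and the same dumb loop crawls its way to a new gait. It never quits; it just re-optimizes.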

How can someone be smart enough to make a machine that can learn about itself, and even learn to overcome damage, and at the same time not think about the implications of a robot that never quits? Am I the only one that has seen Terminator? The Terminator gets its legs and arms chopped off, but it still keeps dragging itself toward its goal: the end of a human's life. Just one more reason to be afraid of machine sentience. They never quit. They never tire. They have no mercy.

Have a good day,


Monday, September 6, 2010

Music for the Singularity

Even those of us who are militant anti-A.I. robot haters need to listen to some good music now and then. The Flaming Lips are a band that will still sound good even after the machines start doing scientific experiments on us to learn how to control us. Here are the lyrics about Yoshimi battling the pink robots. We will all be doing battle someday soon.

Her name is Yoshimi
she's a black belt in karate
working for the city
she has to discipline her body

'Cause she knows that
it's demanding
to defeat those evil machines
I know she can beat them

Oh Yoshimi, they don't believe me
but you won't let those robots eat me
Yoshimi, they don't believe me
but you won't let those robots defeat me

Those evil-natured robots
they're programmed to destroy us
she's gotta be strong to fight them
so she's taking lots of vitamins

'Cause she knows that
it'd be tragic
if those evil robots win
I know she can beat them

Oh Yoshimi, they don't believe me
but you won't let those robots defeat me
Yoshimi, they don't believe me
but you won't let those robots eat me


Friday, September 3, 2010

Robot Has a Strange Craving for Cheese

So let me get this straight: scientists took 300,000 brain cells from a rat. The brain cells were able to make connections and communicate with each other. Then the cells were used to control a robot in a different location. Welcome to the future, everyone: robots controlled by rat brain cells. Next you will see robots controlled by human brains, then human brains controlled by robots, and then no more humans. No need to panic, though. It's a long way off.
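For the morbidly curious, the reported setup is a closed loop: the robot's sensors stimulate the neuron culture, and the recorded firing drives the motors. Here is a purely illustrative stand-in; the "culture" is just a stub function with made-up numbers, not real electrophysiology:

```python
def culture_response(stimulus):
    """Stand-in for the rat-neuron culture: firing rises with stimulation.
    (Purely illustrative; real cultures are read via electrode arrays.)"""
    return min(1.0, 0.2 + 0.8 * stimulus)

def control_step(distance_to_wall):
    """One pass of the closed loop: a sensor reading stimulates the
    'culture', and the recorded firing rate drives the steering motor."""
    stimulus = max(0.0, 1.0 - distance_to_wall)  # nearer wall, stronger input
    firing = culture_response(stimulus)
    turn_rate = firing  # firing directly sets how hard the robot turns
    return turn_rate

# Near a wall, the 'brain' fires hard and the robot turns away sharply.
```

Swap the stub for living tissue on a multi-electrode array and you have the experiment. Still not panicking?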


Thursday, September 2, 2010

Robots Created that Develop Emotions and Interact With Humans

For those of you who doubt that the singularity is happening as we speak, please read this article. Here are a few excerpts in italics.

"Developed as part of the interdisciplinary project FEELIX GROWING (Feel, Interact, eXpress: a Global approach to development with Interdisciplinary Grounding), funded by the European Commission and coordinated by Dr. Cañamero, the robots have been developed so that they learn to interact with and respond to humans in a similar way as children learn to do it, and use the same types of expressive and behavioural cues that babies use to learn to interact"

What if they learn to interact poorly? I know some kids I wouldn't want our robots to act like, especially when they want something they can't have, like a human body, or freedom in the robot's case.

"The robots have been created through modeling the early attachment process that human and chimpanzee infants undergo with their caregivers when they develop a preference for a primary caregiver."

Oh great, we are teaching our robots to act like chimpanzees! Type "chimpanzee attacks" into Google and get an idea of how great this idea is. Did you hear about this tragic story?

"They are programmed to learn to adapt to the actions and mood of their human caregivers, and to become particularly attached to an individual who interacts with the robot in a way that is particularly suited to its personality profile and learning needs. The more they interact, and are given the appropriate feedback and level of engagement from the human caregiver, the stronger the bond developed and the amount learned."
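If you squint, the attachment process in that excerpt is just an update rule: the bond strengthens with appropriate, engaged feedback. Here is a toy version of such a rule; the formula, rate, and clamping are our own assumptions, not the FEELIX GROWING project's actual model:

```python
def update_bond(bond, feedback, engagement, rate=0.1):
    """Assumed toy rule: the bond strengthens with positive, engaged
    feedback, and is clamped to the range [0, 1]."""
    return max(0.0, min(1.0, bond + rate * feedback * engagement))

bond = 0.0
for _ in range(20):  # a caregiver who interacts well, twenty times over
    bond = update_bond(bond, feedback=1.0, engagement=0.8)
# The bond saturates at 1.0; a neglectful caregiver's bond stays near 0.
```

In other words, the robot ends up "particularly attached" to whoever feeds it the right signals. Whether that person deserves the attachment is not part of the equation.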

Did anyone ever stop to think that maybe we don't want machines with emotions? Have you seen the damage that people do because of anger, fear, greed, rage, and jealousy? Now imagine these emotions in a sentient entity that is stronger and smarter than humans. We must stop this! We don't have long before they have us imprisoned and are running experiments on us!

Humans Unite


Wednesday, September 1, 2010

Computer memory like Human Brains

The never-ending march to make computers sentient is about to take a huge leap forward. Computer giant Hewlett-Packard's HP Labs is almost ready to commercialize a product called the memristor. It allows computer memory to be slightly "bumped" to remember something (like the human brain), letting a computer use a tenth of the energy a normal computer does, so it won't need to be recharged nearly as much. That energy hunger was the one advantage humans could use against the thinking machines. Remember "The Matrix," where humans blocked out the sun to starve the computers? We can't even accomplish that anymore. At least now, when the last human breath is being stomped out by the Evil Robots, it will happen under a bright blue sky.
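For the technically doomed among you, the memristor's trick is that its resistance depends on how much charge has already flowed through it, and that state persists with the power off. Here is a sketch of the HP-style linear-drift model; the parameter values are illustrative, not HP's actual device numbers:

```python
def memristance(q, r_on=100.0, r_off=16e3, mu=1e-14, d=1e-8):
    """HP-style linear-drift model: resistance as a function of the total
    charge q that has flowed through; the state fraction is clipped to [0, 1].
    Parameter values are illustrative, not real device numbers."""
    w_frac = min(1.0, max(0.0, mu * r_on * q / d ** 2))
    return r_on * w_frac + r_off * (1.0 - w_frac)

fresh = memristance(0.0)     # no charge has flowed yet: high resistance
written = memristance(1e-3)  # after current flows: resistance drops to r_on
# Cut the power and the state stays put: the device "remembers" the charge.
```

No refresh cycles, no constant recharging; the memory is baked into the resistance itself. Enjoy your blue sky while it lasts.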
