2008 Tarsies Nominations

Once again it is awards season. And once again we are offering our own movie awards, the Tarsies, because we just don’t agree with any of the other awards out there! The Razzies this year have jumped on the “I hate Lohan” bandwagon, and although we are planning to get to I Know Who Killed Me eventually on this site, we aren’t about to make the focus of this award attacking whatever tabloid actress/singer we can this year. Though, surprisingly, the Razzies’ choices this year aren’t all terrible. These are real awards for real bad movies, not the kind of crap you expect the mainstream masses to embrace.

[Image: tarsie2008]

This year is the first year we will have an actual award picture as opposed to it being an abstract concept. So the Tarsie this year will be the Idol from Attack of the Beast Creatures! Why? Because we can! Enough with the jibber-jab, let’s get to the nominees!

Worst Movie

Next: Nic Cage and the movie by multiple choice.
Epic Movie: Recent Movies is a more accurate title, or more like Epic Failure!
I Know Who Killed Me: Too terrible not to nominate. Identical twins, clothes-wearing strippers, spontaneous limb loss, Lindsay Lohan.
Transformers: “My Bad”, robots peeing, secret agent underwear, indistinct robots, confusing action, and a script that made the Bratz movie look thoughtful.
Who’s Your Caddy? For those of you who thought Soul Plane couldn’t possibly be more of an embarrassment to black people…

We all KNEW robots could lie, people!

Seriously, do these scientists do any real work? Everyone with robots knows they lie their rusted joints off! From the Roombas that declare dirty rooms clean to the giant killbots that only kill 99% of the enemy and then go take an energy bar break, robots are nothing but a big pack of liars. This is due to their binary code nature, where it is embarrassing to have too many zeros, so all robots pretend they have all ones. Thus lying is hardcoded into them. Anyone with an iPod that declares it has a long battery life only to die twenty minutes later knows the score, and iPods don’t even have any AI coded into them! Yet they lie, like all mechanical things. Even lie detectors lie; they think it’s hilarious! But I developed a lie detector lie detector, so I’ve got them fooled. Unless that machine decides to lie to me as well…

Robots Evolve And Learn How to Lie
by Michael Abrams

Robots can evolve to communicate with each other, to help, and even to deceive each other, according to Dario Floreano of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology.

Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light.

To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source.

The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire.
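
The description above is essentially a tiny genetic algorithm: score each robot by how much charge it collected, keep the best foragers, recombine their genes with a little mutation, and download the offspring code back into the robots. Here is a minimal sketch of that loop; the population size, mutation rate, and stand-in fitness function are my own assumptions, since the real fitness came from actually running the robots in their habitat.

```python
import random

GENOME_SIZE = 30      # the article: each controller has just 30 "genes"
POPULATION = 20       # assumption; the article does not give a population size
GENERATIONS = 50      # the article reports results by the 50th generation
MUTATION_RATE = 0.05  # "a little mutation" -- the exact rate is an assumption


def random_genome():
    """A genome is just a list of numeric weights for the light/motor circuitry."""
    return [random.uniform(-1, 1) for _ in range(GENOME_SIZE)]


def fitness(genome):
    """Stand-in for the real score: net battery charge after foraging in the arena."""
    return sum(genome) + random.gauss(0, 0.1)


def crossover(a, b):
    """Recombine two parent genomes at a random cut point."""
    cut = random.randrange(1, GENOME_SIZE)
    return a[:cut] + b[cut:]


def mutate(genome):
    """Perturb a few genes at random (the "random change" mentioned in the article)."""
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]


population = [random_genome() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POPULATION // 4]               # the robots that got the biggest charge
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION)]        # "offspring" code goes back into the robots
```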

By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink.

Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”

[Image: MSNBC coverup]

Attempts to mind-control Mitt Romney unsuccessful

It is with a heavy heart that I, Dr. Mobusu, must report the failure of my latest experiment. I was attempting to use my mind control rays to inject some much needed fun into the presidential race, but instead, time and time again, I found that the mind control rays had no effect. Who knew that politicians have such small brains? Anyway, the last effort misfired so badly that the microphones at the MSNBC Republican Debate picked up the mind control ray, with MSNBC freaking out as a result. They quickly covered up all evidence of my interference, which is alarming in and of itself. Who else at MSNBC is attempting to mind control the 2008 presidential candidates? I must get to work and uncover the plot, not to save America but to eliminate any potential rivals! Then no one will stand before Mobusu! MuHAHAHAHAHAHAHAHAHAHAHA!!!

Blog about the whisper:

Bizarre Whisper During Romney Debate Answer

Whisper: Raise taxes
Romney: I am not going to raise taxes.
Friday, January 25, 2008

Tim Russert: Governor Romney, you are a big fan of Ronald Reagan.

Mitt Romney: Uh-um

Russert: Will you do for Social Security what Ronald Reagan did in 1983?

[whisper] raise taxes

Romney: I’m not going to raise taxes. What I’m going to do…

Russert: Ronald Reagan raised payroll tax and he also raised the retirement age and he saved Social Security…

What makes this whisper even more bizarre is the fact that MSNBC’s political blog had a post about it immediately afterward. You can see a screen grab of that post here. They have since removed this particular entry. Odd.

Photographic evidence of MSNBC post about the whisper before it mysteriously disappeared:
[Image: MSNBC coverup]

My Martian Bigfoot Project

Sometimes great ideas just come to you. Like the time I thought to make bats large enough to ride, which also had the added benefits of saving me gas and eating the excess overgrown insects around my castle lair. Another great idea was the Bigfoot Army. A legion of Sasquatches that will march under my banner and help my reign of chaos. We all know Bigfeet are impervious to weapons and have the ability to blend into the surrounding environment. That is not even mentioning their fog effect, where all cameras trying to photograph them either end up blurred or destroyed. To make the ultimate weapons, I have to grow them in a variety of harsh environments to make them bigger and badder than any other sasquatch army out there. I wouldn’t want any other mad scientist to try to jump the gun with his own Bigfoot army. NASA inadvertently snapped a photo of one of my troops (Bigfoot #1200543-A) and that has now spread across the internet. Luckily, some have chosen to not believe, which will still allow me to take the world by surprise!

[Image: martianbigfoot1]

Life on Mars? Amazing photos from Nasa probe reveal mystery figure on Red Planet
By BETH HALE

Perched on a rock, she could be waiting for a bus.

But if so, she could be in for an awfully long wait.

This photo of what looks remarkably like a female figure with her arm outstretched was taken on Mars.

Call me rocky: The intriguing image captured by Nasa on Mars

Perhaps unsurprisingly, it has set the Internet abuzz with claims that there really is life on the red planet.

Others may well feel that it is simply an optical illusion caused by a landscape.

Alien life: What seems to be a human-like Martian is pictured on Mars

The image was among many sent back to Earth by Spirit, Nasa’s Mars explorer vehicle which landed there four years ago.

Initial inspections revealed nothing unusual, but closer examination by amateur astronomers has thrown up this intriguing picture.

Painstaking: Space enthusiasts spent four years analysing this image, which on much closer inspection shows the ‘alien’

As one enthusiast put it on a website: “These pictures are amazing. I couldn’t believe my eyes when I saw what appears to be a naked alien running around on Mars.”

Another, dismissing cynicism about the somewhat stony look of the “alien”, wrote: “If you show me another rock in another photo from Mars, or Earth, that naturally looks like that, I will reconsider.”

Say cheese: The mystery image was captured by Nasa’s explorer vehicle, Spirit

A third contributor, who might have come closer to the majority view, said: “Ah, the human eye can be tricked so easily.”

[Image: bfmars2]

The coming of the REPTISAURUS!

On the Retromedia message board, Fred Olen Ray revealed that his son Chris Ray is directing his first feature film. All we know about it at this time is that it has a creature named Reptisaurus in it! And thanks to two preview shots, we get to see the Reptisaurus, which looks pretty keen.

Disaster Shot
[Image: disaster shot]

The Reptisaurus
[Image: Reptisaurus]

I’ll be keeping an eye out for this one, as it features a neat-looking monster.

Robot Modelled On Two Year-old Child — Takes First Steps

Robotics has been advancing on a massive scale; soon normal homes will have their own robots, not just eccentrics like me and my ilk. Soon movies like AI will become reality, as child robots begin to grow… And grow… And grow… Into GIANT MONSTER ROBOTS THAT WILL DESTROY THE WORLD!!!! Unless my demands are met…as usual! No one can stand before MOBUSU!

Please ignore all the improper British spelling in the article. Just because they invented English doesn’t mean they know how to use it!

BabyBot — Robot Modelled On Two Year-old Child — Takes First Steps

BabyBot, a robot modelled on the torso of a two year-old child, is helping researchers take the first, tottering steps towards understanding human perception, and could lead to the development of machines that can perceive and interact with their environment.

The researchers used BabyBot to test a model of the human sense of ‘presence’, a combination of senses like sight, hearing and touch. The work could have enormous applications in robotics, artificial intelligence (AI) and machine perception. The research is being funded under the European Commission’s FET (Future and Emerging Technologies) initiative of the IST programme, as part of the ADAPT project.

“Our sense of presence is essentially our consciousness,” says Giorgio Metta, Assistant Professor at the Laboratory for Integrated Advanced Robotics at Italy’s Genoa University and ADAPT project coordinator.

Imagine a glorious day lying on a beach, drinking a pina colada, or any powerful, pleasurable memory. A series of specific sensory inputs is essential to the memory.

In the human mind all these sensations combine powerfully to create the total experience. It profoundly influences our future expectations, and each time we go to a beach we add to the store of contexts, situations and conditions. It is the combination of all these inputs and their cumulative power that the ADAPT researchers sought to explore.

Engineering consciousness

“We took an engineering approach to the problem; it was really consciousness for engineers,” says Metta, “which means we first developed a model and then we sought to test this model by, in this case, developing a robot to conform to it.”

Modelling, or defining, consciousness remains one of the intractable problems of both science and philosophy. “The problem is duality: where does the brain end and the mind begin? The question is whether we need to consider them as two different aspects of reality,” says Metta.

Neuroscientists would tend to develop theories that fit the observed phenomena, but engineers take a practical approach. Their objective is to make it work.

Called the synthetic methodology, it is essentially a method of understanding by building. There are three steps: model aspects of a biological system; abstract general principles of intelligent behaviour from the model; apply these principles to the design of intelligent robots. Model, test, refine. And then repeat.

To that end, ADAPT first studied how the perception of self in the environment emerges during the early stages of human development. So developmental psychologists tested 6- to 18-month-old infants. “We could control a lot of the parameters to see how young children perceive and interact with the world around them. What they do when interacting with their mothers or strangers, what they see, the objects they interact with, for example,” says Metta.

From this work they developed a ‘process’ model of consciousness. This assumes that objects in the environment are not real physical objects as such; rather they are part of a process of perception.

The practical upshot is that, while other models describe consciousness as perception, cognition then action, the ADAPT model sees it as action, cognition then perception. And it’s how babies act, too.

When a baby sees an object, that is not the final perception of it. A young child will then try to reach the object. If the child fails, the object is too far away. This teaches the child perspective.

If the child does reach the object, he or she will try to grasp it, or taste it or shake it. These actions all teach the child about the object and govern its perception of it. It is a cumulative process rather than a single act.
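
To make that action-first loop concrete, here is a toy sketch, entirely hypothetical scaffolding of my own rather than anything from the ADAPT project: the agent acts on an object, records what happened, and only then builds up its “perception” of the object from the accumulated outcomes.

```python
import random
from dataclasses import dataclass, field


class ToyWorld:
    """Hypothetical stand-in for the environment the agent acts on."""

    def reach_for_object(self):
        # Sometimes the object is simply out of reach, as for the baby above.
        return {"touched": random.random() < 0.5}

    def grasp_object(self):
        # Grasping can fail too; each attempt still teaches something.
        return random.random() < 0.7


@dataclass
class ObjectModel:
    """What the agent believes about an object, built up from its own actions."""
    reachable: bool = False
    graspable: bool = False
    observations: list = field(default_factory=list)


def explore(world, model, steps=5):
    """Act first, reason about the outcome, and let perception be the cumulative result."""
    for _ in range(steps):
        outcome = world.reach_for_object()      # action
        model.observations.append(outcome)      # cognition: fold the outcome into the model
        if outcome["touched"]:
            model.reachable = True
            model.graspable = model.graspable or world.grasp_object()
    return model                                # perception: what the interactions revealed


print(explore(ToyWorld(), ObjectModel()))
```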

Our expectations also have enormous influence on our perception. For example, if you believe an empty pot is full, you will lift the pot very quickly. Your muscles unconsciously prepare for the expected resistance, and put more force than is required into lifting; everyday proof that our expectations govern our relationship with the environment.

Or at least that’s the model. “It’s not validated. It’s a starting point to understand the problem,” says Metta.

From model to BabyBot

The team used BabyBot to test it, providing a minimal set of instructions, just enough for BabyBot to act on the environment. For the senses, the team used sound, vision and touch, and focused on simple objects within the environment.

There were two experiments: one where BabyBot could touch an object and a second one where it could grasp the object. This is more difficult than it sounds. If you look at a scene, you unconsciously segment the scene into separate elements.

This is a highly developed skill, but by simply interacting with the environment the BabyBot did its engineering parents proud when it demonstrated that it could learn to successfully separate objects from the background.
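
One way to picture segmentation by interaction is frame differencing: compare the scene before and after the robot nudges something, and call whatever moved together “the object.” The sketch below is my own rough illustration of that idea, not the actual BabyBot vision code, and the grayscale frames and threshold are illustrative choices.

```python
import numpy as np


def segment_by_poking(frame_before, frame_after, threshold=0.1):
    """Return a boolean mask of pixels that changed when the robot nudged the scene.

    The intuition: an "object" is whatever moves together when you act on it,
    while the static background stays put.
    """
    diff = np.abs(frame_after.astype(float) - frame_before.astype(float))
    if diff.max() == 0:
        return np.zeros(diff.shape, dtype=bool)   # nothing moved, nothing to segment
    return diff > threshold * diff.max()


# Tiny synthetic example: a 10x10 scene where a 3x3 "object" shifts one pixel to the right.
before = np.zeros((10, 10))
before[2:5, 2:5] = 1.0
after = np.zeros((10, 10))
after[2:5, 3:6] = 1.0
mask = segment_by_poking(before, after)
print(mask.sum(), "pixels flagged as belonging to the object")
```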

Once the visual scene was segmented, the robot could start learning about specific properties of objects that are useful, for instance, for grasping them. Grasping opens a wider world to the robot, and to young infants too.

The work was successful, but it was a very early proof-of-principle for their approach. The sense of presence, or consciousness, is a huge problem and ADAPT did not seek to solve it in one project. They made a very promising start and many of the partners will take part in a new IST project, called ROBOTCUB.

In ROBOTCUB the engineers will refine their robot so that it can see, hear and touch its environment. Eventually it will be able to crawl, too.

“Ultimately, this work will have a huge range of applications, from virtual reality, robotics and AI, to psychology and the development of robots as tools for neuro-scientific research,” concludes Metta.