May 4, 2010

CHIMPANZEE TOOL USE IS NO MONKEY BUSINESS

Chimpanzees are our closest living relatives and are constantly challenging our notion of what makes humans unique; the cognitive divide between Homo sapiens and Pan troglodytes is becoming less and less distinct. Chimpanzees have self-awareness, can beat college students at memory tasks, and react to the deaths of their companions in ways that we would find uncannily familiar. Complex tool use may be the best example of chimpanzees’ advanced cognitive abilities; a review in last week’s issue of Science summarizes some of the most interesting instances of tool use among chimpanzees.
May 2, 2010

NATURAL LANGUAGE

Can someone explain this comment to me? It sounds almost like something I’d say, but in the mouth of someone else I have no idea what it means. “Humans are good with language,” says Boris Katz, lead research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, the principal group working with Nokia. “We want language to be a first-rate citizen” on cell phones, he says. |link|
March 15, 2010

FREEDOM

March 9, 2010

AR SCREENING

Been arguing about AR, archiving for posterity.

Augmented reality will be the most important technological and social change since the widespread adoption of the internet. The internet has been around for decades, but it wasn’t until computing hardware was ubiquitous that the technology was able to serve as a platform for radical social, political, and economic change. Similarly, AR technologies have been around for a while, but only now is the hardware ubiquitous. Everyone is carrying computers in their pockets, computers that are networked, equipped with cameras and GPS, and about as powerful as the PCs that fueled the first few years of the internet. My personal hope is that the Hollywood-backed push for 3D multimedia will promote the widespread use of “smart glasses”, connected by Bluetooth to the smartphone in your pocket, with a HUD for fully immersive, always-on AR. The technology is already there, or close enough for early adopters; it just all needs to get hooked up in the right way.

AR tattoos
Your face is a social business card
3D AR on the fly
From image to interactive 3D model in 5 minutes
Photosynth + AR
Arhrrrr
The future of advertising
Ali G QR
Google Translate
Hand from above
Projection on buildings
Pinball

The Ladder is a mixed-reality installation. The room is plain apart from a window, cut high into the wall, and a ladder. A tiny virtual character, who can only be seen through the computer screen, stands on the ladder and looks out of the window to the physical world. He keeps voicing concerns as to the nature of the world, tracing shapes with his hands and trying to describe the scene. The screen is on a rig so that you can pan it across the room, but the boy stays […]
February 26, 2010

ON CHALMERS

David Chalmers at Singularity Summit 2009 — Simulation and the Singularity.

First, an uncontroversial assumption: humans are machines. We are machines that create other machines, and as Chalmers points out, all that is necessary for an ‘intelligence explosion’ is that the machines we create have the ability to create still better machines. In the arguments below, let G be this self-amplifying feature, and let M1 be human machines. The following arguments unpack some further features of the Singularity argument that Chalmers doesn’t explore directly. I think, when made explicit and taken together, these show Chalmers’ approach to the singularity to be untenable, and his ethical worries to be unfounded.

The Obsolescence Argument:
(O1) Machine M1 builds machine M2 of greater G than M1.
(O2) Thus, M2 is capable of creating machine M3 of greater G than M2, leaving M1 “far behind”.
(O3) Thus, M1 is rendered obsolete.

A machine is rendered obsolete relative to a task if it can no longer meaningfully contribute to that task. Since the task under consideration here is “creating greater intelligence”, and since M2 can perform this task better than M1, M1 no longer has anything to contribute. Thus, M1 is ‘left behind’ in the task of creating greater G. The obsolescence argument is at the heart of the ethical worries surrounding the Singularity, and is explicit in Good’s quote. Worries that advanced machines will harm us or take over the world may be implications of this conclusion, but not necessarily so. However, obsolescence does seem to follow necessarily from an intelligence explosion, and this on its own may be cause for alarm.

The No Precedence Argument:
(NP1) M1 was not built by any prior machine M0. In other words, M1 is not itself the result of exploding G.
(NP2) Thus, when M1 builds […]
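To make the structure explicit, here is one way to regiment the Obsolescence Argument. This formalization is my own sketch, not Chalmers’ notation; the labels P1 and P2 mark the hidden premises that the definition of obsolescence above supplies.

```latex
% One way to regiment the Obsolescence Argument (my sketch, not Chalmers').
% Builds(x,y): machine x builds machine y.
% G(x): the self-amplifying capacity of machine x.
% Better(y,x,t): y performs task t better than x.
% Obs(x,t): x can no longer meaningfully contribute to task t.
% t* is the task of creating greater intelligence.
\begin{align*}
\text{(O1)} \quad & \mathit{Builds}(M_1, M_2) \land G(M_2) > G(M_1) \\
\text{(P1)} \quad & \forall x\,\forall y\;\bigl(G(y) > G(x) \rightarrow \mathit{Better}(y, x, t^{*})\bigr) \\
\text{(P2)} \quad & \forall x\,\forall y\;\bigl(\mathit{Better}(y, x, t^{*}) \rightarrow \mathit{Obs}(x, t^{*})\bigr) \\
\text{(O3)} \quad & \therefore\; \mathit{Obs}(M_1, t^{*})
\end{align*}
```

On this reading, the step doing the real work is P2, which is exactly the definition of obsolescence given above.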
February 1, 2010

WHY ROBOTICS IS LESS IMPORTANT THAN AI

Augmented (hyper)Reality: Domestic Robocop from Keiichi Matsuda on Vimeo.
January 25, 2010

I NEED TO BELIEVE

Hadn’t seen this yet, recording for posterity.
December 18, 2009

MURDER

December 6, 2009

HELLA DROP SHADOW

I just read an excellent article called The Dark Side of Digital Backchannels in Shared Physical Spaces. I have nothing to really add to the analysis, except to say that these are circles I wish I traveled in. I should move to Silicon Valley and become a freelance philosopher.

The article also references the Online Disinhibition Effect, which I had somehow forgotten to mention in my classes this semester, so I was grateful for the reminder. The Wikipedia entry for the online disinhibition effect lists six components:

You Don’t Know Me (Dissociative anonymity)
You Can’t See Me (Invisibility)
See You Later (Asynchronicity)
It’s All in My Head (Solipsistic Introjection)
It’s Just a Game (Dissociative Imagination)
We’re Equals (Minimizing Authority)

However, when online tools are used in shared physical spaces, they transform them into what Adriana de Souza e Silva and others call hybrid spaces. In such spaces, the first four components are not as relevant or applicable, so the hybrid disinhibition effect may only involve the last two, and I think the one that best explains the Twittermobbing at conferences is the last one.

Perhaps I am too deep into my research to see outside my own little world, but it strikes me that one might plausibly interpret Turing’s test as an endorsement of disinhibition in the last two senses: that we ought to treat our interactions with some machines as a game among equals, contrary to our normal biases against machines. In other words, although the online disinhibition effect is often discussed as a negative consequence of shared digital spaces (Wikipedia links its article to antisocial personality disorder, for instance), it is important to remember that sometimes disinhibition can be a virtue, especially when the norms that inhibit us are themselves negative and stifling.
December 5, 2009

BODY LANGUAGE

This is old news, but talk of Google’s Public DNS brought up this bit of data: Marissa ran an experiment where Google increased the number of search results per page to thirty. Traffic and revenue from Google searchers in the experimental group dropped by 20%. Ouch. Why? Why, when users had asked for this, did they seem to hate it? After a bit of looking, Marissa explained that they found an uncontrolled variable: the page with 10 results took 0.4 seconds to generate; the page with 30 results took 0.9 seconds. Half a second of delay caused a 20% drop in traffic. Half a second of delay killed user satisfaction.

Just a friendly reminder that computers are not pure syntax manipulators; they are embodied systems with complex non-formal behavior to which we are highly sensitive.
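As a toy illustration of the kind of uncontrolled variable at play here, a minimal timing sketch; `render_results` is a hypothetical stand-in for whatever generates the page, and the per-result cost is made up:

```python
import time

def render_results(n_results):
    # Hypothetical stand-in for page generation; cost grows with result count.
    time.sleep(0.015 * n_results)  # pretend each result adds ~15 ms
    return [f"result {i}" for i in range(n_results)]

def time_variant(n_results, trials=5):
    """Average wall-clock time to render a page with n_results results."""
    start = time.perf_counter()
    for _ in range(trials):
        render_results(n_results)
    return (time.perf_counter() - start) / trials

for n in (10, 30):
    print(f"{n} results: {time_variant(n):.2f} s per page")
```

The point of measuring wall-clock time rather than counting operations is exactly the post’s point: what users respond to is the physical behavior of the system, not its formal description.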
December 2, 2009

DUH

From Abstruse Goose. Thx, Cameron!
November 21, 2009

I DON’T WANT TO BE A ROBOT

July 26, 2010

ROBOT SURGEONS OPERATE AUTONOMOUSLY (ON TURKEYS)

Earlier this year we posted about how people are starting to specifically request robot-assisted surgeries, as opposed to having ‘just’ a human operate on them. Now, researchers at Duke are working on an entirely autonomous robot arm that can take biopsies on humans based on ultrasound data. It works pretty well, too, at least on the dead turkeys that they tried it out on: “In the latest series of experiments, the robot guided the plunger to eight different locations on the simulated prostate tissue in 93 percent of its attempts.”

I’m not entirely sure what happened in that other 7 percent… Most likely a slight miss with minimal consequences for the ex-turkey, as opposed to the robot going berserk and wildly stabbing everything within reach. More importantly, I’m curious as to what the average “miss” rate is for a human taking a biopsy based on an ultrasound. In any case, the idea here is that robots will eventually (soon, perhaps?) be able to at the very least take care of simple, routine medical procedures, which will save patients both time and money.

“We’re now testing the robot on a human mannequin seated at the examining table whose breast is constrained in a stiff bra cup,” Smith said. “The breast is composed of turkey breast tissue with an embedded grape to simulate a lesion.” This is making me hungry.

Vid, after the jump. Incidentally, turkeys are used because they have flesh similar to humans’, and they show up about the same on an ultrasound. Also, they’re tasty.

[ Duke ] VIA [ Daily Mail ]

Note: the robot in the picture, a DaVinci system, was not the robot being used for this study. And as far as I know, the turkey in the picture wasn’t involved either.
July 24, 2010

AN APP THAT TURNS CAMERAS INTO TIME MACHINES [APPS]

Perfectly matching snapshots-in-progress with a photo taken in the same spot a hundred years ago is an awesome idea. Turns out, it’s kind of hard. But Adobe and MIT have figured out a way to make it happen more accurately.
July 22, 2010

EDUDEMIC » HOW TWITTER HELPS RESEARCHERS VISUALIZE THE MOODS OF AMERICANS

July 22, 2010

SEMI-AUTONOMOUS VANS TRAVELING FROM ITALY TO CHINA

A pair of robotic vehicles from VisLab (the artificial vision and intelligent systems lab at the University of Parma) departed Parma, Italy on Tuesday for Shanghai, China. The 100% electric vans will travel 8,000 miles over three months, enduring (hopefully) all kinds of extremes, ranging from downtown Moscow to the Gobi Desert, which I’m pretty sure is full of dinosaurs or something.

Now, I’m calling these vans semi-autonomous because they’re autonomously following a vehicle that’s being driven by a human. Not that this is an easy task, of course… The vans have been kitted out with the same sort of obstacle detection and avoidance tech as the DARPA Grand and Urban Challenge vehicles. At this point, this technology is targeted mostly at goods transport, as opposed to letting you take a nap while your car drives you somewhere. Some people, though, don’t really get why this sort of thing is useful or important:

“It begs the question why. In Australia, you have big trucks with three or four trailers attached in the desert. Why do you need an autonomous vehicle if you can connect them with a piece of steel?” said Andrew Close, an analyst at IHS Automotive.

Well, there’s a reason why that type of thing works in Australia and nowhere else: in Australia, you have a bajillion miles of long, flat, empty road. Most states in the US, on the other hand, limit connected trailers to two. Giving autonomy (or optional semi-autonomy) to vehicles means that you can have as many trailers as is reasonable or convenient. And really, it’s the optional semi-autonomy that’s the most realistically valuable in the short term, as we’ve discussed before. Think about it: on the highway, you spend a LOT of time doing nothing except following the guy in front of you, […]
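The core trick here, autonomously tracking a lead vehicle, can be sketched as a simple follow-the-leader control loop. Below is a minimal toy version (my own illustration, with made-up gains and distances; VisLab’s actual system fuses cameras, GPS, and obstacle-avoidance sensors):

```python
# Toy follow-the-leader controller: hold a fixed gap behind a lead vehicle
# using proportional control on the gap error. Illustrative only; all the
# constants below are hypothetical.

TARGET_GAP_M = 15.0   # desired following distance (hypothetical)
KP = 0.5              # proportional gain (hypothetical)
DT = 0.1              # control-loop timestep in seconds

def follower_speed(gap_m, leader_speed_mps):
    """Speed command: match the leader, plus a correction toward the target gap."""
    error = gap_m - TARGET_GAP_M          # positive when we've fallen behind
    return max(0.0, leader_speed_mps + KP * error)

# Tiny simulation: leader cruises at 10 m/s, follower starts 40 m back.
leader_pos, follower_pos = 40.0, 0.0
for _ in range(300):
    gap = leader_pos - follower_pos
    follower_pos += follower_speed(gap, 10.0) * DT
    leader_pos += 10.0 * DT
print(f"gap after 30 s: {leader_pos - follower_pos:.1f} m")  # settles near 15 m
```

The hard part in the real world, of course, is not this control law but reliably perceiving where the lead vehicle actually is.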
July 21, 2010

FLOPPY DRIVE GROWS LEGS TO AVOID SPILLS, STILL CAN’T AVOID EXTINCTION — ENGADGET

We might one day live in a world where everyday electronics can fend for themselves against household disasters but, for the time being, we can
July 21, 2010

NEW MIT SOFTWARE LEARNS AN ENTIRE DEAD LANGUAGE IN JUST A FEW HOURS

Whenever we boot up our time machines, cruise back to 1200 B.C., and try to pick up chicks at our favorite wine bar in Western Syria, our rudimentary knowledge of Ugaritic is usually more embarrassing than helpful. The good folks at the Massachusetts Institute of Technology have us stoked on some new software we hope to have in pocket form soon. It analyzes an unknown language by comparing letter and word patterns to another known language (in Ugaritic’s case, its close cousin is Hebrew) and spits out a translation quickly, using precious little computing power. To give some perspective, it took archaeologists four years to do the same thing back in 1928. It’s not quite Berlitz yet, but this proof of concept is kind of like the Michael Jordan of computational linguistics — it’s probably the first time that machine translation of dead scripts has been proven effective. If we plug some hopeful numbers into our TI-83, we calculate that we’ll be inserting our own genes into the ancient Syrian pool in a matter of months. Thanks, MIT!
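The MIT system uses statistical models over letter and word correspondences between the two languages; as a loose illustration of the “compare letter patterns” idea only, here is a toy sketch that pairs up characters of an unknown text with characters of a related known text by frequency rank. The sample strings are placeholders, and the real method is far more sophisticated than this:

```python
from collections import Counter

def freq_ranked(text):
    """Characters of `text`, most frequent first (ignoring spaces)."""
    counts = Counter(ch for ch in text if not ch.isspace())
    return [ch for ch, _ in counts.most_common()]

def naive_mapping(unknown_text, known_text):
    """Pair characters by frequency rank: a crude first guess at correspondences."""
    return dict(zip(freq_ranked(unknown_text), freq_ranked(known_text)))

# Toy example with placeholder strings standing in for real corpora.
unknown = "xqx zqx qxz"   # hypothetical 'unknown script' sample
known = "aba cba bac"     # hypothetical 'known cousin language' sample
print(naive_mapping(unknown, known))  # {'x': 'a', 'q': 'b', 'z': 'c'}
```

Frequency rank alone would fail on real scripts, which is why the actual work iterates over letter and whole-word patterns together until the correspondences become consistent.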
July 21, 2010

BLURRY PICTURES FROM A PLANE

The Arctic Circle: Random Cities in Mainland China: Hong Kong International Airport:
July 21, 2010

WHY I BOUGHT MCDONALD’S IN HONG KONG

There is no excuse for buying McDonald’s in Hong Kong. But I did, and I will try to explain.

My first night in Hong Kong led me, quite randomly, to the best Chinese food I’ve ever eaten, hands down. So good, in fact, that I have gone back to the same place twice since, and tried to make nice with the wait staff there. The fact that this place is right around the corner from a row of strip clubs has nothing to do with my frequenting this establishment. Honestly.

I’ve also tried other places, with mixed results. I am currently convinced that I don’t really like the taste of Chinese barbecue; it is sweet and gummy in a way that just doesn’t appeal to me. I’ve also decided that I don’t really like rice noodles either; again, it’s mostly a texture thing. I’ve been on the lookout for some fresh seafood, but no luck so far. This is high on my list of priorities for the weekend. I’ve run into a lot of sushi places, but since I’ll be in Tokyo in a few weeks I want to save my appetite for the real deal.

The point is that it’s not from a lack of trying new things. I’ve become quite bold at stepping inside small, steamy restaurants, pointing randomly at the menu, and hoping for the best. Although the city is designed to be bilingual, I’ve found myself in a number of situations interacting with people who can’t speak more than a few words of English, and so it is a crapshoot every time. I’ve also started working a lot, and running into the city to try new food isn’t always an option. I’ve gone hungry a few nights from just working past the time when […]
July 21, 2010

ECOBOT III EATS, POOPS, MOVES

This is a robot going poo: The robot in question is Ecobot III, which contains a fully functional digestive system capable of ingesting biomass, turning it into energy, and then excreting waste, graphically demonstrated in the above video.

The actual digesting is done by a series of microbial fuel cells (MFCs), where bacteria chow down and produce hydrogen atoms as a byproduct. The hydrogen goes into a fuel cell, which generates electricity to power the robot, plus pure water, which the robot then drinks to keep itself from getting dehydrated. The remaining biomass goes through the entire cycle once more, and then it’s, um, purged:

Chris Melhuish, director of the Bristol Robotics Laboratory, said MFCs had been tried before, but an artificial gut was needed to solve the problem with previous models, which was that humans had to clean up the waste left by bacterial digestion. Melhuish said the robot was called Ecobot III, but admitted “diarrhea-bot would be more appropriate, as it’s not exactly knocking out rabbit pellets.”

The difference between Ecobot and other robots that use biomass for fuel (like EATR) is that Ecobot digests things to produce energy rather than burning them to generate heat to boil water to create steam to produce energy. Thanks to its bellyful of microbes, Ecobot is actually able to digest things, and this makes it much more adaptable when it comes to sources of fuel, since it’s able to run on stuff that doesn’t burn, like waste water. Yes, this robot not only poos, it could potentially be powered by poo.

At the moment, Ecobot III is only 1% efficient, and while it’s technically capable of operating for several days completely on its own, it can’t really do much in that time. After the jump, watch Ecobot II (a fully armed and […]
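To get a feel for what 1% efficiency means, here is a back-of-the-envelope sketch. Only the 1% figure comes from the article; the energy density and feed rate are round illustrative numbers:

```python
# Back-of-the-envelope: average electrical power from biomass at 1% efficiency.
# Only the 1% efficiency figure is from the article; everything else is a
# hypothetical round number for illustration.

EFFICIENCY = 0.01                # fraction of chemical energy recovered (from article)
ENERGY_DENSITY_J_PER_G = 17_000  # ~17 kJ/g, a rough figure for dry biomass
FEED_RATE_G_PER_DAY = 50         # hypothetical daily diet

joules_per_day = FEED_RATE_G_PER_DAY * ENERGY_DENSITY_J_PER_G * EFFICIENCY
watts = joules_per_day / 86_400  # seconds per day
print(f"average output: {watts * 1000:.0f} mW")  # ~98 mW on these assumptions
```

Tens of milliwatts on those assumptions, which squares with the article’s point that the robot can keep itself alive for days but can’t really do much else.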
July 20, 2010

SINGULARITY SUMMIT 10 NEXT MONTH IN SF

The Singularity Summit is scheduled for August 14-15 here in SF, and if you’re interested in seeing what the future might be like (without just waiting until it gets here), a bunch of smart people will happily tell you their thoughts on what might be in store for us as a species. Speakers include Ray Kurzweil (of course), James Randi, Dr. Irene Pepperberg, David Hanson of Hanson Robotics, and many more. So, what’s “The Singularity?” See it on a graph, after the jump.

The Singularity represents an “event horizon” in the predictability of human technological development past which present models of the future may cease to give reliable answers, following the creation of strong AI or the enhancement of human intelligence.

The general argument here is that the increase in computing power is a predictable trend, and if you extend that trend out into the future, you can see how long it takes until human brains become pretty much useless in the face of overwhelming artificial intelligence, at which point things are going to get totally crazy. And keep in mind that that graph is also taking cost into account, so according to the predicted trend (which is based on data from the past and present), by 2050 or so $1000 will buy you a computer that can out-calculate our entire race. Every second. Pretty wild stuff.

[ The Singularity Summit 10 ]
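The extrapolation behind claims like that is just compound doubling. A minimal sketch, with a starting value and doubling period assumed purely for illustration (not the Summit’s actual dataset):

```python
# Compound-doubling extrapolation, the arithmetic behind "computing power
# per $1000 keeps doubling" claims. The starting value and doubling time
# below are assumptions for illustration, not the Summit's actual data.

START_YEAR = 2010
OPS_PER_1000_USD = 1e12   # assume ~10^12 ops/sec per $1000 in 2010 (hypothetical)
DOUBLING_YEARS = 1.5      # assumed Moore's-law-style doubling period

def projected_ops(year):
    """Ops/sec per $1000 in a given year, under pure exponential growth."""
    doublings = (year - START_YEAR) / DOUBLING_YEARS
    return OPS_PER_1000_USD * 2 ** doublings

for year in (2020, 2030, 2040, 2050):
    print(f"{year}: {projected_ops(year):.1e} ops/sec per $1000")
```

Whether the trend actually continues is, of course, the entire debate; the arithmetic itself is the uncontroversial part.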