September 13, 2007

TECHNOMANCER

In case you were wondering:

The Technomancer
Like druids, but with tech instead of nature. Technomancers are more than just skilled technicians. They are in tune with machines, connecting with them not only on an intellectual but a spiritual level. Note: A “machine”, for purposes of the technomancer, is any electronic system. A technomancer does not necessarily have any mechanical or structural engineering abilities or knowledge. But “electrical and electronic systems, computers, and artificial intelligences” is one hell of an awkward phrase. … Robotic Companion: A 1st-level technomancer may begin play with a robotic companion. This companion is one that the technomancer has built herself. Robotic companions can have up to 2 HD. Alternatively, the technomancer may have more than one robotic companion, provided that the robots’ total HD don’t exceed 2. The technomancer can also cast AI friendship on other robots during play (see the spell description below).

I’m looking for a word to describe someone almost religiously devoted to technology, but I’d prefer a word that leans towards cyberpunk and away from D&D. I don’t like technomancer, and technomage is just as bad. I like server monk, but it wouldn’t make much sense to someone who didn’t know server. Technopriest is taken by the Catholics, and electroyogi and eVicar are just silly. So help me out, cyberspace.
September 13, 2007

SCHOOL OF BBALL

From this collection, most of which consists of send-ups of or variations on rather tired internet memes. See also.
September 11, 2007

WHY MAN CREATES

September 7, 2007

OVERHEARD IN AN AIRPORT

In an airport smoking lounge, two TSA officials take a smoke break with Sudoku books in hand.

TSA 1: You know it’s just logic. If a computer had that puzzle, it could solve it in [pause] ten seconds. Just… poof [makes wild hand gestures].

TSA 2: Well, obviously I’m not a computer. [pause, smiles] If I were a computer, I’d have a real job.
September 3, 2007

I’VE BEEN NAILED

So in the D&D thread on the Deep Blue article, I was getting a bit liberal with my misanthropist technophile rhetorical flourishes. This particular response makes me chuckle a bit:

Not to attack you or anything, but you get overly dramatic over bizarre stuff. What do you mean by “This comparatively simple inert machine generated genuine panic and emotion in humanity’s best representative; in the face of the machine, we flinched first” exactly? It seems like you’re turning the frustration of one person into a species-wide defeat that we all felt — and on top of it, you really seem to relish it. It seems odd to me that you simultaneously place such great significance upon machines performing the tasks they were built to perform and such great satisfaction in humans “losing.”

After I gave my colloquium on Friday, there was some discussion about how my intuitions concerning machines and technology didn’t align with those of most people at the talk. A certain Mr. Swenson suggested, via an allusion to Jane Goodall, that perhaps I had spent so much time around machines that I actually started to think like them. Well, if loving machines is wrong, then I don’t wanna be right.
September 2, 2007

CONTAGIOUS

It’s a few months late, but happy 10-year anniversary to Deep Blue vs. Kasparov! To commemorate the event, Dennett wrote up a short, and I think painfully superficial, discussion in MIT’s Technology Review.

Higher Games

The verdict that computers are the equal of human beings in chess could hardly be more official, which makes the caviling all the more pathetic. The excuses sometimes take this form: “Yes, but machines don’t play chess the way human beings play chess!” Or sometimes this: “What the machines do isn’t really playing chess at all.” Well, then, what would be really playing chess? This is not a trivial question. The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don’t know when to accept a draw. Computers–at least currently existing computers–can’t be bored or embarrassed, or anxious about losing the respect of the other players, and these are aspects of life that human competitors always have to contend with, and sometimes even exploit, in their games. Offering or accepting a draw, or resigning, is the one decision that opens the hermetically sealed world of chess to the real world, in which life is short and there are things more important than chess to think about. This boundary crossing can be simulated with an arbitrary rule, or by allowing the computer’s handlers to step in. Human players often try to intimidate or embarrass their human opponents, but this is like the covert pushing and shoving that goes on in soccer matches. The imperviousness of computers to this sort of gamesmanship means that if you beat them at all, you have to beat them fair and square–and isn’t that just what Kasparov and Kramnik were unable to do?

|via Reality Apologetics|

I am personally convinced that humanity […]
August 28, 2007

NT

August 25, 2007

CONTENT AWARENESS

August 23, 2007

HIV DENIAL IN THE INTERNET ERA

I was linked to this study in PLoS on the apparent spread of science denial and disinformation that has become symptomatic of the Internet Age. Below are my somewhat lengthy comments in response to Twinxor’s concerns in the D&D thread. For the record, PLoS is a legit peer-reviewed scientific journal, but it is licensed under Creative Commons, so it is free and open to the public. What’s more, they allow commentary by readers. I am thinking of revising these comments and attaching them to the article, so any editing advice would be appreciated.

Twinxor posted:

I can live with the existence of wackos with silly beliefs. The trouble is their influence – widespread doubt of HIV’s importance is very bad, because it leads people to ignore safe sex practices and a lot more people die. As I see it, the big challenge is to demonstrate the reliability and correctness of science, which inoculates the public against conspiracy theory.

This is a strange claim to make, because the job of science is to demonstrate the reliability and correctness of its claims, and at least in these cases science has already done an admirable job of justifying its conclusions. Moreover, this article demonstrates that science is already well inoculated against pseudoscience, so much so that it can incorporate pseudoscientific practice as part of its dataset. This suggests that science is not challenged by pseudoscience. Leaving aside the obviously huge problem of scientific funding, pseudoscience seems to present no epistemological problems for the status of science itself. If science is primarily an epistemological enterprise, then what’s the challenge? The answer, I think, is mentioned in the title of the paper, but seems relatively absent from the article itself: namely, the effect of the ‘Internet Era’ on scientific practice. Before the internet, people were obviously free […]
August 21, 2007

10 YEARS OF ARTIFICIAL INTELLIGENCE

I think this little snapshot of history is quite telling.

How Do Post Office Machines Read Addresses?

Not until Christmas of 1997 did the USPS and the University of Buffalo’s Center for Excellence in Document Analysis and Recognition (CEDAR) deploy their first handwritten address-reading prototype, which rejected 85 percent of envelopes and correctly identified the address in only 10 percent of those it read, with a 2 percent error rate. … Today, the large majority of letters sent through the post office are read and sorted entirely by computer. According to Srihari, current reading success rates are above 90 percent… the first human eyes to examine the envelope are those of the postal carrier approaching your mailbox.
August 18, 2007

FORGET THE GRAND CHALLENGE

Baka Robocup
August 13, 2007

THE HUMANS ARE DEAD

If you haven’t been watching Flight of the Conchords’ show on HBO, you should be.
June 16, 2009

AR

And for the kids, who we are clearly raising to become wizards:
June 14, 2009

INFINITELY MORE USEFUL

one giant leap for robotkind: robot successfully opens doors, plugs own power cord

No matter how fast they can think or how many things they can process at once, robots will be infinitely more useful if they’re independent. That includes being able to overcome obstacles – such as the nigh-immovable hindrance we call “The Door” – and, more importantly, being able to feed themselves, which obviously translates into recharging.

thx Lally
June 13, 2009

IN THE YEAR 2009

http://fractionalactorssub.madeofrobots.com/blog/wp-content/uploads/2009/06/robot-emotions.jpg thx kb
June 11, 2009

STILL MORE VIDS FOR SUMMER USE

And I should have added this to the blog long ago:
June 1, 2009

MORE SUMMER USE

Hulu has some SciAm vids with Alan Alda that are worth bookmarking here. See below the jump.

The first is on robots at MIT’s Media Lab. I’ve posted most of the bots here already, but it’s a good overview of the work they are doing with social robotics. However, at the start of the program Alda unbelievably says “The problem with most robots is that they tend to be robotic. They know nothing they aren’t programmed to know. And they do nothing they aren’t programmed to do. But for many applications where robots could be useful, they need to be more like humans.” My diss is now titled “Rethinking Machines: why Alan Alda is wrong about everything”

The second vid is Alda again, working with some cybernetically enhanced humans who have regained either rudimentary hearing or sight. The bit where Alda describes what it is like to hear a human voice with the cochlear implants is terrifying. This is much more of a human interest piece, and I can’t help but feel sorry for early adopters.
May 21, 2009

I’LL TURN YOU INTO ME, I’LL TURN YOU INTO ME

Robots Evolve And Learn How to Lie (Discover)

Robots can evolve to communicate with each other, to help, and even to deceive each other, according to Dario Floreano of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology. Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light. To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source. The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire. By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink. Some robots, though, were […]
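The setup reads like a textbook genetic algorithm: score each robot's controller by the energy it gathers, keep the fittest, recombine their "genes," sprinkle in mutation, repeat for 50 generations. Here is a minimal Python sketch of that loop. Everything in it is illustrative, not Floreano's actual code: the population size, mutation rate, and the toy fitness function are made up, standing in for the real light-sensor controllers and battery scores.

```python
import random

GENOME_LEN = 30       # the article mentions ~30 "genes" per controller
POP_SIZE = 20         # hypothetical colony size
MUTATION_RATE = 0.05  # hypothetical per-gene mutation probability

def random_genome():
    # each "gene" is just a weight in [-1, 1] in this toy version
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # stand-in for "charge gathered from the food source";
    # the real experiment scored robots by battery level after foraging
    return sum(g for g in genome if g > 0)

def crossover(parent_a, parent_b):
    # single-point recombination of two parent genomes
    point = random.randrange(1, GENOME_LEN)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # "a little mutation added in the form of a random change"
    return [random.uniform(-1, 1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):  # the article reports results by the 50th generation
    # keep the fittest half as parents
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # breed the next generation from random pairs of parents
    population = [mutate(crossover(*random.sample(parents, 2)))
                  for _ in range(POP_SIZE)]

print("best fitness after 50 generations:", max(map(fitness, population)))
```

The interesting part of the study is what this sketch leaves out: when the "genes" also control signaling, selection pressure alone can produce honest signalers in some colonies and deceptive ones in others.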
May 20, 2009

I AM THE CYBORG ANTICHRIST

Lally, always on top of the newest and best on the net, linked me to a great feature on oobject about the top current cyborg technologies.

16 Genuine Cyborg Technologies

Just how much of the human body can you replace or augment: seemingly everything apart from the tadpole-like remnants of the brain and spinal cord. Bionic eyes, ears, hearts, lungs, kidneys, livers, hands, feet, legs, arms and skin are now real science rather than concept designs. For this list, we have gathered together as many real devices as possible, including commercially available products rather than concept designs or imagery that appeal based on gimmick value. The one exception is the tooth and ear cellphone implant, which is feasible today. An interesting idea is how the notion of a cyborg might change (often imagined as a fusion of mechanical and electronic technology with human biology), since many of these devices use technology that is itself principally biological, such as the stem cell lines in the bioreactor liver or artificial skin.

At the top of the list when I last checked was the Bionic Contact Lens:

Researchers have developed new contact lenses that contain circuits, LEDs, and a “powder” of electrical components that can enable an average human being to possess superhuman vision. The contact lenses would allow images to be displayed in a person’s vision, superimposed on the real world. … The researchers explained that one of the most difficult parts of designing the lenses is making them biologically safe. So far, they have only tested the lenses on rabbits, with no negative effects. Electrical circuits consist of toxic chemicals, but the scientists built them from layers of metal only a few nanometers thick.

Now is a great time to be a rabbit. In any case, the rest of the list is pretty sweet […]
May 20, 2009

ROBOT MONSTER

thx Bdizzle
May 20, 2009

ROBOT ETHICS. MMHM.

Robot warriors will get a guide to ethics

New ‘Terminator’ Robots Go in Harm’s Way

Lethal military robots are currently deployed in Iraq, Afghanistan and Pakistan. Ground-based robots like QinetiQ’s MAARS robot (shown here) are armed with weapons to shoot insurgents, appendages to disarm bombs, and surveillance equipment to search buildings.

Robots with a set of ethical guidelines, or perhaps how we ought to treat robots ethically? Or maybe —

“This is trying to give a team of soldiers a ‘tenth man’ that is expendable to enemy fire,” said Quinn. “[The robots] can take a beating,” said Robert Quinn, an engineer at Foster-Miller. “Some of our robots have been blown up 10, even 15 times, and they still work.” “Robots don’t have an inherent right to self-defense and don’t get scared,” said Arkin. “The robots can take greater risk and respond more appropriately.”

Oh yes, I see. (Thanks Max and Paul!)
May 12, 2009

THE ETHOS OF INTERNET

at least it’s an ethos

This viddie is a rather boring demonstration of Wolfram Alpha. It does basically what it has claimed to be able to do: it can process data in a variety of domains, answer queries in natural language that pertain to the data, and present answers and other relevant or useful information in a human-readable form.

The internet has been hyping and/or cynically doubting Alpha for the last few weeks, and although it looks like it works pretty well, I don’t think it deserves either. The fervor Alpha has generated is really due to a misunderstanding of what Alpha is. Alpha is a systematic attempt to formalize the ontologies of certain scientific domains in order to query that data for specific kinds of information. It is an attempt, Wolfram suggests, at making science computable. This is a big project, and certainly worthwhile (if just a little wide-eyed). But it is also something that Wolfram has been working on for decades, and it appears to be a legitimate attempt.

Alpha is not a foundation for a semantic web. Look: the semantic web is going to happen one way or another. It is the looming peak in the distance, and someone will scale it, and I imagine it will happen fairly soon. But this is not it. I have lots of complaints about the vision here, but my biggest complaint is certainly this: Alpha requires expert humans to explicitly build ontologies and pour in the data. This works well in certain scientific domains, but it’s not the sort of thing you can lay on top of the internet to create SmartGoogle, which is what everyone expects from the semantic web. Ontologies cannot be planned in advance. Ontologies are not pure formal properties that bind together a domain through pure […]
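To make the complaint concrete: the kind of curation Alpha depends on looks, in miniature, like hand-entering typed facts and then answering queries against only those facts. This toy Python sketch is my own illustration, not Wolfram's data model; the entities, property names, and numbers are invented for the example.

```python
# A toy hand-curated "ontology": every fact was typed in by a human expert.
# Queries succeed only where a curator has already poured in the data.
facts = {
    ("Mercury", "mass_kg"): 3.30e23,
    ("Venus",   "mass_kg"): 4.87e24,
    ("Earth",   "mass_kg"): 5.97e24,
    ("Earth",   "orbital_period_days"): 365.25,
    ("Venus",   "orbital_period_days"): 224.7,
}

def query(entity, prop):
    """Answer only if the fact has been explicitly curated."""
    return facts.get((entity, prop), "no curated data")

print(query("Earth", "mass_kg"))              # 5.97e+24
print(query("Earth", "average_rainfall_mm"))  # "no curated data": nobody typed it in
```

That second query is the point: the scheme works beautifully inside a domain someone has bothered to formalize, and not at all outside it, which is why it can't simply be laid over the open web.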