February 24, 2006

WELL I GUESS IT WOULD BE NICE

The always astute Bellman posted a link to a nice little story on Atwood’s new machine for remotely autographing books. It is a good and far less artificial example of the deep confusion we have about how to treat the technology that continually interferes and intervenes in our lives. I’ll let the discussion continue over there, but I had to post the following: Ms Atwood insists that the device is not a hoax. “It’s real. Trust me. You need to have more faith.” It’s easy to say that how we treat the contributions of machines to our practices is just ‘a matter of convention’, but that presupposes that our conventional intuitions are informed by principles that extend nicely to novel cases. Convention breaks down when our intuitions and practices are confused and confusing, and we need some way of sorting this mess out before we can even start to assess the situation.
February 24, 2006

TALKING WITH THE PHILOSOPHERS

I’ve introduced a new feature on this page, since I have the webspace and bandwidth to spare. ‘The Academy’ is for the philosophers sick of the problems with the Social listserv to have a place to discuss upcoming parties and poker and such, and to generally fuck off. But don’t let that stop anyone else who wants to post whatever for the benefit of us all.
February 21, 2006

EXAMPLE IV: THE ROBOTIC SLIME

From The Guardian: Slime mould used to create first robot run by living cells Dr Zauner grew a star-shaped sample of the slime mould and attached it to a six-legged robot (with each point of the star attached to a leg) to control its movements. Shining white light on to a section of the single cell organism made it vibrate, changing its thickness. These vibrations were fed into a computer, which then sent signals to move the leg in question. Pointing beams of light at different parts of the slime mould means that different legs move. Do it in an ordered way and the robot will walk. Let’s assume for this example that animal agency is different in kind from robotic or otherwise artificial agency, such that the slime mold’s behavior here is closer to genuine ‘original action’ than to mechanical ‘derivative action’. This is not to import any cognitive or otherwise mental phenomena to the slime mold. It’s just a slime mold. The point is simply that its behavior is properly attributed to it, since there are no designers or other actors influencing the cyborg’s behavior. But the mold also moves around a robot, with some sophisticated machinery backing it up. Here’s the problem: the slime mold is essentially just a photocell for responding to light. We have plenty of those same sorts of cells, artificially constructed, that can behave in a much more complex fashion with respect to incoming light. From an engineering perspective, the slime mold is rather superfluous, and this sort of example is more show than actual science. But let’s look at it from the perspective of our discussion on robotic agency. The robot moves because the slime mold reacts to the light. The cyborg (slime mold + robot) here could reasonably be described […]
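Here’s a minimal sketch, in Python, of the control loop the Guardian piece describes: each point of the star gets its own light channel and vibration reading, and an ordered illumination pattern produces a gait. The hardware interfaces, the threshold, and the gait sequence are stand-ins I made up, not the actual lab setup.

```python
import time

# Hypothetical hardware interfaces; the real experiment read the
# plasmodium's thickness oscillations with lab instruments.
def shine_light(point: int, on: bool) -> None:
    """Turn the white light aimed at one point of the star on or off."""
    print(f"light {'on ' if on else 'off'} at point {point}")

def read_vibration(point: int) -> float:
    """Return the oscillation amplitude measured at one point of the star."""
    return 1.0  # stub: pretend the illuminated lobe is vibrating

def move_leg(leg: int) -> None:
    print(f"moving leg {leg}")

GAIT = [0, 3, 1, 4, 2, 5]   # an ordered pattern of illumination over the six points
THRESHOLD = 0.5             # amplitude needed to trigger a step

def walk_one_cycle() -> None:
    for point in GAIT:
        shine_light(point, True)
        time.sleep(0.1)                      # let the cell respond
        if read_vibration(point) > THRESHOLD:
            move_leg(point)                  # the computer relays the vibration as a leg command
        shine_light(point, False)

if __name__ == "__main__":
    walk_one_cycle()
```

The point of the sketch is just to make vivid where the slime mold sits in the causal chain: it is the only thing standing between the light and the leg command.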
February 20, 2006

EXAMPLE III: ALDO CIMINO

Consider the oft-cited case from the early 80s of Aldo Cimino, the resident expert of Campbell Soup’s cooker system. Occasionally, serious problems arise that require the services of an expert who understands the gritty details of the design, installation, and operation of the hydrostatic sterilizer. If this sterilizer is not working, bacteria will eat through the cans and plant operations would be seriously disrupted. If the problem cannot be solved in a few minutes, it may be necessary to throw away many thousands of cans of food. Unfortunately, there are few human experts that understand the cooker systems well enough to handle any problem that may arise. Campbell Soup relied primarily on one individual, Aldo Cimino (who had 45 years of experience), to deal with the toughest problems. Sometimes, a hydrostatic sterilizer had to be shut down until Mr. Cimino could be flown to a particular plant to work on this problem. |link| Cimino is clearly an expert, and because of this epistemically privileged position he served an extremely valuable role within the business’ practices. But experts are like the Sith: there is always a master and an apprentice. Unfortunately, it was almost impossible to train anyone to Cimino’s level of expertise. Enter the machine: Although programming an expert system to replicate the work of Aldo Cimino seemed near impossible due to the amount of knowledge and intuition he has gained through 45 years of experience, the power of artificial intelligence forever changed the way things were done at Campbell… The diagnostic system, with 150 heuristic rules built in, was completed in several months, and then tested in select factories for seven months until it was finally implemented in all of Campbell’s canneries a year later. It took roughly two years to develop and implement this expert system that could be mass produced, […]
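For a sense of what “heuristic rules built in” amounts to, here is a minimal sketch of a rule-based diagnostic engine in Python. The rules themselves are invented for illustration; the real system encoded roughly 150 heuristics drawn from Cimino’s experience, and I make no claim that these resemble them.

```python
# A toy rule-based diagnostic system of the general kind described above.
# Each rule pairs a set of observed symptoms with a recommendation.
RULES = [
    ({"temperature low", "steam valve open"}, "check boiler pressure"),
    ({"temperature low", "steam valve closed"}, "open the steam valve"),
    ({"cans jamming", "chain speed normal"}, "inspect the feed conveyor"),
    ({"cans jamming", "chain speed erratic"}, "inspect the chain drive motor"),
]

def diagnose(symptoms: set[str]) -> list[str]:
    """Return every recommendation whose conditions are all observed."""
    return [advice for conditions, advice in RULES if conditions <= symptoms]

if __name__ == "__main__":
    observed = {"temperature low", "steam valve closed"}
    for advice in diagnose(observed):
        print("Recommendation:", advice)
```

The expert’s know-how is frozen into the rule base at design time; at run time the machine, not Cimino, produces the recommendation.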
February 20, 2006

FOR THE RECORD

I have finished going through the archives and classifying all the posts, which you can peruse if you like. As you can see, I spend far more time talking about myself than anything else. To be fair, I have been including anything remotely relating to my personal and social life under ‘eripsa’, so it has entries like this or this, which really have nothing to do with me. Next on the countdown is philosophy, the internet, and AI/HMI tied for fourth. I’d say that is a fair reflection on the content of my blog. Notice, for the record, that Google and Robots appear only 30 times in over a year. My favorite image is still this, which was posted very early on:
February 20, 2006

A MODEL OF SELF

Melnick’s advice on my proto-proposal was that it seems I need to give the machines something like a self to be responsible, or to otherwise hold the seat of agency. My first philosophy class as a freshman at UCR was on Parfit and persons, and I haven’t thought about issues of ‘self’ since. I thought Melnick was a bit confused, because he raised this point in the context of talking about consciousness, and if talking about a self necessarily required talking about consciousness, then I was most definitely not interested in the self. In any case, raising issues about the self seemed to push me back into some self-moved mover mumbo jumbo that I was explicitly trying to avoid. Flash forward to today, reading an article on Cognitive Radio: Self-awareness refers to the unit’s ability to learn about itself and its relation to the radio networks it inhabits. Engineers can implement these functions through a computational model of the device and its environment that defines it as an individual entity (“Self”) that operates as a “Radio”; the model also defines a “User” about whom the system can learn. A cognitive radio will be able to autonomously sense how its RF environment varies with position and time in terms of the power that it and other transmitters in the vicinity radiate. These data structures and related software will enable a cognitive radio device to discover and use surrounding networks to the best advantage while avoiding interference from other radios. In the not too distant future, cognitive radio technology will share the available spectrum optimally without instructions from a controlling network, which could eventually liberate the user from user contracts and fees. If I can wax existential for a bit, the self necessarily understands itself in terms of the Other. In the human […]
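To make that “computational model of the device and its environment” a bit more concrete, here is a rough Python sketch of the Self/Radio/User structure the article describes. The sensing and channel-selection logic is my own toy invention, not anything drawn from an actual cognitive-radio standard.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    """The model of the person the radio serves and learns about."""
    name: str
    preferences: dict = field(default_factory=dict)   # learned over time

@dataclass
class Self:
    """The radio's model of itself as an individual entity in an RF environment."""
    position: tuple          # where the radio currently is
    user: User
    spectrum_map: dict = field(default_factory=dict)  # observed power (dBm) per channel

    def sense(self, channel: int, power_dbm: float) -> None:
        """Record how much power other transmitters radiate on a channel here and now."""
        self.spectrum_map[channel] = power_dbm

    def pick_channel(self) -> int:
        """Use the quietest channel observed, to avoid interfering with other radios."""
        return min(self.spectrum_map, key=self.spectrum_map.get)

if __name__ == "__main__":
    radio = Self(position=(0.0, 0.0), user=User("alice"))
    for ch, power in {1: -60.0, 6: -90.0, 11: -75.0}.items():
        radio.sense(ch, power)
    print("transmitting on channel", radio.pick_channel())
```

Even in this toy version, the ‘Self’ is just a data structure that situates the device among other transmitters and a user; nothing about it requires consciousness.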
February 20, 2006

LITTLE MIRACLES

From CNN: Scientists enlist clergy in evolution battle “The intelligent design movement belittles God. It makes God a designer, an engineer,” said Vatican Observatory Director George Coyne, an astrophysicist who is also ordained. “The God of religious faith is a god of love. He did not design me.”
February 20, 2006

THE COOL KIDS

From the Pew Internet & American Life Project (PDF): Surfing the Web has become one of the most popular activities that internet users will do online on a typical day. Some 30% of internet users go online on any given day for no particular reason, just for fun or to pass the time. This makes the act of hanging out online one of the most popular activities tracked by the Pew Internet & American Life Project and indicates that the online environment is increasingly popular as a place for people to spend their free time. Compared to other online pursuits, the act of surfing for fun now stands only behind sending or receiving email (52% of internet users do this on a typical day) and using a search engine (38% of internet users do this on a typical day), and is in a virtual tie for third with the act of getting news online (31% of internet users do this on a typical day). In aggregate figures, this development is striking because it represents a significant increase from the number of people who went online just to browse for fun on a typical day at the end of 2004. In a survey in late November 2004, about 25 million people went online on any given day just to browse for fun. In the Pew Internet Project survey in December 2005, that number had risen to about 40 million people.
February 17, 2006

EXAMPLE II: THE NAMING MACHINE

Say we automate astronomy by building telescopes that search the sky in regular patterns and, upon finding a star or otherwise notable object in space, assign that object a name from an officially designated list of names. On Kripke’s view, a name has a reference in virtue of a causal history of use that can be traced back to an initial ‘baptism’ or imposition of a name. Some person at some time in the past pointed at water and said ‘water’ (or some cognate), and from that point forward the word ‘water’ rigidly designates water in all possible worlds. Assume for a moment that Kripke is right. Does our automated astronomy bot name the star? One might think ‘no, the star is named in virtue of the pattern of search employed by the machine, and the list of names, both of which are developed by the scientists and engineers who designed the machine.’ But, as I have been arguing, the designers don’t name anything. It is the machine itself that forms the connection between a name and an object. The designers wouldn’t have known which object the proposed name would attach to, or even if the name would ever in fact be used. We can complicate the story by making the lists more complex (for instance, different lists for different categories of stars), or having the machine pick a random starting point within the list. I don’t think either variation helps the situation much. Of course, the scientists’ ignorance about which object the name is attached to doesn’t itself hurt the Kripkean theory, since ‘water’ means H2O in all possible worlds, even those in which no one knows that ‘water’ is H2O. But the case here is more severe: the scientists not only lack knowledge about which star is […]
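To fix the thought experiment, here is a minimal Python sketch of the naming machine. The detection routine and the name list are stand-ins of my own; the interesting part is that the ‘baptism’, the pairing of a name with an object, happens inside the loop where no designer is looking.

```python
# A toy naming machine: scan in a fixed pattern and baptize each detected
# object with the next name from an officially designated list.
DESIGNATED_NAMES = ["Alcyone", "Merope", "Electra", "Taygete"]

def detect_object(coordinates):
    """Stand-in detector: pretend something notable sits at even RA values."""
    ra, dec = coordinates
    return ra % 2 == 0

def run_survey(scan_pattern):
    catalogue = {}                    # name -> coordinates of its bearer
    names = iter(DESIGNATED_NAMES)
    for coordinates in scan_pattern:
        if detect_object(coordinates):
            try:
                name = next(names)
            except StopIteration:
                break                 # list exhausted; nothing else gets named
            catalogue[name] = coordinates   # the 'baptism': name meets object
    return catalogue

if __name__ == "__main__":
    # scan a toy grid of (right ascension, declination) cells
    pattern = [(ra, dec) for ra in range(6) for dec in range(2)]
    print(run_survey(pattern))
```

The designers supply the list and the scan pattern, but which name lands on which object is settled only when the machine runs.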
February 16, 2006

HOW WE USE EMAIL

Kruger et al., Egocentrism Over E-Mail: Can We Communicate as Well as We Think? (PDF) If comprehending human communication consisted merely of translating sentences and syntax into thoughts and ideas, there would be no room for misunderstanding. But it does not, and so there is. People convey meaning not only with what they say, but also with how they say it. Gesture, voice, expression, context—all are important paralinguistic cues that can disambiguate ambiguous messages (Archer & Akert, 1977; Argyle, 1970; DePaulo & Friedman, 1998). Indeed, it is not uncommon for paralinguistic information to more than merely supplement linguistic information, but to alter it completely. The sarcastic observation that “Blues Brothers 2000—now that’s a sequel” may imply one thing in the presence of paralinguistic cues but quite the opposite in the absence of them. The research presented here tested the implications of these observations for the rapidly escalating technology of e-mail, a communication medium largely lacking in paralinguistic information. We predicted that because of this limitation, subtle forms of communication such as sarcasm and humor would be difficult to convey. But more than that, we predicted that e-mail communicators would be largely unaware of this limitation. Because participants knew what they intended to communicate, we expected them to assume that their audience would as well. Stolen from ars technica.
February 15, 2006

EXAMPLE: EHARMONY

Consider eHarmony, the online dating service that uses some highly sophisticated statistical methods for matching people up, with the express goal of long-term compatibility. From The Atlantic: How do I love thee? “We’re using science in an area most people think of as inherently unscientific,” Gonzaga said. So far, the data are promising: a recent Harris Interactive poll found that between September of 2004 and September of 2005, eHarmony facilitated the marriages of more than 33,000 members—an average of forty-six marriages a day. And a 2004 in-house study of nearly 300 married couples showed that people who met through eHarmony report more marital satisfaction than those who met by other means. The company is now replicating that study in a larger sample. “We have massive amounts of data!” Warren said. “Twelve thousand new people a day taking a 436-item questionnaire! Ultimately, our dream is to have the biggest group of relationship psychologists in the country. It’s so easy to get people excited about coming here. We’ve got more data than they could collect in a thousand years.” The strength of eHarmony, and what makes it so popular and apparently successful, is the sheer amount of data they have collected, and their theoretical models of relationships that can mine the data for compatibility results. They claim to be using science to build relationships (contrast with chemistry.com, which basically uses a souped-up Myers-Briggs test). Question: who is responsible for the resulting pairs suggested by the system? Consider: The statistical models are the result of lots of R&D from some rather prominent academics and experts in this field of psychology. None of the scientists responsible for building those models (or, for that matter, any of the programmers and engineers responsible for implementing the model) directly influence the resulting suggestion from the statistical […]
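Just to make the question concrete, here is a toy compatibility matcher in Python: score each pair of users by weighted agreement on a few questionnaire items and suggest the highest-scoring pair. The items and weights are pure invention on my part; eHarmony’s actual models are proprietary and far more elaborate. But the structure of the worry is the same: nobody hand-picks the pair that comes out.

```python
from itertools import combinations

# Answers to a handful of questionnaire items, on a 1-5 scale (invented data).
USERS = {
    "ann":  {"wants_kids": 5, "religiosity": 2, "tidiness": 4},
    "ben":  {"wants_kids": 5, "religiosity": 1, "tidiness": 2},
    "cara": {"wants_kids": 1, "religiosity": 5, "tidiness": 4},
}
WEIGHTS = {"wants_kids": 3.0, "religiosity": 2.0, "tidiness": 1.0}

def compatibility(a: dict, b: dict) -> float:
    """Higher when weighted answers are closer together."""
    return -sum(w * abs(a[item] - b[item]) for item, w in WEIGHTS.items())

def best_match() -> tuple:
    """Suggest the pair of users with the highest compatibility score."""
    return max(combinations(USERS, 2),
               key=lambda pair: compatibility(USERS[pair[0]], USERS[pair[1]]))

if __name__ == "__main__":
    print("suggested pair:", best_match())
```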
February 11, 2006

THIS IS NOT MY POSITION

More on Pleo: video of his first steps. http://www.demo.com/demonstrators/demo2006/63039.html It’s worth watching, if only for how depressing it gets at the end.
July 29, 2007

LEO

This is from the NYT Mag article linked in the last post. I thought Leo (at MIT, of course) deserved special attention: The reason the robot, called Leonardo (Leo for short), is so lifelike is that it was made by Hollywood animatronics experts at the Stan Winston Studio. (Breazeal consulted with the studio on the construction of the robotic teddy bear in the 2001 Steven Spielberg film “A.I.”) Apparently Leo is also wired up to pass the false-belief test, but the author of the article wasn’t very impressed with that.
July 29, 2007

OH, THE LINKS I GET!

I’ve received a lot of links.
Some are great, a lot of them stink.
Oh, the links I get!

Just this past week
I’ve received a lot of links
Because apparently when people read of weed
they think of me.
Oh, the links I get!

My reputation may not be high
But I don’t worry. Don’t stew.
I also get links about AI
and robots towering in the sky
Where solving checkers is easy as pie
Where sociable robots go to die
Even while they scream “I’m Alive!”
Oh, the links I get!

(Thanks, Chaz, Steve, EJDickso, IS, and Mara!)
July 22, 2007

NT

http://fractionalactorssub.madeofrobots.com/blog/pics/comic_title.jpg
July 16, 2007

YOUR MONEY IS NOW OUR MONEY

I’ve wanted to post this video for a while, but YouTube only had a crappy cam of it. It is by far the best part of the ATHF movie.
July 14, 2007

WELL THAT’S SETTLED

Watch this.
July 4, 2007

OUR BEST MACHINES ARE MADE OF SUNSHINE

The title is a quote from Donna Haraway’s A Cyborg Manifesto (1985). Here’s the relevant passage: The third distinction is a subset of the second: the boundary between physical and non-physical is very imprecise for us. Pop physics books on the consequences of quantum theory and the indeterminacy principle are a kind of popular scientific equivalent to Harlequin romances as a marker of radical change in American white heterosexuality: they get it wrong, but they are on the right subject. Modern machines are quintessentially microelectronic devices: they are everywhere and they are invisible. Modern machinery is an irreverent upstart god, mocking the Father’s ubiquity and spirituality. The silicon chip is a surface for writing; it is etched in molecular scales disturbed only by atomic noise, the ultimate interference for nuclear scores. Writing, power, and technology are old partners in Western stories of the origin of civilization, but miniaturization has changed our experience of mechanism. Miniaturization has turned out to be about power; small is not so much beautiful as pre-eminently dangerous, as in cruise missiles. Contrast the TV sets of the 1950s or the news cameras of the 1970s with the TV wrist bands or hand-sized video cameras now advertised. Our best machines are made of sunshine; they are all light and clean because they are nothing but signals, electromagnetic waves, a section of a spectrum, and these machines are eminently portable, mobile — a matter of immense human pain in Detroit and Singapore. People are nowhere near so fluid, being both material and opaque. Cyborgs are ether, quintessence. I interpret Haraway’s quote quite literally: our best machines are made of pure energy, of the same stuff as sunshine. Think of fiber optics, or of all the signals that fill the air broadcasting information at some frequency of the electromagnetic […]
June 23, 2007

WE ARE ONE PLANET

June 13, 2007

ISN’T HUMAN NATURE AMAZING?

June 4, 2007

HOW CRAYONS ARE MADE

I’m probably going to show this video to my students over the summer as we discuss various definitions of technology. I’ve used this example before to illustrate technology as manufacturing; for some reason this is the picture in my head when I think ‘manufacturing’. They don’t make crayons like this any more, of course, but I especially like how tactile this video is: lots of hands grabbing bundles of crayons and moving them around really gives you a sense of the weight of the crayons in bulk, and a pretty good idea of the steps involved in crayon creation. I’ve referenced this montage in my 101 class, without having the video handy, under the assumption that enough people have seen Sesame Street and Electric Company to recognize the reference. But maybe I’m just old. Is this video familiar? If it isn’t, and I said “The Sesame Street montage of how crayons are made”, would you at least have a sense of what I’m talking about?
May 8, 2007

WAPO FOLLOWING MY LEAD

There are just too many great quotes from this article, so just read the whole thing: Bots on the ground The wars in Afghanistan and Iraq have become an unprecedented field study in human relationships with intelligent machines. These conflicts are the first in history to see widespread deployment of thousands of battle bots. Flying bots range in size from Learjets to eagles. Some ground bots are like small tanks. Others are the size of two-pound dumbbells, designed to be thrown through a window to scope out the inside of a room. Bots search caves for bad guys, clear roads of improvised explosive devices, scoot under cars to look for bombs, spy on the enemy and, sometimes, kill humans. Even more startling than these machines’ capabilities, however, are the effects they have on their friendly keepers who, for example, award their bots “battlefield promotions” and “purple hearts.” “Ours was called Sgt. Talon,” says Sgt. Michael Maxson of the 737th Ordnance Company (EOD). “We always wanted him as our main robot. Every time he was working, nothing bad ever happened. He always got the job done. He took a couple of detonations in front of his face and didn’t stop working. One time, he actually did break down in a mission, and we sent another robot in and it got blown to pieces. It’s like he shut down because he knew something bad would happen.” The troops promoted the robot to staff sergeant — a high honor, since that usually means a squad leader. They also awarded it three “purple hearts.” Humans have long displayed an uncanny ability to make emotional connections with their manufactured helpmates. Car owners for generations have named their vehicles. In “Cast Away,” Tom Hanks risks his life to save a volleyball named Wilson, who has become […]