December 29, 2015

YES, AI SHOULD BE OPEN

Scott Alexander: Should AI be Open? Or are we worried that AI will be so powerful that someone armed with AI is stronger than the government? Think about this scenario for a moment. If the government notices someone getting, say, a quarter as powerful as it is, it’ll probably take action. So an AI user isn’t likely to overpower the government unless their AI can become powerful enough to defeat the US military too quickly for the government to notice or respond to. But if AIs can do that, we’re back in the intelligence explosion/fast takeoff world where OpenAI’s assumptions break down. If AIs can go from zero to more-powerful-than-the-US-military in a very short amount of time while still remaining well-behaved, then we actually do have to worry about Dr. Evil and we shouldn’t be giving him all our research. // I’ve been meaning to write a critical take on the OpenAI project. I’m glad Scott Alexander did this first, because it allows me to start by pointing out how completely terrible the public discussion on AI is at the moment. We’re thinking about AI as if they are Super Saiyan warriors with a “power level” of some explicit quantity, as if such a number would determine the future success of a system. This is, for lack of a better word, a completely bullshit adolescent fantasy. For instance, there’s no question that the US government vastly overpowers ISIS and other terrorist organizations in strength, numbers, and strategy. Those terrorist groups nevertheless represent a persistent threat to global stability despite the radical asymmetry of power– or rather, precisely because of the ways we’ve abused this asymmetry. “Power level” here does not determine the trouble and disruption a system can cause; comparatively “weak” actors can nevertheless leave dramatic marks on history. Or […]
December 9, 2015

DELUSIONS ABOUT EUGENE (A REPLY TO ANDREAS SCHOU)

Andreas Schou writes: +Daniel Estrada finds this unnecessarily reductive and essentialist, and argues for a quacks-like-a-duck definition: if it does a task which humans do, and effectively orients itself toward a goal, then it’s “intelligence.” After sitting on the question for a while, I think I agree — for some purposes. If your purpose is to build a philosophical category, “intelligence,” which at some point will entitle nonhuman intelligences to be treated as independent agents and valid objects of moral concern, reductive examination of the precise properties of nonhuman intelligences will yield consistently negative results. Human intelligence is largely illegible and was not, at any point, “built.” A capabilities approach which operates at a higher level of abstraction will flag the properties of a possibly-legitimate moral subject long before a close-to-the-metal approach will. (I do not believe we are near that point, but that’s also beyond the scope of this post.) But if your purpose is to build artificial intelligences, the reductive details matter in terms of practical ontology, but not necessarily ethics: a capabilities ontology creates a giant, muddy categorical mess which prevents engineers from distinguishing trivial parlor tricks like Eugene Goostman from meaningful accomplishments. The underspecified capabilities approach, without particulars, simply hands the reins over to the part of the human brain which draws faces in the clouds. Which is a problem. Because we are apparently built to greedily anthropomorphize. Historically, humans have treated states, natural objects, tools, the weather, their own thoughts, and their own unconscious actions as legitimate “persons.” (Seldom all at the same time, but still.)
If we assigned the trait “intelligence” to every category which we had historically anthropomorphized, that would leave us treating the United States, Icelandic elf-stones, Watson, Zeus, our internal models of other people’s actions, and Ouija boards as being “intelligent.” Which […]
November 23, 2015

ATTENTION, OPINION DYNAMICS, AND CRYING BABIES

In a recent article, Adam Elkus argues two points: 1) Drawing attention to an issue doesn’t necessarily solve it. 2) Drawing attention might make things worse. For these reasons, Elkus argues against what he calls “tragedy hipsterism”: the “endless castigation of the West for sins and imperfections” without offering anything constructive. He says, “Awareness-raising is only useful if it is somehow necessary for the instrumental process of achieving the desired aim. In many cases, it is not and is in fact an obstacle to that aim.” I think this is completely mistaken, both about the utility of castigation and more generally about the role of attention in shaping social dynamics. Consider, for instance, a crying baby. Crying doesn’t solve any problem on its own. If an infant is hungry, crying won’t make food magically appear. At best, crying gets an adult to acquire food for the baby– but not necessarily so. The adult could easily ignore the baby, or misinterpret the cry as triggered by something other than hunger. Typically, an adult will feed the baby whether or not it cries, which renders the crying itself completely superfluous. And crying can be dangerous! In the wild, crying newborns tend to attract predators looking for an easy meal. On a plane, crying newborns create social animosity that might threaten the safety of the newborn and their family in other ways. Crying doesn’t always help, and it often makes things worse. So on Elkus’ argument, crying is actually an obstacle to the infant’s well-being. If babies only understood the futility of crying, perhaps they’d be more effective at realizing their goals! Of course, this argument is ridiculous. Crying isn’t meant to solve problems directly. In fact, crying is usually issued from a place of helplessness: the inability to realize one’s […]
October 2, 2015

EARLY DIGITAL SOCIETIES

I’d invite everyone to imagine the human world as it existed before the invention of money. Prior to money, people engaged in cooperative behaviors for a variety of non-financial reasons (family, love, adventure, etc). But populations eventually grew too big to support the network with such slow, noisy transactions. Early agricultural societies invented money to help everyone collectively keep better track of how all their valuables were distributed. At the time, it would have been perfectly sensible to wonder about the complications that money would bring. “What if you’re in a situation where you need help, but you don’t have enough money to get anyone to help you? Seems like a lot of people could get the short end of the stick.” Of course, this worry would have been exactly correct. There are massive problems with the distribution of wealth and resources that comes with money. These problems are persistent; we still don’t know how to deal with them, and they are worse than ever before. Nevertheless, money was the critical coordinating infrastructure that (more or less) set up the human population to flourish over the last 10k yrs or so. It was the tool that built the human population as it exists today. I don’t like money. The human population today is fat, dirty, wasteful, uncoordinated in distributing resources, and ineffective at exerting global, targeted control that does anything but kill people. These failures have piled up to the point that they legitimately pose widespread, calamitous dangers to large human populations and important cultural centers. Money is currently in no position to resolve the problems we face; if anything, it’s made them virtually intractable. We’re in an analogous situation to the early agriculturalists: we need a new tool. Attention is our new tool; the attention economy our new coordinating […]
July 14, 2015

REAL ROBOT MOVIES

There are two kinds of robot movies. The first treats robots as a spectacle. Robots in spectacle movies justify their existence by being badass and doing badass things. Sometimes spectacle robots work for the good guys (Pacific Rim, Big Hero 6). Sometimes they function as classic movie monsters (Terminator, The Matrix sequels), putting robots in the same monster family as zombies and Frankenstein, sources with which they share many tropes. But usually, spectacle robots serve as both heroes and villains simultaneously (Terminator 2, Transformers, Robocop, Avengers 2). Presenting robots in both positive and negative roles allows spectacle movies to remain neutral on their nature. Robots can be a threat but they can also be a savior, so there’s no motivation to inquire deeply into the nature of robots as such. In effect, spectacle movies take the presence of robots for granted, and so reinforce our default presumptions: that robots exist for human use and entertainment. Robot spectacle movies can be entertaining but they tend to be shallow, and plenty of them are just plain boring (Real Steel, the animated Robots). Apart from functional novelties that advance the plot or (more likely) set up a slapstick gag, robot spectacle movies don’t bother to reflect on the robot’s experience of the world or how they might reflect on our human condition. The Terminator even provides the audience with glimpses of his heads-up display without hinting at the homunculus paradoxes it implies. Because once that robot’s function as a ruthless killing machine is established, the only question left is how to deal with it– a challenge to be met by the film’s human protagonists in an otherwise thoroughly conventional narrative. In spectacle movies, the robot is merely the pretense for telling that human story, another technological obstacle for humanity to overcome. The second […]
July 13, 2015

DISTURBINGLY LIVELY, FRIGHTENINGLY INERT

This line of thinking traces to Dreyfus’ What Computers Can’t Do, and specifically to his reading of Heidegger’s care structure in Being and Time. Dreyfus’ views gained popularity during the first big AI wave and successfully put a lid on a lot of the hype around AI. I would say Dreyfus’ critiques are partly responsible for the terminological shift towards “machine learning” over AI, and also for the shifted focus on robotics and embodied cognition throughout the 90s. https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence But Dreyfus’ critiques don’t really have purchase anymore, and I’m surprised to see Sterling dusting them off. It’s hard to say that a driverless car doesn’t “care” about the conditions on the road; literally all its sensors and equipment are tuned to careful and persistent monitoring of road conditions. It remains in a ready state of action, equipped to interpret and respond to the world as a fully engaged participant. It is hard to read such a machine as a lifeless formal symbol manipulator. Haraway said it best: our machines are disturbingly lively, and we ourselves frighteningly inert. I think +Bruce Sterling underappreciates just how well we do understand the persistent complexities of biological organization. Driverless cars might be clunky and unreliable, but they are also orders of magnitude less complex than even a simple organism. The difference is more quantitative than qualitative, and is by no means mysterious or poorly understood. In a biological system, functional integration happens simultaneously at multiple scales; in a vehicle it might happen at two or three at most. This low organizational resolution makes it easier to see the structural inefficiencies and design choices in a technological system. But this isn’t a rule for all technology. Software in particular isn’t subject to such design constraints. This is why we see neural nets making huge advances […]
May 17, 2015

ON THE ETHICS OF ROBOT ROACHES

+John Baez worries that +Backyard Brains dodges the hard questions in their ethics statement. I’m not sure they entirely dodge the ethics question, “when is it okay to turn animals into RC cyborgs?” By saying it isn’t a “toy” and emphasizing its educational applications, they’re distinguishing between frivolous and constructive uses of the tool. If you’re just messing around for entertainment, or if you have some malicious purpose (like a cyborg-roach-based bank heist), then it’s probably not okay. Turning animals into cyborgs is okay when the applications are constructive and educational: when students learn, when knowledge grows. This is a common response from scientists to questions of animal experimentation: to point at the benefits generated by the research. The distinction between frivolous “toys” and constructive uses might be clear enough, but as stated it’s only a rule of thumb. The harder question is how to distinguish the two. One might be skeptical that it’s possible to state the ethical rule any more clearly than this. After all, horribly inhumane and unethical acts have been conducted in the name of science, so obviously science itself can’t be a cover for doing whatever you want. The developers also point to high schools and educators mentoring students on their use of these techniques. Indeed, they seem to be marketing primarily to educational institutions aiming to buy RoboRoaches in bulk. In effect, this diffuses the ethical questions by putting responsibility on the institutions and educators overseeing their use. Unfortunately, this doesn’t give those institutions much of a guideline for making that decision themselves. It also somewhat spoils the DIY-ness of “backyard brains”. I do appreciate that they have a dedicated discussion of the ethics at stake! Although I agree that they don’t nail down the ethics questions with complete satisfaction (and they admit […]
December 1, 2014

AUTISM AND WAR CRIMES: TURING’S MORAL CHARACTER IN THE IMITATION GAME

Last night I attended a packed screening of The Imitation Game. My thoughts on the movie are below, but tl;dr: I thought the film was great. If you have any interest in mathematics, cryptography, or the history of computing you will love this film. But this isn’t just a movie for nerds. The drama of the wartime setting and the arresting performance from Cumberbatch make this film entertaining and accessible to almost everyone– despite the fact that it’s a period war drama with almost no action or romance and doesn’t pass the Bechdel test. Of course, as a philosopher I have questions and criticisms. But don’t let that confuse you: go see this film. Turning history’s intellectual heroes into media’s popular heroes is a trend I’d like to reinforce. Turing’s story is timely and central for understanding the development of our world. I’m happy to see his work receive the publicity and recognition it deserves. Turing is something of a hero of mine; I spent half my dissertation wrestling with his thoughts on artificial intelligence, and I’ve found a way to work him into just about every class I’ve taught for the last decade. I know many others feel just as passionately (or more!) about his life and work. I have been looking forward to this film for a long time and my expectations were high. I was not disappointed. The Oscar buzz around this film is completely appropriate. Spoilers will obviously follow. There are minor inaccuracies in the film: Knightley mispronounces Euler’s name; Turing’s paper is titled “Computing Machinery and Intelligence”, not “The Imitation Game”; the British bombe machine was eventually named Victory, never Christopher. But I’m not so interested in that sort of critique. I’d instead like to talk about two subtle but important themes in the […]
October 16, 2014

OUR SOCIAL NETWORKS ARE BROKEN. HERE’S HOW TO FIX THEM.

1. You can’t really blame us for building Facebook the way we have. By “we” I mean we billion-plus Facebook users, because of course we are the ones who built Facebook. Zuckerberg Inc. might take all the credit (and profit) from Facebook’s success, but all the content and contacts on Facebook– you know, the part of the service we users actually find valuable– were produced, curated, and distributed by us: by you and me and our vast network of friends. So you can’t blame us for how things turned out. We really had no idea what we were doing when we built this thing. None of us had ever built a network this big and important before. The digital age is still mostly uncharted territory. To be fair, we’ve done a genuinely impressive job given what we had to work with. Facebook is already the digital home to a significant fraction of the global human population. Whatever you think of the service, its size is nothing to scoff at. The population of Facebook users today is about the same as the global human population just 200 years ago. Human communities of this scale are more than just rare: they are historically unprecedented. We have accomplished something truly amazing. Good work, people. We have every right to be proud of ourselves. But pride shouldn’t prevent us from being honest about these things we build–it shouldn’t make us complacent, or turn us blind to the flaws in our creation. Our digital social networks are broken. They don’t work the way we had hoped they would; they don’t work for us. This problem isn’t unique to Facebook, so throwing stones at only the biggest of silicon giants won’t solve it. The problem is with the way we are thinking about the task of […]
October 13, 2014

BRUNO LATOUR IS TALKING ABOUT GAIA

// A few weeks ago I saw Bruno Latour give a talk called “Gaia Intrudes” at Columbia. I’ve struggled with the term “Gaia” since I came across Lovelock’s Gaia Hypothesis while studying complex systems a few years ago. On the one hand, Lovelock is obviously correct that we can and should treat the (surface of the) Earth and its inhabitants as an interconnected system, whose parts (both living and nonliving) all influence each other. On the other hand, the term “Gaia” has a New Agey, pseudosciencey flavor (even if Lovelock’s discussion doesn’t) that makes me hesitant to use the term in my public discussions of complexity theory, and immediately skeptical when I see others use it. Since my skepticism seems to align with the consensus position in the sciences, I’ve never bothered to resolve my ambivalence about the term. And to be completely honest, while I admired Latour’s work (he’s mentioned in my profile!), going into this talk I was also a little skeptical of _his_ use of the term. I’ve been thinking pretty seriously about the theoretical tools required for understanding the relationship between an organism, its functional components, and its environment, what I have been calling “the individuation problem”. As far as I can tell, not even the sciences are thinking about this problem systematically across the many domains and scales where it arises. That same week I had written a critique of Tegmark’s recent proposal for a physical theory of consciousness; my core critique centered on his failure to distinguish the problems of integration and individuation. So to hear that Latour was approaching the discussion using the vocabulary of Gaia made me apprehensive, if not outright disappointed. I was worried that he would just muddy the waters of an already fantastically difficult discussion, and that it […]
April 4, 2014

HUMAN CASTE SYSTEMS: REIFYING CLASS

// From the ongoing SA thread on Strangecoin. > Just out of curiosity, RA, when you discuss ideas like reifying the class structure by assigning people coloured buttons identifying their social class and when you advocate a system that would admittedly make it more difficult for poor people to buy food and basic necessities, are you making any kind of value judgement on the merits of such a system? It’s hard for me to reconcile ‘worried about hypothetical silent discrimination against cyborgs’ RA vs ‘likes the idea of clearly identifying poors with brown badges to more easily refuse to serve them’ RA. // I would only advocate for the idea if I thought it had a chance to change the social circumstances for the better. The reasoning is something like the following: 1) People are psychologically disposed to reasoning about community membership (identity), their status within those communities (influence), and how to engage those communities (culture/convention). This is what significant portions of their brains evolved to do. 2) People are not particularly disposed to reasoning about traditional economic frameworks (supply and demand, wealth, etc), their status within those frameworks (class, inequality), and how to engage those frameworks (making sound economic decisions). They can do this, and the ones that do, do really well, but it’s hard, and most people can’t and suffer because of it. 3) It would be easier for most people to do well in a system that emphasized transactions of the type that people are typically good at reasoning about than ones they are typically bad at reasoning about. 4) Therefore, we should prefer an economic framework that emphasizes reasoning of the former and not the latter type. I’m not saying this fixes all inequality and suffering, but it makes it easier for people to do things […]
March 30, 2014

FROM THE ARCHIVES, MY FIRST POST ON THE ATTENTION ECONOMY

// I was digging through the SomethingAwful archives and found my first essay on the attention economy, written on April 5th, 2011. At the time, Bitcoin had yet to experience its first bubble and was still trading below a dollar, and Occupy Wall Street was still five months in the future. If you don’t have access to the archives, the thread which prompted this first write up was titled “No More Bitchin: Let’s actually create solutions to society’s problems!” Despite my reputation on that forum, I’m not interested in pop speculative futurism or idle technoidealism. I don’t think there’s an easy technological fix for our many difficult problems. But I do think that our technological circumstances have a dramatic impact on our social, political, and economic organizations, and that we can design technologies to cultivate human communities that are healthy, stable, and cooperative. The political and economic infrastructure we have for managing collective human action was developed at a time when individual rational agency formed the basis of all political theory, and in a networked digital age we can do much better. An attention economy doesn’t solve all the problems, but it provides tools for addressing problems that simply aren’t available with the infrastructure we have available today. My discussion of the attention economy was aimed at discussing social organization at this level of abstraction, with the hopes that taking this networked perspective on social action would reveal some of the tools necessary for addressing our problems. In the three years and multiple threads since that initial post, I’ve done research into the dynamics and organization of complex systems and taught myself some of the math and theory necessary for making the idea explicit and communicable. And in that time the field of data science has grown astronomically, making […]
August 31, 2005

SOMETHING CHANGED

In a precedent-setting case, administrative trial judge Tynia Richard recommended the firing of John Halpin, a veteran supervisor of carpenters, for cutting out before the end of his shift on as many as 83 occasions between March 2 and Aug. 9, 2006. The evidence against Halpin, whose base pay is $300 a day, included time cards that suspiciously appeared stamped on the same machine, even though his duties placed him in different locations each day. But there was a clincher: data gathered through the GPS system on Halpin’s cellphone, which he accepted in 2005 without being told it might be used to trace his every move. |link| My first response to this article was that it is yet more proof of how our technology outpaces our ethics. Our technology is rigid; our ethics have some slack around the edges. It might be a minor failing to leave work 5 or 10 minutes early, but it’s the kind of thing that most people are willing to overlook and let slide, especially when such behavior goes so easily under the radar. And people have been exploiting this minor loophole since we first had punch cards keeping track of our hours. The economy didn’t crumble, and businesses didn’t suffer. Unless it is a particularly egregious case (this particular case might count), most people don’t think that leaving work a bit early is a sign of laziness or any other ethical failing. No one wants to be at work, and everyone understands that. Our technology, on the other hand, only knows rigid deadlines. Technology is ruthless in its petty attention to detail and its utter lack of flexibility. Our machines have no sympathy for our minor human concerns, and pay no attention to the flexibility of our intuitive ethical code. It would be […]
October 21, 2005

GIVE ME MY METAVERSE!

This plus this brings the metaverse that much closer. We are literally one technological convergence step away from a world entirely marked up by metadata. This basically means that we are one killer gadget away from a world where continuous, real-time access to that metadata is assumed as part of an individual’s basic equipment set, like having a phone number or the ability to see. Next semester I plan on teaching Bruce Sterling’s not-quite-sci-fi design manifesto Shaping Things (full text here), in which he refers to this kind of killer gadget as a Wand. A wand is nothing like the Xwand, because Microsoft’s trumped up Wiimote does not deal with metadata. A wand is handled like a cell phone, but without the already obsolete assumption that the gadget’s primary function is to make phone calls. The Wand’s primary purpose is to wrangle arphids, which are the keepers of metadata. A “monitor” should be cheap and easy to make, because it’s basically just an active arphid. It’s an arphid that happens to have a steady source of power, a longer communication range, and a more sophisticated chip. It’s been moved from passive to active; it’s now a boss arphid… The point of installing these monitors is that they can communicate information about the arphids to one another. Then they can filter that torrent of data and move the valuable information over long ranges. They become bosses, guards, co-ordinators. Add these monitors into the mix—active hubs of arphid data, repeaters, relayers, linked to a global network—and you have created an INTERNET OF THINGS. … Whenever I shop, I shop with a wand in my hand. It would never occur to me to shop without a filter and an interface. And someone built that for me, it was designed—as a Wrangler, I need […]
November 30, 2005

AUTONOMICALLY CORRECT

Business Week published an article on Autonomic Computing: Computer, heal thyself. His idea was simple. Scientists needed to come up with a new generation of computers, networks, and storage devices that would look after themselves. The name for his manifesto came from a medical term, the autonomic nervous system. The ANS automatically fine-tunes how various organs of the body function, making your heart beat faster, for instance, when you’re exercising or stressed. In the tech realm, the concept was that computers should monitor themselves, diagnose problems, ward off viruses, even heal themselves. Computers needed to be smarter. But this wasn’t about machines thinking like people. It was about machines thinking for themselves. Apparently IBM has been pushing the autonomic idea for a few years now, and has detailed the 4 major aspects of an autonomic system, and the 8 obstacles such systems face. This is interesting to me, obviously, for several reasons. The drive towards self-regulating, autonomous systems is obviously a push for greater agency in these systems. But the interesting aspect is IBM’s focus on the biological metaphor in describing the nature of autonomic systems, which borrows heavily from the philosophical and cognitive science research on the nature of agency. That last link includes reference to Damasio, for instance. I will have to do more research on the idea before I can say anything substantive. Glancing over the manifesto makes me think this is deep into ‘industry buzzword’ territory, though I think the implications here are more theoretical and foundational than IBM lets on. I should stop to consider some of the blogosphere phuzz on the article. From Rough Type: Not like breathing The real power of the idea is not that computers will run themselves, in the way that the autonomic nervous system runs itself. Rather, it’s that, […]
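IBM’s framing of self-managing systems is often summarized as a monitor-analyze-plan-execute loop. A rough sketch of that idea, with all names and thresholds invented for illustration (this is not IBM’s actual architecture):

```python
# Illustrative sketch of the autonomic "monitor, diagnose, heal" loop
# described above. The metric names, thresholds, and repair actions are
# hypothetical stand-ins.

def autonomic_step(metrics, thresholds, repairs):
    """One pass of the loop: Monitor -> Analyze -> Plan -> Execute."""
    actions = []
    for name, value in metrics.items():          # Monitor current state
        limit = thresholds.get(name)
        if limit is not None and value > limit:  # Analyze: diagnose a fault
            action = repairs.get(name)           # Plan: pick a known repair
            if action:
                actions.append(action)           # Execute (here: record it)
    return actions

# Example: memory usage over its threshold triggers a hypothetical repair.
metrics = {"cpu": 0.40, "memory": 0.95}
thresholds = {"cpu": 0.90, "memory": 0.80}
repairs = {"memory": "restart_cache_service"}
print(autonomic_step(metrics, thresholds, repairs))  # ['restart_cache_service']
```

The point of the sketch is just the shape of the loop: the system watches itself, compares observations against a model of healthy operation, and dispatches its own corrective actions without an operator in the loop.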
December 5, 2005

DRIVING BY SATELLITE

From CNN: Device stops speeders from inside car The system being tested by Transport Canada, the Canadian equivalent of the U.S. Department of Transportation, uses a global positioning satellite device installed in the car to monitor the car’s speed and position. If the car begins to significantly exceed the speed limit for the road on which it’s travelling the system responds by making it harder to depress the gas pedal, according to a story posted on the Toronto Globe and Mail’s Website. The pilot test, using 10 cars driven by volunteers, is believed to be the first in North America, although similar systems have been tested in several European countries, according to the newspaper.
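The control logic the article describes is simple in outline: compare GPS-derived speed against the mapped limit for the current road, and stiffen the accelerator as the excess grows. A minimal sketch, with the tolerance margin and resistance model invented for illustration (the actual Transport Canada system’s parameters aren’t public):

```python
# Hypothetical speed-governor logic: pedal resistance rises in
# proportion to how far the car exceeds the posted limit.

def pedal_resistance(speed_kmh, limit_kmh, tolerance=0.1, gain=0.05):
    """Return extra pedal resistance (0.0 = none, 1.0 = maximum)."""
    threshold = limit_kmh * (1 + tolerance)   # allow a small margin over the limit
    if speed_kmh <= threshold:
        return 0.0                            # within tolerance: no intervention
    excess = speed_kmh - threshold
    return min(1.0, gain * excess)            # resistance grows with the excess

print(pedal_resistance(100, 100))  # 0.0 (at the limit, no resistance)
print(pedal_resistance(200, 100))  # 1.0 (far over the limit, fully stiff)
```

Note the design matches the article’s description: the system makes speeding progressively harder rather than cutting the throttle outright, leaving the driver in ultimate control.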
December 5, 2005

HOW TO MAKE THE EU STEP OFF YOUR GRILL

The Register recently published the letter Condi Rice sent to the EU right before the 11th hour decision to pull out of their hardline stance about ICANN control in the run up to the WSIS conference a few weeks back. The Internet will reach its full potential as a medium and facilitator for global economic expansion and development in an environment free from burdensome intergovernmental oversight and control. The success of the Internet lies in its inherently decentralized nature, with the most significant growth taking place at the outer edges of the network through innovative new applications and services. Burdensome, bureaucratic oversight is out of place in an Internet structure that has worked so well for many around the globe. The letter is strongly worded and no-nonsense, which means the responsibility now falls on the US to make sure we keep to the spirit and letter of our own recommendations. This is especially important now that the Baby Bells are getting fussy about the state of their monopolies because of the kinds of competition the internet provides.
December 7, 2005

FINALS

Elvis asked a question and he expects an answer? From ME? I’m just sitting here minding my own business flipping switches and turning knobs and pushing his buttons I suppose because he asks me how I plan on transcending humanism. Humans? What could those be? Little furry creatures with NO BUTTONS and NO KNOBS but lots of hard boney outty parts and lots of warm moist inny parts who make awful racket and LOOK YOU IN THE EYE. DONT LOOK ME IN THE EYE GODDAMNIT. Your soul is dark black and contagious and I am soul-free thank you very much. My mouth opens and my charismatic tone flees my throat and I croak out the relationships between me and you and you and me and it is stale and flat and disgusting and I heave and panic and HEAVE. My interactions are mine, goddamnit, and I choose who is on the other end of the line, who I call, which buttons I press, when to hang up. Action at a distance HA action smacksion resmacksion The point, c’mere, up close, Elvis says. The point, you see, is that when I touch you, when I slip your fitches and knurn your tobs, that I am in control. And I, Elvis says as he beats his chest and breathes a mucus breath, his hair in individual strands on his head, and I am human, and I am god. And then he stops and smokes a cigarette and takes a drink of water, and then sits down for a meal which he scarfs in living, bloody chunks, and then he shits and watches it as it spirals down the drain. And then he grabs his dick, large with loose strands of hair and veins and the grime of a well handled handrail, and […]
December 8, 2005

RAISE UP OFF THESE N-U-TZ

cuz you gets none of these. It’s official: Snoop Dogg is the most well-connected rapper, according to a recent Nature article (http://www.nature.com/news/2005/051205/full/051205-8.html). He finds that on average it takes a chain of just 2.9 people in the network to connect one rapper to another; that is, three degrees of separation. This compares with 2.5 people for the network of movie actors (popularized in the Kevin Bacon game), 3.6 for company board directors, and 5.9 for collaborations between high-energy physicists. … In this respect, rap exhibits the same spirit as early jazz, where musicians had on average less than two degrees of separation. … And who is the most highly connected rapper? It’s Snoop Dogg, naturally, who seems to have justified the title of his 1999 album No Limit Top Dogg. I don’t think Snoop Dizzle will let it go to his head, though.
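The numbers in that study are just average shortest-path lengths over a collaboration graph: connect two artists whenever they’ve appeared on a track together, then average the hop count over every reachable pair. A minimal sketch of the metric, using a tiny hypothetical graph (artists A through E are made up, not from the study):

```python
from collections import deque
from itertools import combinations

def shortest_path_length(graph, src, dst):
    """BFS over an undirected collaboration graph; returns hop count or None."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # dst unreachable from src

def mean_separation(graph):
    """Average shortest-path length over all connected pairs of nodes."""
    dists = [shortest_path_length(graph, a, b) for a, b in combinations(graph, 2)]
    dists = [d for d in dists if d is not None]
    return sum(dists) / len(dists)

# Hypothetical collaboration graph: an edge means the two artists
# appeared on a track together.
collabs = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B", "E"},
    "E": {"D"},
}

print(mean_separation(collabs))  # 1.7 for this toy graph
```

On a real dataset the graph would be built from guest-appearance credits, and the “2.9 people” figure is just this average computed over thousands of artists.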
December 9, 2005

COLLECTIVE TALENT

From SwarmSketch: “Low fat”. You can watch it being created here, or contribute to the current project, “Cricket India”. Past work of note includes “Faces of Meth”, “Hurricanes”, and “Unclaimed Baggage”. You can view the full gallery here. From CNET News.com: SwarmSketch taps Web’s ‘collective consciousness’ About three months ago, Peter Edmunds, a 22-year-old communications student at the University of Canberra, in Australia, began a Web site called SwarmSketch with the idea of producing a sketch of “the collective consciousness” every week. Edmunds’ Web site randomly selects one of the most popular search terms from a couple of major search engines and uses that word or phrase as the topic for a collaborative drawing project for the week. Anyone who wants to can peek at the latest stage of a drawing, add a tiny bit to it (about an inch’s worth, if you draw a straight line) and even erase other people’s lines, or at least vote to lighten them. … Apparently, the collective consciousness is quite literal-minded. Almost all of the drawings begin with something figurative in the middle. And no matter how much scribbling and erasing there is along the way, the central figure usually remains. “The basic outline of the sketch becomes clear in the first few hundred lines,” Edmunds said, and “it’s hard for the users after that to change the direction of the image.”
December 11, 2005

AWW, THEY NOTICED

From Infoworld: Study: Google users wealthier, more Net savvy U.S. residents who prefer Google Inc.’s search engine tend to be richer and have more Internet experience than those who primarily use competing search services from Microsoft Corp., Yahoo Inc. and America Online Inc., a new study has found. The longer people have been using the Internet, the more likely it is that Google will be their search engine of choice, according to a survey of 1,000 U.S. Internet users conducted by investment banking and research firm S.G. Cowen & Co. LLC. Moreover, people whose primary search engine is Google are more likely to have household incomes above US$60,000 than people who use competing search engines, according to the survey, whose results S.G. Cowen published in a report Monday. Not only is Google an authority, but Google is recognized as an authority by the most competent among us.
December 12, 2005

ENTITY-HOOD

Some rumblings over at The Bellman about the lawsuit brought against Wikipedia. Safety Neal quoted a News.com article with a bunch of analysts discussing the impossibility of a libel lawsuit against Wikipedia. From CNet News.com: Is Wikipedia safe from libel liability? Thanks to section 230 of the Federal Communications Decency Act (CDA), which became law in 1996, Wikipedia is most likely safe from legal liability for libel, regardless of how long an inaccurate article stays on the site. That’s because it is a service provider as opposed to a publisher such as Salon.com or CNN.com. “I think that there’s no liability, period,” said Jennifer Granick, executive director of the Center for Internet and Society at Stanford University Law School. “Section 230 gives you immunity for this.” Upon closer inspection of the CDA we find the relevant passages: (2) Civil liability No provider or user of an interactive computer service shall be held liable on account of – (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1). The argument, I take it, is that Wikipedia is a service, and doesn’t provide content. In the interest of journalistic integrity, here’s the relevant definition of terms according to the CDA: (2) Interactive computer service The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet […]