November 15, 2010

CONFESSIONS OF AN ACA/FAN: ARCHIVES: DIY VIDEO 2010: POLITICAL REMIX (PART TWO)

Shared by Daniel, h/t @henryjenkins. Some good remixes here, and I like the argument for cams and fair use. This video also serves as a strong argument for the use of cam recordings in visual criticism and critique. Cam or bootleg recordings of current theatrical releases make it possible for fans and critics to offer their critiques in a timely fashion, while films are still fresh in the collective consciousness of the public. If vidders and political remixers have to wait for a DVD release to make their visual arguments, the window for sparking public debate and discussion may have largely passed.
November 15, 2010

ARTIFICIAL INTELLIGENCES COMPETE WITH EACH OTHER, HUMANS AT STARCRAFT

Artificial intelligence systems are good at tackling problems that can be solved with brute force, like chess… All the computer has to do is calculate every possible permutation of moves and pick the best one. They’re also pretty good at games like poker, where even with incomplete information, a computer can make the move that is statistically ‘best.’ And lastly, they’re good at making decisions far more quickly than a human. When you combine all of these separate characteristics into one game, things get exponentially more complex, but also much more like real life. And this is why people are trying to teach computers how to play StarCraft, at a level where they can compete with even the best human players. UC Santa Cruz hosted the 2010 StarCraft AI Competition, which put AI programs through a series of different StarCraft testing scenarios to determine the most effective AI system at micromanagement, small-scale combat, tech-limited games, and of course full gameplay. The video above shows a bunch of highlights; especially notable is the absolutely brutal use of mutalisks by the eventual AI winner, UC Berkeley’s Overmind. The last clip in the highlight video shows an AI taking on a world-class human player, who wins handily. It’s only a matter of two or three years before humans have no chance against programs like these, however… And the reason (I think) is quite straightforward: the computer can micromanage every single unit it owns, on every part of the map, at the same time. A human can’t. Once the AI reaches a competent level of strategy and unit use (it’s not there yet), we’re screwed, because the AI can just launch multiple simultaneous micromanaged attacks. There are lots more videos of the different AI programs competing against each other on […]
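The "calculate every possible permutation of moves and pick the best one" idea above is just minimax search over a game tree. Here's a minimal, illustrative sketch (not from the post; real chess engines add pruning and heuristics on top of this core loop):

```python
def minimax(node, maximizing=True):
    """Exhaustive game-tree search: leaves are numeric scores,
    internal nodes are lists of child nodes. The maximizer picks
    the branch whose guaranteed (worst-case) payoff is highest."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizer moves first, then the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # → 3 (best branch the opponent can't spoil)
```

StarCraft breaks this approach because its branching factor and hidden information make exhaustive enumeration hopeless, which is exactly why the competition entries are interesting.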
November 13, 2010

JAPAN’S MINISTRY OF DEFENSE SHOWS OFF FLYING SURVEILLANCE DRONE

It may not be quite as menacing as some other surveillance drones, but this new flying contraption recently unveiled by Japan’s Ministry of Defense should at least get the job done for what seems like a somewhat limited purpose. That seems to be primarily for short treks of less than 30 minutes into dangerous areas, where the drone can take advantage of its GPS tracking and “high power” cameras to relay information back to the pilots on the ground. Unlike plane-style drones, this one can also move up and down and in every direction, much like a quadrocopter. Head on past the break to check it out in action courtesy of Japan’s NHK network. This post originally appeared on Engadget on Sat, 13 Nov 2010 02:58:00 EDT.
November 11, 2010

MINSKY TENTACLE ARM WAS GROPING WOMEN IN 1968

Marvin Minsky helped found what is now known as the MIT Computer Science and Artificial Intelligence Laboratory back in 1959. Only 9 years later, he constructed this tentacle arm, which shows an impressive level of sophistication. There isn’t too much info about it, but here’s the caption from the video, which was posted by MIT CSAIL: “This film from 1968 shows Marvin Minsky’s tentacle arm, developed at the MIT AI Lab (one of CSAIL’s forerunner labs). The arm had twelve joints and could be controlled by a PDP-6 computer or via a joystick. This video demonstrates that the arm was strong enough to lift a person, yet gentle enough to embrace a child.” [ MIT CSAIL ]
November 10, 2010

BACTERIA ‘R’ US | SMART JOURNALISM. REAL SOLUTIONS. MILLER-MCCUNE.

Single-celled organisms are usually considered secondary players in a world dominated by human complexity. But emerging research shows that bacteria have astonishing powers to engineer the environment, to communicate and to affect human well-being. They may even think.
November 9, 2010

TEACHING ROBOTS TO BEHAVE ETHICALLY

While other researchers are busy teaching robots how to lie, professor Susan Anderson and her husband Michael have taught a robot how to behave ethically. I know which team is getting my research dollar.
November 7, 2010

GAME THEORY EXPLAINS WHY SOME CONTENT GOES VIRAL ON REDDIT, DIGG

A lot of attention has been lavished on ideas “going viral,” but this may not be the only way that ideas spread, according to an article published in PNAS last week. With some extensive theoretical work in game theory, two researchers have shown that trendy changes don’t spread quickly just because they gain exposure to a high number of people. Instead, the spread of innovations may work more like a game where players are gauging whether to adopt something new based on what others immediately surrounding them do. The popularity growth of things like websites or gadgets is often described as being similar to an epidemic: a network with a lot of connections between people increases exposure and then adoption, as do links stretching between dissimilar groups. When the trend in question spreads to a node with a lot of connections (like a celebrity), its popularity explodes. While this is fitting for some cases, in others it’s an oversimplification—a person’s exposure to a trend doesn’t always guarantee they will adopt it and pass it on.
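The neighbor-based dynamic described above can be sketched as a simple threshold model: a node adopts only once enough of its immediate neighbors already have, unlike an epidemic where any single exposure can transmit. This toy simulation (the ring topology, seed choice, and threshold values are my illustrative assumptions, not from the PNAS paper) shows how the same network either cascades fully or stalls depending on the threshold:

```python
def simulate(n=20, threshold=0.5, seeds=(0, 1), rounds=50):
    """Threshold adoption on a ring of n nodes: a node adopts when the
    fraction of its (two) adopted neighbors reaches `threshold`."""
    adopted = set(seeds)
    neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    for _ in range(rounds):
        new = {i for i in range(n) if i not in adopted
               and sum(j in adopted for j in neighbors[i]) / 2 >= threshold}
        if not new:  # dynamics have settled
            break
        adopted |= new
    return adopted
```

With a threshold of 0.5 the two adjacent seeds cascade around the whole ring; raise it to 0.6 and adoption never leaves the seeds, no matter how long the simulation runs. Exposure alone isn't enough; local reinforcement is what carries the trend.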
November 4, 2010

SPUTNIK VISUALIZATION – LEICAS DREAM ON VIMEO

Live visualization (video in fast-forward) for tracking visitors of the 24C3 in the Berlin Congress Centre. Tracking was done using active RFID tags. More info: http://www.openbeacon.org/ccc-sputnik.0.html http://code.google.com/p/leicas-dream
November 4, 2010

EXCLUSIVE SNEAK PEEK: DEFCON NINJA PARTY BADGE | THREAT LEVEL | WIRED.COM

Shared by Daniel. This. Dear god, this. LAS VEGAS — A hacker group known as the Ninjas has created what may be the best DefCon badge ever. The badge allows wireless ninja battles between badge […]
November 4, 2010

CODY WANTS TO GIVE YOU A SENSUAL SPONGE BATH

Cody here comes from Georgia Tech’s Healthcare Robotics Lab; we first met him back in March. Since then, Cody’s been busy, learning how to give sponge baths. All an operator has to do is select an area of a patient, and Cody will autonomously go to work. In the video above, there are little blue squares of debris that Cody has been assigned to clean up, and clearly, he’s pretty good at it. Very good. He goes nice and sloooowww. Yeah… Just like that. Cody’s more than just a pleasurebot, though. He’s learning how to help out in hospitals and care facilities, to reduce the workload on nurses and direct care workers. This means better healthcare for everyone in the long run, and we can all look forward to getting sponged down by robots. I know I am. [ Georgia Tech Healthcare Robotics ]
November 4, 2010

PBS NEWSHOUR ON ROBOTS

Everyone’s favorite TV show, NewsHour on PBS, had a segment on robots last week, and it’s now online. There’s nothing super new and exciting, at least not for loyal BotJunkie readers, but there are bits of new footage of PR2’s towel folding and some other stuff. They couldn’t avoid a breathless “How close are we to being replaced by robots?” tagline, but we’ll forgive them, because Jim Lehrer is badass. [ PBS NewsHour ] Thanks Mom!
November 4, 2010

COLUMBIA ROLLS OUT OMNI-HEAT ELECTRIC GLOVES, JACKETS AND BOOTS, BATTERIES INCLUDED

Look, we don’t want to think about those brutally cold winter days ahead either, but there’s no denying that Columbia’s new electrically heated apparel could take the sting out of those below-zero temperatures. Similar to the company’s Bugathermo boots, its new gloves, jackets and boots pack what Columbia calls Omni-Heat Electric technology, which basically outfits the clothing with lithium polymer battery packs and a specially tailored heating system. The heat is “on-demand”: you can turn it on and off with the touch of a button, then adjust the level by pressing the color-changing LED-backlit button. The number and size of the batteries depend on the article of clothing — for instance, the jackets are equipped with two 15Wh batteries while each glove, as you can see up there, has a smaller-capacity cell. So, how long will they keep you warm and toasty on the slopes? About six hours, says a Columbia product manager, and once out of juice you can charge them via any USB cord. Oh, and yes, you can refuel your phone or iPod using the battery pack itself — obviously, we asked! At its press event in New York City this week, Columbia dressed us in a Circuit Breaker Softshell jacket (yes, that’s what it’s called) and a pair of the Bugaglove Max Electric gloves and threw us into its Omni-Heat freezer booth — we have to say, our arms and back stayed mighty toasty and the jacket didn’t feel as heavy as we expected. The gloves, on the other hand, are bulky, though they may provide some good cushioning for novice snowboarders like ourselves. Of course, that heat is gonna cost ya. The aforementioned jacket rings up at $850 and the gloves at $400. Sure, picking up a few hand and boot warmers would be cheaper, […]
July 13, 2015

DISTURBINGLY LIVELY, FRIGHTENINGLY INERT

This line of thinking traces to Dreyfus’ What Computers Can’t Do, and specifically to his reading of Heidegger’s care structure in Being and Time. Dreyfus’ views gained popularity during the first big AI wave and successfully put a lid on a lot of the hype around AI. I would say Dreyfus’ critiques are partly responsible for the terminological shift toward “machine learning” over AI, and also for the shift in focus toward robotics and embodied cognition throughout the ’90s. https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence But Dreyfus’ critiques don’t really have much purchase anymore, and I’m surprised to see Sterling dusting them off. It’s hard to say that a driverless car doesn’t “care” about the conditions on the road; literally all its sensors and equipment are tuned to careful and persistent monitoring of road conditions. It remains in a ready state of action, equipped to interpret and respond to the world as a fully engaged participant. It is hard to read such a machine as a lifeless formal symbol manipulator. Haraway said it best: our machines are disturbingly lively, and we ourselves frighteningly inert. I think +Bruce Sterling underappreciates just how well we understand the persistent complexities of biological organization. Driverless cars might be clunky and unreliable, but they are also orders of magnitude less complex than even a simple organism. The difference is more quantitative than qualitative, and is by no means mysterious or poorly understood. In a biological system, functional integration happens simultaneously at multiple scales; in a vehicle it might happen at two or three at most. This low organizational resolution makes it easier to see the structural inefficiencies and design choices in a technological system. But this isn’t a rule for all technology. Software in particular isn’t subject to such design constraints. This is why we see neural nets making huge advances […]
July 14, 2015

REAL ROBOT MOVIES

There are two kinds of robot movies. The first treats robots as a spectacle. Robots in spectacle movies justify their existence by being badass and doing badass things. Sometimes spectacle robots work for the good guys (Pacific Rim, Big Hero 6). Sometimes they function as classic movie monsters (Terminator, The Matrix sequels), putting robots in the same monster family as zombies and Frankenstein, figures with which they share many tropes. But usually, spectacle robots serve as both heroes and villains simultaneously (Terminator 2, Transformers, Robocop, Avengers 2). Presenting robots in both positive and negative roles allows spectacle movies to remain neutral on their nature. Robots can be a threat but they can also be a savior, so there’s no motivation to inquire deeply into the nature of robots as such. In effect, spectacle movies take the presence of robots for granted, and so reinforce our default presumptions: that robots exist for human use and entertainment. Robot spectacle movies can be entertaining but they tend to be shallow, and plenty of them are just plain boring (Real Steel, the animated Robots). Apart from functional novelties that advance the plot or (more likely) set up a slapstick gag, robot spectacle movies don’t bother to reflect on the robot’s experience of the world or how it might reflect on our human condition. The Terminator even provides the audience with glimpses of his heads-up display without hinting at the homunculus paradoxes it implies. Because once that robot’s function as a ruthless killing machine is established, the only question left is how to deal with it– a challenge to be met by the film’s human protagonists in an otherwise thoroughly conventional narrative. In spectacle movies, the robot is merely the pretense for telling that human story, another technological obstacle for humanity to overcome. The second […]
October 2, 2015

EARLY DIGITAL SOCIETIES

I’d invite everyone to imagine the human world as it existed before the invention of money. Prior to money, people engaged in cooperative behaviors for a variety of non-financial reasons (family, love, adventure, etc.). But populations eventually grew too big to support the network with such slow, noisy transactions. Early agricultural societies invented money to help everyone collectively keep better track of how all their valuables were distributed. At the time, it would have been perfectly sensible to wonder about the complications that money would bring. “What if you’re in a situation where you need help, but you don’t have enough money to get anyone to help you? Seems like a lot of people could get the short end of the stick.” Of course, this worry would have been exactly correct. There are massive problems with the distribution of wealth and resources that comes with money. These problems are persistent; we still don’t know how to deal with them, and they are worse than ever before. Nevertheless, money was the critical coordinating infrastructure that (more or less) set up the human population to flourish over the last 10,000 years or so. It was the tool that built the human population as it exists today. I don’t like money. The human population today is fat, dirty, wasteful, uncoordinated in distributing resources, and ineffective at exerting global, targeted control that does anything but kill people. These failures have piled up to the point that they legitimately pose widespread, calamitous dangers to large human populations and important cultural centers. Money is currently in no position to resolve the problems we face; if anything, it’s made them virtually intractable. We’re in an analogous situation to the early agriculturalists: we need a new tool. Attention is our new tool; the attention economy our new coordinating […]
November 23, 2015

ATTENTION, OPINION DYNAMICS, AND CRYING BABIES

In a recent article, Adam Elkus argues two points: 1) Drawing attention to an issue doesn’t necessarily solve it. 2) Drawing attention might make things worse. For these reasons, Elkus argues against what he calls “tragedy hipsterism”: the “endless castigation of the West for sins and imperfections” without offering anything constructive. He says, “Awareness-raising is only useful if it is somehow necessary for the instrumental process of achieving the desired aim. In many cases, it is not and is in fact an obstacle to that aim.” I think this is completely mistaken, both about the utility of castigation and, more generally, about the role of attention in shaping social dynamics. Consider, for instance, a crying baby. Crying doesn’t solve any problem on its own. If an infant is hungry, crying won’t make food magically appear. At best, crying gets an adult to acquire food for the baby– but not necessarily so. The adult could easily ignore the baby, or misinterpret the cry as triggered by something other than hunger. Typically, an adult will feed the baby whether or not it cries, which renders the crying itself completely superfluous. And crying can be dangerous! In the wild, crying newborns tend to attract predators looking for an easy meal. On a plane, crying newborns create social animosity that might threaten the safety of the newborn and their family in other ways. Crying doesn’t always help, and it often makes things worse. So on Elkus’ argument, crying is actually an obstacle to the infant’s well-being. If babies only understood the futility of crying, perhaps they’d be more effective at realizing their goals! Of course, this argument is ridiculous. Crying isn’t meant to solve problems directly. In fact, crying is usually issued from a place of helplessness: the inability to realize one’s […]
December 9, 2015

DELUSIONS ABOUT EUGENE (A REPLY TO ANDREAS SCHOU)

Andreas Schou writes: +Daniel Estrada finds this unnecessarily reductive and essentialist, and argues for a quacks-like-a-duck definition: if it does a task which humans do, and effectively orients itself toward a goal, then it’s “intelligence.” After sitting on the question for a while, I think I agree — for some purposes. If your purpose is to build a philosophical category, “intelligence,” which at some point will entitle nonhuman intelligences to be treated as independent agents and valid objects of moral concern, reductive examination of the precise properties of nonhuman intelligences will yield consistently negative results. Human intelligence is largely illegible and was not, at any point, “built.” A capabilities approach which operates at a higher level of abstraction will flag the properties of a possibly-legitimate moral subject long before a close-to-the-metal approach will. (I do not believe we are near that point, but that’s also beyond the scope of this post.) But if your purpose is to build artificial intelligences, the reductive details matter in terms of practical ontology, but not necessarily ethics: a capabilities ontology creates a giant, muddy categorical mess which disallows engineers from distinguishing trivial parlor tricks like Eugene Goostman from meaningful accomplishments. The underspecified capabilities approach, without particulars, simply hands the reins over to the part of the human brain which draws faces in the clouds. Which is a problem. Because we are apparently built to greedily anthropomorphize. Historically, humans have treated states, natural objects, tools, the weather, their own thoughts, and their own unconscious actions as legitimate “persons.” (Seldom all at the same time, but still.)
If we assigned the trait “intelligence” to every category which we had historically anthropomorphized, that would leave us treating the United States, Icelandic elf-stones, Watson, Zeus, our internal models of other peoples’ actions, and Ouija boards as being “intelligent.” Which […]
December 29, 2015

YES, AI SHOULD BE OPEN

Scott Alexander: Should AI be Open? Or are we worried that AI will be so powerful that someone armed with AI is stronger than the government? Think about this scenario for a moment. If the government notices someone getting, say, a quarter as powerful as it is, it’ll probably take action. So an AI user isn’t likely to overpower the government unless their AI can become powerful enough to defeat the US military too quickly for the government to notice or respond to. But if AIs can do that, we’re back in the intelligence explosion/fast takeoff world where OpenAI’s assumptions break down. If AIs can go from zero to more-powerful-than-the-US-military in a very short amount of time while still remaining well-behaved, then we actually do have to worry about Dr. Evil and we shouldn’t be giving him all our research. // I’ve been meaning to write a critical take on the OpenAI project. I’m glad Scott Alexander did this first, because it allows me to start by pointing out how completely terrible the public discussion on AI is at the moment. We’re thinking about AI as if they are Super Saiyan warriors with a “power level” of some explicit quantity, as if such a number would determine the future success of a system. This is, for lack of a better word, a completely bullshit adolescent fantasy. For instance, there’s no question that the US government vastly overpowers ISIS and other terrorist organizations in strength, numbers, and strategy. Those terrorist groups nevertheless represent a persistent threat to global stability despite the radical asymmetry of power– or rather, precisely because of the ways we’ve abused this asymmetry. “Power level” here does not determine the trouble and disruption a system can cause; comparatively “weak” actors can nevertheless leave dramatic marks on history. Or […]