December 29, 2015

YES, AI SHOULD BE OPEN

Scott Alexander: Should AI be Open? Or are we worried that AI will be so powerful that someone armed with AI is stronger than the government? Think about this scenario for a moment. If the government notices someone getting, say, a quarter as powerful as it is, it’ll probably take action. So an AI user isn’t likely to overpower the government unless their AI can become powerful enough to defeat the US military too quickly for the government to notice or respond to. But if AIs can do that, we’re back in the intelligence explosion/fast takeoff world where OpenAI’s assumptions break down. If AIs can go from zero to more-powerful-than-the-US-military in a very short amount of time while still remaining well-behaved, then we actually do have to worry about Dr. Evil and we shouldn’t be giving him all our research. // I’ve been meaning to write a critical take on the OpenAI project. I’m glad Scott Alexander did this first, because it allows me to start by pointing out how completely terrible the public discussion on AI is at the moment. We’re thinking about AIs as if they were Super Saiyan warriors with a “power level” of some explicit quantity, as if such a number would determine the future success of a system. This is, for lack of a better word, a completely bullshit adolescent fantasy. For instance, there’s no question that the US government vastly overpowers ISIS and other terrorist organizations in strength, numbers, and strategy. Those terrorist groups nevertheless represent a persistent threat to global stability despite the radical asymmetry of power– or rather, precisely because of the ways we’ve abused this asymmetry. “Power level” here does not determine the trouble and disruption a system can cause; comparatively “weak” actors can nevertheless leave dramatic marks on history. Or […]
December 9, 2015

DELUSIONS ABOUT EUGENE (A REPLY TO ANDREAS SCHOU)

Andreas Schou writes: +Daniel Estrada finds this unnecessarily reductive and essentialist, and argues for a quacks-like-a-duck definition: if it does a task which humans do, and effectively orients itself toward a goal, then it’s “intelligence.” After sitting on the question for a while, I think I agree — for some purposes. If your purpose is to build a philosophical category, “intelligence,” which at some point will entitle nonhuman intelligences to be treated as independent agents and valid objects of moral concern, reductive examination of the precise properties of nonhuman intelligences will yield consistently negative results. Human intelligence is largely illegible and was not, at any point, “built.” A capabilities approach which operates at a higher level of abstraction will flag the properties of a possibly-legitimate moral subject long before a close-to-the-metal approach will. (I do not believe we are near that point, but that’s also beyond the scope of this post.) But if your purpose is to build artificial intelligences, the reductive details matter in terms of practical ontology, but not necessarily ethics: a capabilities ontology creates a giant, muddy categorical mess which prevents engineers from distinguishing trivial parlor tricks like Eugene Goostman from meaningful accomplishments. The underspecified capabilities approach, without particulars, simply hands the reins over to the part of the human brain which draws faces in the clouds. Which is a problem. Because we are apparently built to greedily anthropomorphize. Historically, humans have treated states, natural objects, tools, the weather, their own thoughts, and their own unconscious actions as legitimate “persons.” (Seldom all at the same time, but still.) If we assigned the trait “intelligence” to every category which we had historically anthropomorphized, that would leave us treating the United States, Icelandic elf-stones, Watson, Zeus, our internal models of other people’s actions, and Ouija boards as being “intelligent.” Which […]
November 23, 2015

ATTENTION, OPINION DYNAMICS, AND CRYING BABIES

In a recent article, Adam Elkus argues two points: 1) Drawing attention to an issue doesn’t necessarily solve it. 2) Drawing attention might make things worse. For these reasons, Elkus argues against what he calls “tragedy hipsterism”: the “endless castigation of the West for sins and imperfections” without offering anything constructive. He says, “Awareness-raising is only useful if it is somehow necessary for the instrumental process of achieving the desired aim. In many cases, it is not and is in fact an obstacle to that aim.” I think this is completely mistaken, both about the utility of castigation and, more generally, about the role of attention in shaping social dynamics. Consider, for instance, a crying baby. Crying doesn’t solve any problem on its own. If an infant is hungry, crying won’t make food magically appear. At best, crying gets an adult to acquire food for the baby– but even that isn’t guaranteed. The adult could easily ignore the baby, or misinterpret the cry as triggered by something other than hunger. Typically, an adult will feed the baby whether or not it cries, which renders the crying itself completely superfluous. And crying can be dangerous! In the wild, crying newborns tend to attract predators looking for an easy meal. On a plane, crying newborns create social animosity that might threaten the safety of the newborn and their family in other ways. Crying doesn’t always help, and it often makes things worse. So on Elkus’ argument, crying is actually an obstacle to the infant’s well-being. If babies only understood the futility of crying, perhaps they’d be more effective at realizing their goals! Of course, this argument is ridiculous. Crying isn’t meant to solve problems directly. In fact, crying usually issues from a place of helplessness: the inability to realize one’s […]
October 2, 2015

EARLY DIGITAL SOCIETIES

I’d invite everyone to imagine the human world as it existed before the invention of money. Prior to money, people engaged in cooperative behaviors for a variety of non-financial reasons (family, love, adventure, etc.). But populations eventually grew too big for such slow, noisy transactions to sustain the network. Early agricultural societies invented money to help everyone collectively keep better track of how all their valuables were distributed. At the time, it would have been perfectly sensible to wonder about the complications that money would bring. “What if you’re in a situation where you need help, but you don’t have enough money to get anyone to help you? Seems like a lot of people could get the short end of the stick.” Of course, this worry would have been exactly correct. There are massive problems with the distribution of wealth and resources that comes with money. These problems are persistent; we still don’t know how to deal with them, and they are worse than ever before. Nevertheless, money was the critical coordinating infrastructure that (more or less) set up the human population to flourish over the last 10,000 years or so. It was the tool that built the human population as it exists today. I don’t like money. The human population today is fat, dirty, wasteful, uncoordinated in distributing resources, and ineffective at exerting global, targeted control that does anything but kill people. These failures have piled up to the point that they legitimately pose widespread, calamitous dangers to large human populations and important cultural centers. Money is currently in no position to resolve the problems we face; if anything, it’s made them virtually intractable. We’re in an analogous situation to the early agriculturalists: we need a new tool. Attention is our new tool; the attention economy our new coordinating […]
July 14, 2015

REAL ROBOT MOVIES

There are two kinds of robot movies. The first treats robots as a spectacle. Robots in spectacle movies justify their existence by being badass and doing badass things. Sometimes spectacle robots work for the good guys (Pacific Rim, Big Hero 6). Sometimes they function as classic movie monsters (Terminator, The Matrix sequels), putting robots in the same monster family as zombies and Frankenstein, sources with which they share many tropes. But usually, spectacle robots serve as both heroes and villains simultaneously (Terminator 2, Transformers, RoboCop, Avengers 2). Presenting robots in both positive and negative roles allows spectacle movies to remain neutral on their nature. Robots can be a threat but they can also be a savior, so there’s no motivation to inquire deeply into the nature of robots as such. In effect, spectacle movies take the presence of robots for granted, and so reinforce our default presumptions: that robots exist for human use and entertainment. Robot spectacle movies can be entertaining but they tend to be shallow, and plenty of them are just plain boring (Real Steel, the animated Robots). Apart from functional novelties that advance the plot or (more likely) set up a slapstick gag, robot spectacle movies don’t bother to reflect on the robot’s experience of the world or on what robots might reveal about our human condition. The Terminator even provides the audience with glimpses of his heads-up display without hinting at the homunculus paradoxes it implies. Because once that robot’s function as a ruthless killing machine is established, the only question left is how to deal with it– a challenge to be met by the film’s human protagonists in an otherwise thoroughly conventional narrative. In spectacle movies, the robot is merely the pretense for telling that human story, another technological obstacle for humanity to overcome. The second […]
July 13, 2015

DISTURBINGLY LIVELY, FRIGHTENINGLY INERT

This line of thinking traces to Dreyfus’ What Computers Can’t Do, and specifically to his reading of Heidegger’s care structure in Being and Time. Dreyfus’ views gained popularity during the first big AI wave and successfully put a lid on a lot of the hype around AI. I would say Dreyfus’ critiques are partly responsible for the terminological shift towards “machine learning” over AI, and also for the shift in focus toward robotics and embodied cognition throughout the ’90s. https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence But Dreyfus’ critiques don’t really have purchase anymore, and I’m surprised to see Sterling dusting them off. It’s hard to say that a driverless car doesn’t “care” about the conditions on the road; literally all its sensors and equipment are tuned to careful and persistent monitoring of road conditions. It remains in a ready state of action, equipped to interpret and respond to the world as a fully engaged participant. It is hard to read such a machine as a lifeless formal symbol manipulator. Haraway said it best: our machines are disturbingly lively, and we ourselves frighteningly inert. I think +Bruce Sterling underappreciates just how well we do understand the persistent complexities of biological organization. Driverless cars might be clunky and unreliable, but they are also orders of magnitude less complex than even a simple organism. The difference is more quantitative than qualitative, and is by no means mysterious or poorly understood. In a biological system, functional integration happens simultaneously at multiple scales; in a vehicle it might happen at two or three at most. This low organizational resolution makes it easier to see the structural inefficiencies and design choices in a technological system. But this isn’t a rule for all technology. Software in particular isn’t subject to such design constraints. This is why we see neural nets making huge advances […]
May 17, 2015

ON THE ETHICS OF ROBOT ROACHES

+John Baez worries that +Backyard Brains dodges the hard questions in their ethics statement. I’m not sure they entirely dodge the ethics question, “when is it okay to turn animals into RC cyborgs?” By saying it isn’t a “toy” and emphasizing its educational applications, they’re distinguishing between frivolous and constructive uses of the tool. If you’re just messing around for entertainment, or if you have some malicious purpose (like a cyborg-roach-based bank heist), then it’s probably not okay. Turning animals into cyborgs is okay when the applications are constructive and educational: when students learn, when knowledge grows. This is a common response from scientists to questions of animal experimentation: to point at the benefits generated by the research. The distinction between frivolous “toys” and constructive uses might be clear enough, but as stated it’s only a rule of thumb. The harder question is how to distinguish the two in practice. One might be skeptical that it’s possible to state the ethical rule any more clearly than this. After all, horribly inhumane and unethical acts have been conducted in the name of science, so obviously science itself can’t serve as cover for doing whatever you want. The developers also point to high schools and educators mentoring students on their use of these techniques. Indeed, they seem to be marketing primarily to educational institutions aiming to buy RoboRoaches in bulk. In effect, this diffuses the ethical questions by putting responsibility on the institutions and educators overseeing their use. Unfortunately, this doesn’t give those institutions much of a guideline for making that decision themselves. It also somewhat spoils the DIY-ness of “backyard brains”. I do appreciate that they have a dedicated discussion of the ethics at stake! Although I agree that they don’t nail down the ethics questions with complete satisfaction (and they admit […]
December 1, 2014

AUTISM AND WAR CRIMES: TURING’S MORAL CHARACTER IN THE IMITATION GAME

Last night I attended a packed screening of The Imitation Game. My thoughts on the movie are below, but tl;dr: I thought the film was great. If you have any interest in mathematics, cryptography, or the history of computing you will love this film. But this isn’t just a movie for nerds. The drama of the wartime setting and the arresting performance from Cumberbatch make this film entertaining and accessible to almost everyone– despite the fact that it’s a period war drama with almost no action or romance and doesn’t pass the Bechdel test. Of course, as a philosopher I have questions and criticisms. But don’t let that confuse you: go see this film. Turning history’s intellectual heroes into media’s popular heroes is a trend I’d like to reinforce. Turing’s story is timely and central for understanding the development of our world. I’m happy to see his work receive the publicity and recognition it deserves. Turing is something of a hero of mine; I spent half my dissertation wrestling with his thoughts on artificial intelligence, and I’ve found a way to work him into just about every class I’ve taught for the last decade. I know many others feel just as passionately (or more!) about his life and work. I have been looking forward to this film for a long time and my expectations were high. I was not disappointed. The Oscar buzz around this film is completely appropriate. Spoilers will obviously follow. There are minor inaccuracies in the film: Knightley mispronounces Euler’s name; Turing’s paper is titled “Computing Machinery and Intelligence”, not “The Imitation Game”; and the bombe Turing designed (a British successor to the Polish bomba) was eventually named Victory, never Christopher. But I’m not so interested in that sort of critique. I’d instead like to talk about two subtle but important themes in the […]
October 16, 2014

OUR SOCIAL NETWORKS ARE BROKEN. HERE’S HOW TO FIX THEM.

1. You can’t really blame us for building Facebook the way we have. By “we” I mean we billion-plus Facebook users, because of course we are the ones who built Facebook. Zuckerberg Inc. might take all the credit (and profit) from Facebook’s success, but all the content and contacts on Facebook– you know, the part of the service we users actually find valuable– were produced, curated, and distributed by us: by you and me and our vast network of friends. So you can’t blame us for how things turned out. We really had no idea what we were doing when we built this thing. None of us had ever built a network this big and important before. The digital age is still mostly uncharted territory. To be fair, we’ve done a genuinely impressive job given what we had to work with. Facebook is already the digital home to a significant fraction of the global human population. Whatever you think of the service, its size is nothing to scoff at. The population of Facebook users today is about the same as the global human population just 200 years ago. Human communities of this scale are more than just rare: they are historically unprecedented. We have accomplished something truly amazing. Good work, people. We have every right to be proud of ourselves. But pride shouldn’t prevent us from being honest about these things we build– it shouldn’t make us complacent, or turn us blind to the flaws in our creation. Our digital social networks are broken. They don’t work the way we had hoped they would; they don’t work for us. This problem isn’t unique to Facebook, so throwing stones at only the biggest of silicon giants won’t solve it. The problem is with the way we are thinking about the task of […]
October 13, 2014

BRUNO LATOUR IS TALKING ABOUT GAIA

// A few weeks ago I saw Bruno Latour give a talk called “Gaia Intrudes” at Columbia. I’ve struggled with the term “Gaia” since I came across Lovelock’s Gaia Hypothesis while studying complex systems a few years ago. On the one hand, Lovelock is obviously correct that we can and should treat the (surface of the) Earth and its inhabitants as an interconnected system, whose parts (both living and nonliving) all influence each other. On the other hand, the term “Gaia” has a New Agey, pseudosciencey flavor (even if Lovelock’s discussion doesn’t) that makes me hesitant to use the term in my public discussions of complexity theory, and immediately skeptical when I see others use it. Since my skepticism seems to align with the consensus position in the sciences, I’ve never bothered to resolve my ambivalence about the term. And to be completely honest, while I admired Latour’s work (he’s mentioned in my profile!), going into this talk I was also a little skeptical of _his_ use of the term. I’ve been thinking pretty seriously about the theoretical tools required for understanding the relationship between an organism, its functional components, and its environment– what a colleague and I have been calling “the individuation problem”. As far as I can tell, not even the sciences are thinking about this problem systematically across the many domains and scales where it arises. That same week I had written a critique of Tegmark’s recent proposal for a physical theory of consciousness; my core critique centered on his failure to distinguish the problems of integration and individuation. So to hear that Latour was approaching the discussion using the vocabulary of Gaia made me apprehensive, if not outright disappointed. I was worried that he would just muddy the waters of an already fantastically difficult discussion, and that it […]
April 4, 2014

HUMAN CASTE SYSTEMS: REIFYING CLASS

// From the ongoing SA thread on Strangecoin. > Just out of curiosity, RA, when you discuss ideas like reifying the class structure by assigning people coloured buttons identifying their social class and when you advocate a system that would admittedly make it more difficult for poor people to buy food and basic necessities, are you making any kind of value judgement on the merits of such a system? It’s hard for me to reconcile ‘worried about hypothetical silent discrimination against cyborgs’ RA vs ‘likes the idea of clearly identifying poors with brown badges to more easily refuse to serve them’ RA. // I would only advocate for the idea if I thought it had a chance to change the social circumstances for the better. The reasoning is something like the following: 1) People are psychologically disposed to reasoning about community membership (identity), their status within those communities (influence), and how to engage those communities (culture/convention). This is what significant portions of their brains evolved to do. 2) People are not particularly disposed to reasoning about traditional economic frameworks (supply and demand, wealth, etc.), their status within those frameworks (class, inequality), and how to engage those frameworks (making sound economic decisions). They can do this, and the ones who do, do really well, but it’s hard; most people can’t, and they suffer because of it. 3) It would be easier for most people to do well in a system that emphasized transactions of the type people are typically good at reasoning about than in one that emphasized the types they are typically bad at reasoning about. 4) Therefore, we should prefer an economic framework that emphasizes reasoning of the former type and not the latter. I’m not saying this fixes all inequality and suffering, but it makes it easier for people to do things […]
March 30, 2014

FROM THE ARCHIVES, MY FIRST POST ON THE ATTENTION ECONOMY

// I was digging through the SomethingAwful archives and found my first essay on the attention economy, written on April 5th, 2011. At the time, Bitcoin had yet to experience its first bubble and was still trading below a dollar, and Occupy Wall Street was still five months in the future. If you don’t have access to the archives, the thread which prompted this first write-up was titled “No More Bitchin: Let’s actually create solutions to society’s problems!” Despite my reputation on that forum, I’m not interested in pop speculative futurism or idle technoidealism. I don’t think there’s an easy technological fix for our many difficult problems. But I do think that our technological circumstances have a dramatic impact on our social, political, and economic organizations, and that we can design technologies to cultivate human communities that are healthy, stable, and cooperative. The political and economic infrastructure we have for managing collective human action was developed at a time when individual rational agency formed the basis of all political theory, and in a networked digital age we can do much better. An attention economy doesn’t solve all the problems, but it provides tools for addressing problems that simply aren’t available with the infrastructure we have today. My discussion of the attention economy was aimed at discussing social organization at this level of abstraction, in the hope that taking this networked perspective on social action would reveal some of the tools necessary for addressing our problems. In the three years and multiple threads since that initial post, I’ve done research into the dynamics and organization of complex systems and taught myself some of the math and theory necessary for making the idea explicit and communicable. And in that time the field of data science has grown astronomically, making […]