Scott Alexander: Should AI be Open? Or are we worried that AI will be so powerful that someone armed with AI is stronger than the government? Think about this scenario for a moment. If the government notices someone getting, say, a quarter as powerful as it is, it'll probably take action. So an AI user isn't likely to overpower the government unless their AI can become powerful enough to defeat the US military too quickly for the government to notice or respond to. But if AIs can do that, we're back in the intelligence explosion/fast takeoff world where OpenAI's assumptions break down. If AIs can go from zero to more-powerful-than-the-US-military in a very short amount of time while still remaining well-behaved, then we actually do have to worry about Dr. Evil and we shouldn't be giving him all our research. // I've been meaning to write a critical take on the OpenAI project. I'm glad Scott Alexander did this first, because it allows me to start by pointing out how completely terrible the public discussion on AI is at the moment. We're thinking about AIs as if they were Super Saiyan warriors with a "power level" of some explicit quantity, as if such a number would determine the future success of a system. This is, for lack of a better word, a completely bullshit adolescent fantasy. For instance, there's no question that the US government vastly overpowers ISIS and other terrorist organizations in strength, numbers, and strategy. Those terrorist groups nevertheless represent a persistent threat to global stability despite the radical asymmetry of power— or rather, precisely because of the ways we've abused this asymmetry. "Power level" here does not determine the trouble and disruption a system can cause; comparatively "weak" actors can nevertheless leave dramatic marks on history. Or […]
Andreas Schou writes: +Daniel Estrada finds this unnecessarily reductive and essentialist, and argues for a quacks-like-a-duck definition: if it does a task which humans do, and effectively orients itself toward a goal, then it's "intelligence." After sitting on the question for a while, I think I agree — for some purposes. If your purpose is to build a philosophical category, "intelligence," which at some point will entitle nonhuman intelligences to be treated as independent agents and valid objects of moral concern, reductive examination of the precise properties of nonhuman intelligences will yield consistently negative results. Human intelligence is largely illegible and was not, at any point, "built." A capabilities approach which operates at a higher level of abstraction will flag the properties of a possibly-legitimate moral subject long before a close-to-the-metal approach will. (I do not believe we are near that point, but that's also beyond the scope of this post.) But if your purpose is to build artificial intelligences, the reductive details matter in terms of practical ontology, but not necessarily ethics: a capabilities ontology creates a giant, muddy categorical mess which prevents engineers from distinguishing trivial parlor tricks like Eugene Goostman from meaningful accomplishments. The underspecified capabilities approach, without particulars, simply hands the reins over to the part of the human brain which draws faces in the clouds. Which is a problem. Because we are apparently built to greedily anthropomorphize. Historically, humans have treated states, natural objects, tools, the weather, their own thoughts, and their own unconscious actions as legitimate "persons." (Seldom all at the same time, but still.)
If we assigned the trait "intelligence" to every category which we had historically anthropomorphized, that would leave us treating the United States, Icelandic elf-stones, Watson, Zeus, our internal models of other peoples' actions, and Ouija boards as being "intelligent." Which […]
In a recent article, Adam Elkus argues two points: 1) Drawing attention to an issue doesn't necessarily solve it. 2) Drawing attention might make things worse. For these reasons, Elkus argues against what he calls "tragedy hipsterism": the "endless castigation of the West for sins and imperfections" without offering anything constructive. He says, "Awareness-raising is only useful if it is somehow necessary for the instrumental process of achieving the desired aim. In many cases, it is not and is in fact an obstacle to that aim." I think this is completely mistaken, both about the utility of castigation and, more generally, about the role of attention in shaping social dynamics. Consider, for instance, a crying baby. Crying doesn't solve any problem on its own. If an infant is hungry, crying won't make food magically appear. At best, crying gets an adult to acquire food for the baby— but not necessarily so. The adult could easily ignore the baby, or misinterpret the cry as triggered by something other than hunger. Typically, an adult will feed the baby whether or not it cries, which renders the crying itself completely superfluous. And crying can be dangerous! In the wild, crying newborns tend to attract predators looking for an easy meal. On a plane, crying newborns create social animosity that might threaten the safety of the newborn and their family in other ways. Crying doesn't always help, and it often makes things worse. So on Elkus' argument, crying is actually an obstacle to the infant's well-being. If babies only understood the futility of crying, perhaps they'd be more effective at realizing their goals! Of course, this argument is ridiculous. Crying isn't meant to solve problems directly. In fact, crying is usually issued from a place of helplessness: the inability to realize one's […]
I'd invite everyone to imagine the human world as it existed before the invention of money. Prior to money, people engaged in cooperative behaviors for a variety of non-financial reasons (family, love, adventure, etc.). But populations eventually grew too big to support the network with such slow, noisy transactions. Early agricultural societies invented money to help everyone collectively keep better track of how all their valuables were distributed. At the time, it would have been perfectly sensible to wonder about the complications that money would bring. "What if you're in a situation where you need help, but you don't have enough money to get anyone to help you? Seems like a lot of people could get the short end of the stick." Of course, this worry would have been exactly correct. There are massive problems with the distribution of wealth and resources that comes with money. These problems are persistent; we still don't know how to deal with them, and they are worse than ever before. Nevertheless, money was the critical coordinating infrastructure that (more or less) set up the human population to flourish over the last 10,000 years or so. It was the tool that built the human population as it exists today. I don't like money. The human population today is fat, dirty, wasteful, uncoordinated in distributing resources, and ineffective at exerting global, targeted control that does anything but kill people. These failures have piled up to the point that they legitimately pose widespread, calamitous dangers to large human populations and important cultural centers. Money is currently in no position to resolve the problems we face; if anything, it's made them virtually intractable. We're in an analogous situation to the early agriculturalists: we need a new tool. Attention is our new tool; the attention economy our new coordinating […]
There are two kinds of robot movies. The first treats robots as a spectacle. Robots in spectacle movies justify their existence by being badass and doing badass things. Sometimes spectacle robots work for the good guys (Pacific Rim, Big Hero 6). Sometimes they function as classic movie monsters (The Terminator, The Matrix sequels), putting robots in the same monster family as zombies and Frankenstein, sources with which they share many tropes. But usually, spectacle robots serve as both heroes and villains simultaneously (Terminator 2, Transformers, Robocop, Avengers 2). Presenting robots in both positive and negative roles allows spectacle movies to remain neutral on their nature. Robots can be a threat but they can also be a savior, so there's no motivation to inquire deeply into the nature of robots as such. In effect, spectacle movies take the presence of robots for granted, and so reinforce our default presumptions: that robots exist for human use and entertainment. Robot spectacle movies can be entertaining but they tend to be shallow, and plenty of them are just plain boring (Real Steel, the animated Robots). Apart from functional novelties that advance the plot or (more likely) set up a slapstick gag, robot spectacle movies don't bother to reflect on the robot's experience of the world or on what that experience might reveal about our human condition. The Terminator even provides the audience with glimpses of his heads-up display without hinting at the homunculus paradoxes it implies. Because once that robot's function as a ruthless killing machine is established, the only question left is how to deal with it— a challenge to be met by the film's human protagonists in an otherwise thoroughly conventional narrative. In spectacle movies, the robot is merely the pretense for telling that human story, another technological obstacle for humanity to overcome. The second […]
This line of thinking traces to Dreyfus' What Computers Can't Do, and specifically to his reading of Heidegger's care structure in Being and Time. Dreyfus' views gained popularity during the first big AI wave and successfully put a lid on a lot of the hype around AI. I would say Dreyfus' critiques are partly responsible for the terminological shift towards "machine learning" over AI, and also for the shifted focus on robotics and embodied cognition throughout the 90s. https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence But Dreyfus' critiques don't really have much purchase anymore, and I'm surprised to see Sterling dusting them off. It's hard to say that a driverless car doesn't "care" about the conditions on the road; literally all its sensors and equipment are tuned to careful and persistent monitoring of road conditions. It remains in a ready state of action, equipped to interpret and respond to the world as a fully engaged participant. It is hard to read such a machine as a lifeless formal symbol manipulator. Haraway said it best: our machines are disturbingly lively, and we ourselves frighteningly inert. I think +Bruce Sterling underappreciates just how well we do understand the persistent complexities of biological organization. Driverless cars might be clunky and unreliable, but they are also orders of magnitude less complex than even a simple organism. The difference is more quantitative than qualitative, and is by no means mysterious or poorly understood. In a biological system, functional integration happens simultaneously at multiple scales; in a vehicle it might happen at two or three at most. This low organizational resolution makes it easier to see the structural inefficiencies and design choices in a technological system. But this isn't a rule for all technology. Software in particular isn't subject to such design constraints. This is why we see neural nets making huge advances […]
+John Baez worries that +Backyard Brains dodges the hard questions in their ethics statement. I'm not sure they entirely dodge the ethics question, "when is it okay to turn animals into RC cyborgs?" By saying it isn't a "toy" and emphasizing its educational applications, they're distinguishing between frivolous and constructive uses of the tool. If you're just messing around for entertainment, or if you have some malicious purpose (like a cyborg-roach-based bank heist), then it's probably not okay. Turning animals into cyborgs is okay when the applications are constructive and educational: when students learn, when knowledge grows. This is a common response from scientists to questions of animal experimentation: to point at the benefits generated by the research. The distinction between frivolous "toys" and constructive uses might be clear enough, but as stated it's only a rule of thumb. The harder question is how to distinguish the two. One might be skeptical that it's possible to state the ethical rule any more clearly than this. After all, horribly inhumane and unethical acts have been conducted in the name of science, so obviously science itself can't be cover for doing whatever you want. The developers also point to high schools and educators mentoring students on their use of these techniques. Indeed, they seem to be marketing primarily to educational institutions aiming to buy RoboRoaches in bulk. In effect, this diffuses the ethical questions by putting responsibility on the institutions and educators overseeing their use. Unfortunately, this doesn't give those institutions much of a guideline for making that decision themselves. It also somewhat spoils the DIY-ness of "backyard brains". I do appreciate that they have a dedicated discussion of the ethics at stake! Although I agree that they don't nail down the ethics questions with complete satisfaction (and they admit […]
Last night I attended a packed screening of The Imitation Game. My thoughts on the movie are below, but tl;dr: I thought the film was great. If you have any interest in mathematics, cryptography, or the history of computing you will love this film. But this isn't just a movie for nerds. The drama of the wartime setting and the arresting performance from Cumberbatch make this film entertaining and accessible to almost everyone— despite the fact that it's a period war drama with almost no action or romance and doesn't pass the Bechdel test. Of course, as a philosopher I have questions and criticisms. But don't let that confuse you: go see this film. Turning history's intellectual heroes into media's popular heroes is a trend I'd like to reinforce. Turing's story is timely and central for understanding the development of our world. I'm happy to see his work receive the publicity and recognition it deserves. Turing is something of a hero of mine; I spent half my dissertation wrestling with his thoughts on artificial intelligence, and I've found a way to work him into just about every class I've taught for the last decade. I know many others feel just as passionately (or more!) about his life and work. I have been looking forward to this film for a long time and my expectations were high. I was not disappointed. The Oscar buzz around this film is completely appropriate. Spoilers will obviously follow. There are minor inaccuracies in the film: Knightley mispronounces Euler's name; Turing's paper is titled "Computing Machinery and Intelligence", not "The Imitation Game"; the British bombe (a descendant of the Polish bomba) was eventually named Victory, never Christopher. But I'm not so interested in that sort of critique. I'd instead like to talk about two subtle but important themes in the […]
1. You can't really blame us for building Facebook the way we have. By "we" I mean we billion-plus Facebook users, because of course we are the ones who built Facebook. Zuckerberg Inc. might take all the credit (and profit) from Facebook's success, but all the content and contacts on Facebook— you know, the part of the service we users actually find valuable— was produced, curated, and distributed by us: by you and me and our vast network of friends. So you can't blame us for how things turned out. We really had no idea what we were doing when we built this thing. None of us had ever built a network this big and important before. The digital age is still mostly uncharted territory. To be fair, we've done a genuinely impressive job given what we had to work with. Facebook is already the digital home to a significant fraction of the global human population. Whatever you think of the service, its size is nothing to scoff at. The population of Facebook users today is about the same as the global human population just 200 years ago. Human communities of this scale are more than just rare: they are historically unprecedented. We have accomplished something truly amazing. Good work, people. We have every right to be proud of ourselves. But pride shouldn't prevent us from being honest about these things we build— it shouldn't make us complacent, or turn us blind to the flaws in our creation. Our digital social networks are broken. They don't work the way we had hoped they would; they don't work for us. This problem isn't unique to Facebook, so throwing stones at only the biggest of silicon giants won't solve it. The problem is with the way we are thinking about the task of […]
// A few weeks ago I saw Bruno Latour give a talk called "Gaia Intrudes" at Columbia. I've struggled with the term "Gaia" since I came across Lovelock's Gaia Hypothesis while studying complex systems a few years ago. On the one hand, Lovelock is obviously correct that we can and should treat the (surface of the) Earth and its inhabitants as an interconnected system, whose parts (both living and nonliving) all influence each other. On the other hand, the term "Gaia" has a New Agey, pseudosciencey flavor (even if Lovelock's discussion doesn't) that makes me hesitant to use the term in my public discussions of complexity theory, and immediately skeptical when I see others use it. Since my skepticism seems to align with the consensus position in the sciences, I've never bothered to resolve my ambivalence about the term. And to be completely honest, while I admired Latour's work (he's mentioned in my profile!), going into this talk I was also a little skeptical of _his_ use of the term. I've been thinking pretty seriously about the theoretical tools required for understanding the relationship between an organism, its functional components, and its environment, what I have been calling "the individuation problem". As far as I can tell, not even the sciences are thinking about this problem systematically across the many domains and scales where it arises. That same week I had written a critique of Tegmark's recent proposal for a physical theory of consciousness; my core critique centered on his failure to distinguish the problems of integration and individuation. So to hear that Latour was approaching the discussion using the vocabulary of Gaia made me apprehensive, if not outright disappointed. I was worried that he would just muddy the waters of an already fantastically difficult discussion, and that it […]
// From the ongoing SA thread on Strangecoin. > Just out of curiosity, RA, when you discuss ideas like reifying the class structure by assigning people coloured buttons identifying their social class and when you advocate a system that would admittedly make it more difficult for poor people to buy food and basic necessities, are you making any kind of value judgement on the merits of such a system? It's hard for me to reconcile "worried about hypothetical silent discrimination against cyborgs" RA vs "likes the idea of clearly identifying poors with brown badges to more easily refuse to serve them" RA. // I would only advocate for the idea if I thought it had a chance to change the social circumstances for the better. The reasoning is something like the following: 1) People are psychologically disposed to reason about community membership (identity), their status within those communities (influence), and how to engage those communities (culture/convention). This is what significant portions of their brains evolved to do. 2) People are not particularly disposed to reason about traditional economic frameworks (supply and demand, wealth, etc.), their status within those frameworks (class, inequality), and how to engage those frameworks (making sound economic decisions). They can do this, and the ones that do, do really well, but it's hard, and most people can't and suffer because of it. 3) It would be easier for most people to do well in a system that emphasized transactions of the type that people are typically good at reasoning about than in one built on transactions they are typically bad at reasoning about. 4) Therefore, we should prefer an economic framework that emphasizes reasoning of the former and not the latter type. I'm not saying this fixes all inequality and suffering, but it makes it easier for people to do things […]
// I was digging through the SomethingAwful archives and found my first essay on the attention economy, written on April 5th, 2011. At the time, Bitcoin had yet to experience its first bubble and was still trading below a dollar, and Occupy Wall Street was still five months in the future. If you don't have access to the archives, the thread which prompted this first write-up was titled "No More Bitchin: Let's actually create solutions to society's problems!" Despite my reputation on that forum, I'm not interested in pop speculative futurism or idle technoidealism. I don't think there's an easy technological fix for our many difficult problems. But I do think that our technological circumstances have a dramatic impact on our social, political, and economic organizations, and that we can design technologies to cultivate human communities that are healthy, stable, and cooperative. The political and economic infrastructure we have for managing collective human action was developed at a time when individual rational agency formed the basis of all political theory, and in a networked digital age we can do much better. An attention economy doesn't solve all the problems, but it provides tools for addressing problems that simply aren't addressable with the infrastructure we have available today. My discussion of the attention economy was aimed at discussing social organization at this level of abstraction, with the hope that taking this networked perspective on social action would reveal some of the tools necessary for addressing our problems. In the three years and multiple threads since that initial post, I've done research into the dynamics and organization of complex systems and taught myself some of the math and theory necessary for making the idea explicit and communicable. And in that time the field of data science has grown astronomically, making […]
The instant you hear a cellphone ring, your brain reacts in a unique way— if the ringtone matches that of your own phone.