March 30, 2014

STRANGECOIN: A NONLINEAR CURRENCY

In this post I sketch a proposal for a digital currency that works unlike other *coins that have recently become available. I’m calling it Strangecoin, both to highlight its uniqueness as a currency and as a reference to the strange attractor, a structure characteristic of certain nonlinear dynamical systems. What’s unique about Strangecoin?

- Strangecoin transactions can be nonzero sum. A Strangecoin transaction might result in both parties having more Strangecoin.
- Strangecoin transactions can be one-sided, conducted entirely by one party to the transaction.
- The rate of change of one’s Strangecoin balance is a more important indicator of economic influence than the balance itself.
- The optimal investment strategy in Strangecoin aims to stabilize one’s balance of Strangecoin.
- A universal account provides all users a basic Strangecoin income, effectively unlimited wealth, and direct feedback on the overall prosperity of the network.

I’ve only started thinking through the idea, and implementing it would take more technical expertise than I have alone. For instance, I’m not sure whether Strangecoin can be implemented as an extension of the bitcoin protocol, or whether some fundamentally new technology is required. If you know something about the technical details, I’d love to hear your thoughts. If you might know how to implement something like this, I’d love to help you try. But since I don’t know of anything else that works like this, this proposal is intended mostly to put the idea out there, in the hope of encouraging others to think in these directions.

Background and Motivation

If I give you a dollar for a burger, then I’ve lost a dollar and gained a burger, and you’ve gained a dollar and lost a burger. Assuming this was a fair trade (that dollars and burgers are of approximately equal value), then as a result of the […]
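To make the nonzero-sum point concrete, here is a minimal toy sketch (in Python) of a transaction after which both parties hold more currency than they started with. This is purely illustrative: the `couple` function, the `growth` parameter, and the idea of a network-funded bonus are assumptions I'm making for the sketch, not part of any actual Strangecoin specification.

```python
# Toy illustration only: a transaction in which both parties end up with more
# currency than they started with (i.e. nonzero sum). The `couple` function,
# the `growth` parameter, and the network-funded bonus are assumptions for
# this sketch, not part of any actual Strangecoin specification.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    balance: float

def couple(a: Account, b: Account, amount: float, growth: float = 0.1) -> None:
    """Move `amount` from a to b, then credit both parties a bonus issued by
    the network itself, so the transaction is positive-sum for both."""
    a.balance -= amount
    b.balance += amount
    bonus = growth * amount          # issued by the network, not by either party
    a.balance += bonus
    b.balance += bonus

alice = Account("alice", 100.0)
bob = Account("bob", 100.0)
couple(alice, bob, 10.0)
print(alice.balance, bob.balance)    # 91.0 111.0 -- the total supply grew by 2.0
```

The only point of the sketch is that the sum of balances is not conserved across a transaction, which is the basic sense in which Strangecoin is meant to be "nonlinear."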
March 23, 2014

A FIELD GUIDE FROM THE PRESENT ON ORGANISMS OF THE FUTURE

Contents: On organisms. On organisms of the present. On organisms of the future.

1. On Organisms

a. Organisms are persistent complex systems with functionally differentiated components engaged in a cooperative, dynamic pattern of activity.

b. Organisms typically[1] play a role as components of other organisms. Similarly, the components of organisms are typically themselves organisms. Organisms may have components at many different scales relative to other organisms.

c. The components of organisms can be widely distributed in space and time. Each component will typically play multiple, cascading roles for many different organisms at many different scales.

d. There are no general rules for identifying the components of an organism. It may be easier to identify the persistent organism itself than to identify its components or the roles they play.

e. The persistence of an organism consists in the persistent cooperation of its components. The organism just is this pattern of cooperation among components. This pattern may be observed without full knowledge of its components or the particular role they play. This resolves the apparent paradoxes in 1d.

f. Organisms develop over time, which is to say that the components of an organism may change radically in number and role over its lifetime. This development is sensitive to initial conditions and is subject to a potentially large number of constraints. Among these constraints are the frictions introduced by the cooperative activity itself.

g. The cooperation of the components of an organism is also constrained by components that are common to many of the organism’s other components. It is against the background of these common components that cooperation takes place. Common components typically constrain the cooperation of the components of many other organisms, and provide anchors for identifying the cooperative relationships among organisms as a community. For this reason, common components may be […]
February 21, 2014

PROJECT TANGO, *COINS, AND THE ATTENTION ECONOMY

About 20 years ago the music world underwent a digital conversion. Our tapes and vinyl records were systematically turned into strings of bits. This conversion made music portable and manipulable in a way it had never been before, and completely transformed our relationship to music. It’s just one of dozens of similar stories about the digital conversion we’ve experienced in so many quadrants of human life. We’ve spent the last few decades uploading some of the most significant aspects of our lives into their digital form: our social networks, our economic infrastructure, our education and communication channels. Despite this historic progress, the digital conversion is far from complete. The trend towards participatory access characteristic of digital conversion is most notably absent from our political and governing infrastructure, even in technologically rich countries where the conversion has otherwise been successful. The cohesion of space and its contents is another gap in the process of conversion, which Project Tango is beginning to address. Unifying objects in a digital space is an extremely important step in the process. Think about how important GPS and digital maps have been in guiding your behavior over the last few decades. That same utility will soon be available for all the spaces you occupy and for all the objects you encounter. But for all the progress we’ve made, little effort has gone into thinking about what we will use these digital technologies for. Without understanding the use cases which give these technologies context and meaning, a high resolution trail describing a person’s movement through space might appear unnecessarily invasive and foreign, even Orwellian. I want to provide a use case where the virtues of these technologies are hopefully clear. I’ve been talking about this context of use in terms of the attention economy. One of my early […]
February 17, 2014

GAMING COMMUNITIES AS EDUCATIONAL COMMUNITIES

Gaming communities as educational communities
Presented at MAPSES 2014 in Scranton, PA

I’m optimistic about massively open online education. There’s been a recent round of skepticism about low rates of completion and other difficulties with the current implementation of MOOCs, but I don’t find these numbers discouraging, for two reasons. First, I think it’s good if anyone is learning anything. 13% is a low completion rate, but these courses are sometimes enrolling 100,000 people and regularly have enrollments of around 50,000 students. 13% completion still means that thousands of people are completing these courses, and tens of thousands more are receiving at least some exposure to material they otherwise wouldn’t. I see that as an unqualified success. And second, we don’t have standards of comparison for how well these online courses should be performing, or how far we are from meeting those standards. The Collège de France and the European Graduate School have both been mentioned at this conference as operating on open principles, albeit at smaller scales. But scale matters when evaluating systems as complex as education. So I want to throw another example into the mix in the hopes that it stimulates some more creative ideas in this direction. I want to look at online gaming communities from an educational perspective. Modern strategy games, like Starcraft and Dota and League of Legends, are deep and difficult games with steep learning curves and extremely high skill ceilings. Performing well at these games requires both quick strategic thinking under pressure and impressive displays of manual dexterity. Starcraft in particular has become a national sport in Korea over the last decade, with professional leagues broadcasting tournaments on television and top performing players earning salaries, sponsorships, and fan followings comparable to top athletes in other sports. Over the last 3 years the […]
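For scale, here is the back-of-the-envelope arithmetic behind that first point; the enrollment figures are just the round numbers mentioned above, not data from any particular course.

```python
# Even at a 13% completion rate, courses enrolling tens of thousands of
# students still graduate thousands of them.
completion_rate = 0.13
for enrollment in (50_000, 100_000):
    print(f"{enrollment} enrolled -> {int(enrollment * completion_rate)} completions")
# 50000 enrolled -> 6500 completions
# 100000 enrolled -> 13000 completions
```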
February 13, 2014

THE LAZY ANIMISM IN EHRENREICH’S DEFENSE OF AGENCY

Barbara Ehrenreich has a new piece on agency in science that makes some serious mistakes and deserves careful treatment. I wanted to like the piece because she’s coming from a perspective I find attractive, but a correction of her mistakes ultimately undermines her view. It’s important for those of us interested in issues of organization and complexity to be clear about why her position is untenable. She opens her critique of “rationalist science” with a discussion of play that I’m quite sympathetic to, echoing some of David Graeber’s commentary in a companion article: So maybe carnival and ecstatic rituals serve no rational purpose and have no single sociological “function.” They are just something that people do, and, judging from Neolithic rock art depicting circle and line dances, they are something that people have done for thousands of years. The best category for such undertakings may be play, or exertion for the sheer pleasure of it. If that’s the case, then we have to ask why it has been so difficult for observers, especially perhaps white bourgeois Europeans, to recognize play as a time-honored category of experience. Ehrenreich isn’t talking about “play” in the sense of unstructured idle activity. She’s talking instead about celebration as a ritualized social event: “doing something together, something that was fun and sometimes ecstatic to the point of trance”. I think it’s important to distinguish these structured social rituals from “play”, since it’s often the case that the idle unstructured sort of play isn’t tolerated at social functions where ritualized repetition is central to the activity– try being a kid at a wedding or graduation and see how much fun you have. But that’s a minor complaint. I find the rituals of human celebration interesting too. I appreciate the article’s recognition that these behaviors don’t serve […]
January 19, 2014

RETHINKING MACHINES PART 2: THE MASTER ARGUMENT

// I’m laying out my Ph.D. thesis systematically on my blog in the run-up to my defense in July. In part 1, I introduced the “dual natures” theory of artifacts, which is a primary target of my critique. In this post, I’ll explicitly lay out the big-picture structure of my argument and define some key technical terms I’ll be using throughout the discussion. Each claim here requires an explanation and defense that will be given independently in subsequent posts until I’ve covered the whole argument. I don’t expect it to be clear or convincing in this form, but I’ve put references where necessary to motivate further exploration until I can provide more satisfying remarks. This post will mostly be reference material for guiding us through what follows, and I’ll return to it many times for context as we explain and justify its premises. I’ll also be updating the glossary here as new terms and concepts are introduced.

First, some terms introduced in part 1:

artifact: any product of human construction (including nonfunctional products, like art, waste, atmospheric carbon, etc.)
machine: any functional artifact (cars, hammers, bridges, etc.)
tool: any functional artifact whose functional character depends on human mental activity

The dual natures view of artifacts insists that all machines are tools: that the categories are both coextensive as a matter of fact and cointensive as a matter of metaphysical or conceptual analysis. I will argue, contra the dual natures view, that some machines are not tools, but instead are participants that deserve treatment other than the purely instrumental. My argument is structured according to the outline below.

1. Machines derive their functional natures from minds (and are therefore tools) in two primary ways: either through their use or their design. Design and use are semi-independent aspects […]
January 15, 2014

RETHINKING MACHINES PART 1: THE “DUAL NATURES” THEORY OF ARTIFACTS

/* I recently completed the first draft of my Ph.D. thesis in philosophy. As I prepare and polish for a final draft and defense in July I’ll be posting a series of articles that systematically present the thesis on my blog. I’m including full citations and sources from my collection when available, so hopefully others find this as useful as I will. I started this project in 2007 with the working title Rethinking machines: artificial intelligence beyond the philosophy of mind. The core of the thesis is that the primary philosophical challenge presented by artificial intelligence pertains not to our understanding of the mind, as it was overwhelmingly treated by philosophers in the classic AI debates of the ’70s and ’80s (see Dreyfus, Searle, Dennett, etc.), but instead pertains to our understanding of technology, in the sense of the non-mindlike machines with which we share existence (see Latour). Half of my dissertation involves a unique interpretation of Turing’s discussion of “fair play for machines,” an idea he develops in the course of his treatment of thinking machines, which I argue underlies his approach to artificial intelligence and represents the alternative view I’m endorsing. I’ve posted aspects of my interpretation of Turing in other posts on this blog if you’d like a preview of the more systematic presentation to come. The other half of my thesis is a critique of the so-called “dual natures” view of artifacts. This is where my thesis and these blog posts will begin. */

Artifacts are material, ordinary objects, and as such have physical descriptions that exhaustively explain their physical attributes and behaviors. Artifacts are also instruments of human design and creation, and as such also admit of descriptions in intentional terms that describe their functional nature in relation to […]
December 14, 2013

FAIR PLAY FOR MACHINES: CHOMSKY’S MISREADING OF TURING, AND WHY IT MATTERS.

PART 1: CHOMSKY’S MISREADING OF TURING

In this interview, Chomsky reads the quote from Turing (1950): “I believe the question ‘can machines think’ to be too meaningless to deserve discussion” (at [10:05]) as a claim about the improbability of AI. He interprets this as if Turing were claiming that the issue of AI and thinking machines is irrelevant or uninteresting. This is a deliberately misleading interpretation. Turing obviously cares a lot about the issue of thinking machines, as evidenced by, for instance, the letter he sent his friends “in distress”. +Jay Gordon clarifies Chomsky’s views on Turing as follows: Chomsky states that Turing states that whether or not machines can think is a question of decision, not a question of fact, akin to whether an airplane can fly. Chomsky actually cites Turing verbatim on this issue in his book Powers and Prospects (p 37ff -ed.) I’m not sure I appreciate the distinction drawn between a question of decision and a question of fact, or the suggestion that Turing treats the question of thinking machines as the former instead of the latter. Turing recognized it as a fact that in his time people refused to accept the proposition that machines can think. But he also recognized that by the turn of the century these prejudices against machines would change, and that people would speak more freely of thinking machines. And getting from the former to the latter state of affairs isn’t a matter of any one decision; Turing thought it was a matter of social change, on par with the reversal of attitudes towards homosexuality, both of which unfortunately came too late for his time. Turing says the question “can machines think” isn’t helpful in this process because it invokes conceptual and prejudicial biases about “thinking” and “machines” that themselves can’t […]
November 30, 2013

A WORLD RUN BY SOFTWARE

A few days ago I reshared this talk from Balaji Srinivasan, along with my initial comments defending the position against what I took to be a superficial rejection from +David Brin and others. It was my first viewing of the lecture, and my comments were born of the passion that comes from having considered and argued for similar conclusions over the last few years, against those I felt were resisting, without due consideration, the alternative framework BSS was suggesting. But there is always room for critical reflection, and now that I’ve had a few days to digest the talk I’d like to write a more considered response. I am utterly convinced that a world run by software can be more fair, inclusive, and sustainable than any mode of organization the industrial age had to offer. Nevertheless, BSS says precious little in the talk about what such a world would look like, or what reasons we have for believing the conclusion to be true. BSS’s argument is largely a critique of the problems and constraints of the existing system, with the goal of motivating interest in an alternative. I agree with much of his critique, especially his observation that people are already eagerly fleeing industrial age “paper” technologies in favor of digital alternatives. But because the talk is directed at a Silicon Valley audience, it might give the impression that a world run by software would benefit primarily those privileged few who are already benefiting from our nascent digital age, as yet another way to widen the gap between the wealthy and the rest. I think this is a misleading impression. A positive story that constructively described how a world run by software would operate would go a long way towards helping people imagine it as a real and plausible alternative, with distinct […]
November 25, 2013

TOLERATING EXTREME POSITIONS

Last time I explained that the instrumental value of extremism lies not in realizing extreme ends, but rather in framing the limits of what is considered “reasonable” or “moderate” discussion. The upshot is that extremist views play an important organizing role in social discourse, whether or not the extremists themselves are successful at realizing their ends. People tend to decry extremism and urge moderation in its place; but a careful understanding of the dynamics of social organization might suggest better strategies for tolerating extreme positions. First, let’s be precise about our terms. I’m using a very simple model of opinion dynamics, specifically the Deffuant-Weisbuch (DW) bounded confidence model from 2002; the figures below are taken from the paper linked here. A more complex and interesting model can be found in the Hegselmann-Krause (HK) model and its extensions, but the simpler model is all we need for this post. The DW model describes a collection of agents with some opinions, each held with some degree of confidence. Individuals may have some impact on each other’s beliefs, adjusting them slightly in one direction or another. The less confident I am in my beliefs, the more room I have to move in one direction or the other, depending on the beliefs and confidence of the agents I interact with. On this model, “extremists” are people who a) hold minority opinions, and b) are very confident about those opinions. Extremists aren’t likely to change their beliefs, but can be influential in drawing others towards their positions, especially when there is a high degree of uncertainty regarding those beliefs generally. In fact, that’s exactly what the DW model shows. In Figure 5, the y axis represents the range of opinions people might hold, centered on 0. The extremists hold their positions with very low uncertainty at […]
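For readers who want to play with the dynamics themselves, here is a minimal bounded-confidence simulation in the spirit of the DW model. The population sizes, uncertainties, and convergence rate are values I've picked for illustration, not the parameters from the 2002 paper, and the update rule is the basic bounded-confidence version rather than the paper's relative-agreement variant.

```python
# Minimal sketch of a Deffuant-Weisbuch-style bounded confidence model.
# Each agent holds an opinion in [-1, 1] and an uncertainty; an agent only
# moves toward an interlocutor whose opinion falls within its own uncertainty.
# Extremists sit at the ends of the axis with very low uncertainty, so they
# pull nearby moderates without being pulled back. Parameters are illustrative.
import random

MU = 0.3          # convergence rate: how far an agent moves per encounter
STEPS = 50_000

agents = [{"x": random.uniform(-1, 1), "u": 0.8} for _ in range(200)]           # uncertain moderates
agents += [{"x": random.choice([-1.0, 1.0]), "u": 0.05} for _ in range(20)]      # confident extremists

for _ in range(STEPS):
    a, b = random.sample(agents, 2)
    diff = b["x"] - a["x"]
    if abs(diff) < a["u"]:        # b's opinion is within a's confidence bound
        a["x"] += MU * diff
    if abs(diff) < b["u"]:        # a's opinion is within b's confidence bound
        b["x"] -= MU * diff

polarized = sum(abs(agent["x"]) > 0.8 for agent in agents) / len(agents)
print(f"fraction of agents within 0.2 of an extreme: {polarized:.2f}")
```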
November 9, 2013

STEERING THE CROWD

I have been completely enamored with +Jon Kleinberg’s keynote address from HCOMP2013. It is the first model of human computation in field-theoretic terms I’ve encountered, and it is absolutely brilliant. Kleinberg is concerned with badges, like those used on Foursquare, Coursera, StackOverflow and the like. The badges provide an incentive to complete tasks that the system wants users to perform; they gamify the computational goals so people are motivated to complete the tasks. Kleinberg’s paper provides a model for understanding how these incentives influence behavior. In this model, agents can act in any number of ways. If we consider StackOverflow, users might ask a question, answer a question, vote on questions and answers, and so on. They can also do something else entirely, like wash their cars. Each user’s activity is represented as a vector in a high dimensional space: one dimension for each action they might perform. In Figure 2, they consider a two dimensional sample of that action space, with distinct actions on the x and y axes. The dashed lines represent badge thresholds; completing 15 actions of type A1 earns you a badge, as does completing 10 actions of type A2. On this graph, Kleinberg draws arrows the length and orientation of which represent the optimal decision policies for users as they move through this action space. Users begin with some preferences for taking some actions over others, and the model assumes that the badges have some value for the users. The goal of the model is to show how the badges augment user action preferences as users approach the badge thresholds. Figure 2 shows that a user near the origin has no strong incentive towards actions of either type. But as one starts accumulating actions and nearing a badge, the optimal policy changes. When I have 12 actions of […]
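To give a rough feel for what “steering” means here, below is a toy version of the incentive structure in code. The numbers, the greedy choice rule, and the way the badge bonus is computed are all assumptions of mine for the illustration; Kleinberg’s model derives optimal policies rather than using a myopic rule like this one.

```python
# Toy rendering of the badge-incentive idea: an agent has baseline preferences
# over two action types, and an approaching badge threshold adds extra marginal
# value to actions that count toward it. A myopic agent picking the action with
# the highest marginal value will "steer" toward the nearest badge.
# The numbers and the greedy rule are illustrative assumptions, not the model
# from Kleinberg's paper.

BADGES = {"A1": (15, 5.0), "A2": (10, 5.0)}   # action type -> (threshold, badge value)
BASE_PREF = {"A1": 1.0, "A2": 0.9}            # baseline value of each action type

def marginal_value(counts, action):
    threshold, badge_value = BADGES[action]
    remaining = threshold - counts[action]
    value = BASE_PREF[action]
    if remaining > 0:
        value += badge_value / remaining      # the badge looms larger as it gets closer
    return value

counts = {"A1": 12, "A2": 3}                  # e.g. a user who already has 12 A1 actions
for _ in range(6):
    action = max(BASE_PREF, key=lambda a: marginal_value(counts, a))
    counts[action] += 1
    print(action, counts)
# The first three choices go to A1 (the badge at 15 dominates); once that badge
# is earned, the incentive shifts toward A2.
```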
November 5, 2013

WHAT IS A COMPUTER?

+Yonatan Zunger recently reshared a YouTube clip of the Writer, a 200-year-old programmable automaton that can write arbitrary words on a card. In the comments, someone claimed that the machine wasn’t technically a “computer” because it wasn’t computing anything. But there’s no mistake; the automaton is certainly a computer, and it is performing a computation. Computation is defined in terms of the possible performances of a Turing machine. A Turing machine executes a formally specified function: given some starting state, it executes a series of procedures (a “program”) that ultimately yields some final state. Any system that is formally equivalent to a Turing machine thus described is a computer. The Writer automaton is a computer in this sense. It takes as input the set of characters on the programmable disk, and through a set of finite procedures (rotations of the cam) the machine produces a set of outputs, which involves the performance of writing words on a card. That’s an act of computation; that doll is a computer. Not only is the automaton a computer, but any process that can be formally defined in terms of a set of procedures taking an initial state to a final state can be called a “computation”. Whatever machine carries out those procedures is a “computer”. For instance, consider the water-boiling computer:

Initial state: liquid water
Final state: gaseous water
Program:
1. Put liquid water in a pot sufficiently close to Earth.
2. Put the pot on a working stove.
3. Light the stove.
4. Bring the water to 100 degrees Celsius.

Properly executing the program will compute the gaseous-water final state from the liquid-water initial state. If I’m the one executing this program, then for that time I’m a water-boiling computer. This computer only handles a […]
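Rendered as code, the point is just that a computation is a finite procedure taking an initial state to a final state, and whatever executes the procedure counts as the computer for that run. Here is a minimal sketch; the state representation and step functions are of course my own illustrative choices, not anything canonical.

```python
# Sketch of the point above: a computation is a finite procedure mapping an
# initial state to a final state, and whatever executes the steps -- a
# stove-tending human included -- is the computer for that program.
# The state keys and step functions are illustrative choices only.

def run(program, state):
    """Execute each step of the program in order, threading the state through."""
    for step in program:
        state = step(state)
    return state

# The water-boiling program from the post, written as state transformations.
water_boiling_program = [
    lambda s: {**s, "location": "pot near Earth"},   # 1. put water in a pot
    lambda s: {**s, "on_stove": True},               # 2. put the pot on a stove
    lambda s: {**s, "stove_lit": True},              # 3. light the stove
    lambda s: {**s, "temp_c": 100, "phase": "gas"},  # 4. bring the water to 100 C
]

initial_state = {"phase": "liquid", "temp_c": 20}
final_state = run(water_boiling_program, initial_state)
print(final_state["phase"])   # "gas" -- the computed final state
```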
April 4, 2014

HUMAN CASTE SYSTEMS: REIFYING CLASS

// From the ongoing SA thread on Strangecoin.

> Just out of curiosity, RA, when you discuss ideas like reifying the class structure by assigning people coloured buttons identifying their social class and when you advocate a system that would admittedly make it more difficult for poor people to buy food and basic necessities, are you making any kind of value judgement on the merits of such a system? It’s hard for me to reconcile ‘worried about hypothetical silent discrimination against cyborgs’ RA vs ‘likes the idea of clearly identifying poors with brown badges to more easily refuse to serve them’ RA. //

I would only advocate for the idea if I thought it had a chance to change the social circumstances for the better. The reasoning is something like the following:

1) People are psychologically disposed to reasoning about community membership (identity), their status within those communities (influence), and how to engage those communities (culture/convention). This is what significant portions of their brains evolved to do.

2) People are not particularly disposed to reasoning about traditional economic frameworks (supply and demand, wealth, etc.), their status within those frameworks (class, inequality), and how to engage those frameworks (making sound economic decisions). They can do this, and the ones who do, do really well, but it’s hard, and most people can’t and suffer because of it.

3) It would be easier for most people to do well in a system that emphasized transactions of the type people are typically good at reasoning about than in one that emphasized transactions they are typically bad at reasoning about.

4) Therefore, we should prefer an economic framework that emphasizes reasoning of the former type and not the latter.

I’m not saying this fixes all inequality and suffering, but it makes it easier for people to do things […]
March 30, 2014

FROM THE ARCHIVES, MY FIRST POST ON THE ATTENTION ECONOMY

// I was digging through the SomethingAwful archives and found my first essay on the attention economy, written on April 5th, 2011. At the time, Bitcoin had yet to experience its first bubble and was still trading below a dollar, and Occupy Wall Street was still five months in the future. If you don’t have access to the archives, the thread which prompted this first write-up was titled “No More Bitchin: Let’s actually create solutions to society’s problems!” Despite my reputation on that forum, I’m not interested in pop speculative futurism or idle technoidealism. I don’t think there’s an easy technological fix for our many difficult problems. But I do think that our technological circumstances have a dramatic impact on our social, political, and economic organizations, and that we can design technologies to cultivate human communities that are healthy, stable, and cooperative. The political and economic infrastructure we have for managing collective human action was developed at a time when individual rational agency formed the basis of all political theory, and in a networked digital age we can do much better. An attention economy doesn’t solve all the problems, but it provides tools for addressing problems that simply aren’t available with the infrastructure we have today. My treatment of the attention economy was aimed at discussing social organization at this level of abstraction, with the hope that taking this networked perspective on social action would reveal some of the tools necessary for addressing our problems. In the three years and multiple threads since that initial post, I’ve done research into the dynamics and organization of complex systems and taught myself some of the math and theory necessary for making the idea explicit and communicable. And in that time the field of data science has grown astronomically, making […]