May 19, 2012

RESHARED POST FROM JORDAN PEACOCK

The charts in the link are beautiful. http://io9.com/5911520/a-chart-that-reveals-how-science-fiction-futures-changed-over-time Jordan Peacock originally shared this post: A Chart that Reveals How Science Fiction Futures Changed Over Time io9 In the 1900s and the 1980s, there were huge spikes in near-future science fiction. What do these eras have in common? Both were times of rapid technological change. In the 1900s you begin to see the widespread use of telephones, cameras, automobiles (the Model T came out in 1908), motion pictures, and home electricity. In the 1980s, the personal computer transformed people’s lives. In general, the future got closer at the end of the twentieth century. You can see a gradual trend in this chart where after the 1940s, near-future SF grows in popularity. Again, this might reflect rapid technological change and the fact that SF entered mainstream popular culture. The future is getting farther away from us right now. One of the only far-future narratives of the 1990s was Futurama. Then suddenly, in the 2000s, we saw a spike in far-future stories, many of them about posthuman, postsingular futures. It’s possible that during periods of extreme uncertainty about the future, as the 00s were in the wake of massive economic upheavals and 9/11, creators and audiences turn their eyes to the far future as a balm. A Chart that Reveals How Science Fiction Futures Changed Over Time
May 19, 2012

RESHARED POST FROM JON LAWHEAD

Jon Lawhead originally shared this post: This is Yaneer Bar-Yam’s “A Mathematical Theory of Strong Emergence Using Multi-Scale Variety.” It, along with the other paper of his I just posted, is going to turn out to be one of the most significant papers of the 21st century. I would bet money on it. Integrating the insight in these two papers into contemporary philosophy of science (and expanding on them) is one of the central pillars of my overall professional project. #complexitytheory is the next big scientific paradigm shift. All the pieces are out there now (these two papers are two of them); we just need to put them all together into a unified, coherent narrative. The first person/people to do that will go down in history as being the Darwin of the 21st century. I’ll race you. #science #emergence #complexsystems #selforganization http://www.necsi.edu/research/multiscale/MultiscaleEmergence.pdf
May 19, 2012

RESHARED POST FROM KEVIN CLIFT

+Christine Paluch Kevin Clift originally shared this post: Subway Maps Converge Mathematically There are mathematical similarities between subway/underground systems that have been allowed to grow in response to urban demand, even though they may not have been planned to be similar. Understand those principles, and one might “make urbanism a quantitative science, and understand with data and numbers the construction of a city,” said statistical physicist Marc Barthelemy of France’s National Center for Scientific Research. More here: http://www.wired.com/wiredscience/2012/05/subway-convergence/ Paper: http://rsif.royalsocietypublishing.org/content/early/2012/05/15/rsif.2012.0259 Sample of subway network structures from (clockwise, top left) Shanghai, Madrid, Moscow, Tokyo, Seoul and Barcelona. Image: Roth et al./JRSI
May 18, 2012

#COMPLEXITY SCIENTISTS ARE ALREADY WATERING…

#complexity scientists are already watering at the mouth for #exascale computing. This is a fabulous demonstration of what they can already do at petascale levels. From http://news.stanford.edu/news/2012/may/engineering-hypersonic-flight-051512.html One reason computational uncertainty quantification is a relatively new science is that, until recently, the necessary computer resources simply didn’t exist. “Some of our latest calculations run on 163,000 processors simultaneously,” Moin said. “I think they’re some of the largest calculations ever undertaken.” Thanks to its close relationship with the Department of Energy, however, the Stanford PSAAP team enjoys access to the massive computer facilities at the Lawrence Livermore, Los Alamos and Sandia national laboratories, where their largest and most complex simulations can be run. It takes specialized knowledge to get computers of this scale to perform effectively, however. “And that’s not something scientists and engineers should be worrying about,” said Alonso, which is why the collaboration between departments is critical. “Mechanical engineers and those of us in aeronautics and astronautics understand the flow and combustion physics of scramjet engines and the predictive tools. We need the computer scientists to help us figure out how to run these tests on these large computers,” he said. That need will only increase over the next decade as supercomputers move toward the exascale – computers with a million or more processors able to execute a quintillion calculations in a single second. Modeling the Complexities of Hypersonic Flight via +Amy Shira Teitel!
May 18, 2012

THIS ROBOT MAKES ITS OWN CUSTOM TOOLS OUT…

This Robot Makes Its Own Custom Tools Out of Glue At this point, you've probably noticed the similarities between this process and 3D printing, which is much faster and provides a lot more detail. The reason this robot can't just 3D print a cup is that the thermoplastic materials don't provide any good ways of bonding objects to the robot itself, which would mean that the robot would have to have complex manipulators and deal with grasping, and the whole point (or part of the point) of the HMA is to make complicated things like that unnecessary. While the actual execution of this task was performed autonomously by the robot, the planning was not, since the robot doesn't yet have a perception process (or perception hardware, for that matter). This is something that the researchers will be working on in the future, and they fantasize about a robot that can adaptively extend its body how and when it deems fit. They also suggest that this technique could be used to create robots that can autonomously repair themselves, autonomously increase their own size and functionality, and even autonomously construct other robots out of movable HMA parts and integrated motors, all of which sounds like a surefire recipe for disaster if we've ever heard one. More from +IEEE Spectrum +Evan Ackerman here: http://spectrum.ieee.org/automaton/robotics/diy/this-robot-makes-its-own-custom-tools-out-of-glue Self-Reconfigurable Robot With Hot Melt Adhesives
May 18, 2012

RESHARED POST FROM INFORMS

Very interesting! From the Wiki: http://en.wikipedia.org/wiki/Braess%27s_paradox The paradox is stated as follows: “For each point of a road network, let there be given the number of cars starting from it, and the destination of the cars. Under these conditions one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favorable to him, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times.” The reason for this is that in a Nash equilibrium, drivers have no incentive to change their routes. If the system is not in a Nash equilibrium, selfish drivers must be able to improve their respective travel times by changing the routes they take. In the case of Braess’s paradox, drivers will continue to switch until they reach Nash equilibrium, despite the reduction in overall performance. INFORMS originally shared this post: New blog from Game Theory Strategies More roads can mean slower traffic Does building a big fast road between two towns make the traffic go faster. You would think so but it is not always the case. Imagine that you live in a place called Greenville and you want to get to …
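To make the arithmetic behind the paradox concrete, here is a minimal Python sketch using the standard textbook network (4,000 drivers, two 45-minute fixed roads, two congestion-sensitive roads whose travel time is T/100 minutes for T cars, and an optional zero-cost shortcut). The numbers are the usual illustrative ones, not anything taken from the INFORMS post or the linked blog.

```python
# Minimal sketch of Braess's paradox on the standard textbook network.
# Assumed illustrative numbers; not data from the post or the blog it links.

DRIVERS = 4000
FIXED = 45.0                      # wide road: 45 min regardless of traffic

def variable(cars):
    """Narrow road: travel time grows with the number of cars on it."""
    return cars / 100.0

# Without the shortcut, the Nash equilibrium is a 50/50 split between the
# routes Start-A-End and Start-B-End; both cost the same, so nobody switches.
half = DRIVERS / 2
time_without = variable(half) + FIXED                     # 20 + 45 = 65 min

# With a zero-cost shortcut A-B, each narrow road costs at most 40 min, which
# beats the 45-min wide road no matter what the other drivers do, so every
# driver takes Start-A-B-End.
time_with = variable(DRIVERS) + 0.0 + variable(DRIVERS)   # 40 + 0 + 40 = 80 min

# Sanity check that all-on-the-shortcut really is a Nash equilibrium: a lone
# deviator back to Start-A-End (or Start-B-End) would pay 40 + 45 min.
deviation = variable(DRIVERS) + FIXED                     # 85 min

print(f"Equilibrium without the shortcut: {time_without:.0f} min per driver")
print(f"Equilibrium with the shortcut:    {time_with:.0f} min per driver")
print(f"Cost of deviating from it:        {deviation:.0f} min, so no one switches back")
```

Adding the "free" road moves everyone from 65 to 80 minutes, yet no individual driver can do better by switching back, which is exactly the equilibrium logic described above.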
May 17, 2012

DIGITAL POLITICS

This slide show highlights key points from my essay on the Attention Economy. Most can be found in Part 11: Systems of Organization.
_______________
The Attention Economy
Part 0: Preamble
Part 1: Thinking about yourself in a complex system
Part 10: The Marble Network
Part 11: Systems of organization
Interlude: a response to questions
Starcraft 2 is Brutally Honest: Lessons for the Attention Economy
May 17, 2012

DIGITAL POLITICS MY MOST RECENT #ATTENTIONECONOMY…

Digital Politics My most recent #attentioneconomy essay is difficult, and several people have asked for a clear summary or introduction to motivate the time and effort required to slog through it. So I built a slideshow to present the argument. It isn't short and it isn't much easier than the essay, but I put a lot of effort into the presentation, so I hope it helps! If you appreciate the work, please participate! You can see the full presentation here: http://digitalinterface.blogspot.com/2012/05/digital-politics.html The essay on which this slideshow is based can be found here: http://digitalinterface.blogspot.com/2012/05/attention-economy-11-systems-of.html My blog has links to all my work on the attention economy, and links for further research. I'd love to hear any thoughts you have!
May 17, 2012

RESHARED POST FROM AMIRA NOTES

Amira notes originally shared this post: The Self Illusion: How the Brain Creates Identity “John Locke, the philosopher, who also argued that personal identity was really dependent on the autobiographical or episodic memories, and you are the sum of your memories, which, of course, is something that fractionates and fragments in various forms of dementia. (…) As we all know, memory is notoriously fallible. It’s not cast in stone. It’s not something that is stable. It’s constantly reshaping itself. So the fact that we have a multitude of unconscious processes which are generating this coherence of consciousness, which is the I experience, and the truth that our memories are very selective and ultimately corruptible, we tend to remember things which fit with our general characterization of what our self is. We tend to ignore all the information that is inconsistent. We have all these attribution biases. We have cognitive dissonance. The very thing psychology keeps telling us, that we have all these unconscious mechanisms that reframe information, to fit with a coherent story, then both the “I” and the “me”, to all intents and purposes, are generated narratives. The illusions I talk about often are this sense that there is an integrated individual, with a veridical notion of past. And there’s nothing at the center. We’re the product of the emergent property, I would argue, of the multitude of these processes that generate us. (…) The irrational superstitious behaviors : what I think religions do is they capitalize on a lot of inclinations that children have. Then I entered into a series of work, and my particular interest was this idea of essentialism and sacred objects and moral contamination. (…) If you put people through stressful situations or you overload it, you can see the reemergence of these kinds of […]
May 15, 2012

RESHARED POST FROM RAYMUND KHO K.D

Raymund Kho K.D. originally shared this post: #neuroscience #deception #lying #signal_dectection_theory Deception and deception detecting: an evolutionary advantage. A very informative article on the evolutionary aspects of deception and deception detection. Currently it is possible to detect deception in nearly all cases in real time. Further, I disagree with the observation that the reported chronometric cues to deception were replicated in the form of significantly longer response latencies. A more modern example relates to the case of the infamous confidence trickster Frank Abagnale Jr., who is now an FBI financial fraud consultant. Those who employ former "poachers" assume that people who are good at breaking the law are good at detecting when others break the law. This assumption is widespread, but at least in the case of deception, there is no scientific evidence to suggest that good liars are necessarily good lie detectors. Results indicate that the current paradigm is comparable to previous studies with regard to the participants' self-reported experience of guilt, anxiety, and cognitive load during the task, and overall lie detection accuracy. In addition, previously reported chronometric cues to deception were replicated in this study, with significantly longer response latencies when lying than when telling the truth. Moreover, as far as we are aware, this study is the first to provide evidence that the capacity to detect lies and the ability to deceive others are associated. This finding suggests the existence of a "deception-general" ability that may influence both "sides" of deceptive interactions. Open for discussion. "You can't kid a kidder": association between production and detection of deception in an interactive deception task Full article. "You can't kid a kidder": association between production and detection of deception in an interactive deception task Both the ability to deceive others, and the ability to detect deception, has long been proposed to confer an evolutionary advantage. […]
May 15, 2012

RESHARED POST FROM DEVELOPMENTAL PSYCHOLOGY…

Developmental Psychology News originally shared this post: ICIS 2012 Preconference Workshop on Developmental Robotics The workshop will provide a comprehensive introduction to the robot platforms and research methods of developmental robotics. In addition, invited speakers will describe their recent findings from work on language acquisition, social interaction, perceptual and cognitive development, and motor skill acquisition. Additional information is available at http://icdl-epirob.org/icisdevrob2012.html. Please remember when making travel arrangements that the workshop takes place the day before ICIS begins.
May 14, 2012

RESHARED POST FROM DERYA UNUTMAZ

Derya Unutmaz originally shared this post: A group of American researchers from MIT, Indiana University, and Tufts University, led by Erin Treacy Solovey, have developed Brainput — pronounced brain-put, not bra-input — a system that can detect when your brain is trying to multitask, and offload some of that workload to a computer. The idea of using computers to do our grunt work isn’t exactly new — without them, the internet wouldn’t exist, manufacturing would be a very different beast, and we’d all have to get a lot better at mental arithmetic. I would say that the development of cheap, general purpose computers over the last 50 years, and the freedoms they have granted us, is one of mankind’s most important advancements. Brainput is something else entirely though. Using functional near-infrared spectroscopy (fNIRS), which is basically a portable, poor man’s version of fMRI, Brainput measures the activity of your brain. This data is analyzed, and if Brainput detects that you’re multitasking, the software kicks in and helps you out. In the case of the Brainput research paper, Solovey and her team set up a maze with two remotely controlled robots. The operator, equipped with fNIRS headgear, has to navigate both robots through the maze simultaneously, constantly switching back and forth between them. When Brainput detects that the driver is multitasking, it tells the robots to use their own sensors to help with navigation. Overall, with Brainput turned on, operator performance improved — and yet they didn’t generally notice that the robots were partially autonomous. MIT’s Brainput boosts your brain power by offloading multitasking to a computer | ExtremeTech
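For intuition only, here is a minimal sketch of the kind of control loop described above: classify a short window of fNIRS readings as "multitasking" or not, and toggle the robots' reliance on their own sensors accordingly. The threshold, the crude classifier, and all names are illustrative assumptions, not the actual Brainput system.

```python
# Illustrative Brainput-style control loop. All names, thresholds, and the
# toy classifier are assumptions for this sketch, not the real system.
from statistics import mean

MULTITASK_THRESHOLD = 0.6   # hypothetical normalized fNIRS activation level

def is_multitasking(fnirs_window):
    """Crude stand-in classifier: flag high average activation in the window."""
    return mean(fnirs_window) > MULTITASK_THRESHOLD

def control_step(fnirs_window, robots):
    """When the operator looks overloaded, let the robots use their own sensors."""
    autonomous = is_multitasking(fnirs_window)
    for robot in robots:
        robot["use_own_sensors"] = autonomous
    return autonomous

# Toy usage: two simulated robots and one window of made-up readings.
robots = [{"name": "robot_a", "use_own_sensors": False},
          {"name": "robot_b", "use_own_sensors": False}]
window = [0.55, 0.72, 0.68, 0.61]
print("Operator multitasking:", control_step(window, robots))
print(robots)
```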
May 26, 2012

THE NETWORKED PARADIGM

Over the last few weeks we’ve seen an explosion of blog posts, videos, and journals publishing on this major developing paradigm shift in social organization. Of course, it is 2012 and networks are hardly new. Facebook’s IPO already seems like old news; no one doubts the importance of networks. We’ve been living on them and in them for decades. What’s changed is our understanding of how #networks behave. Our mathematics and computer science has made tremendous progress over the last few years. Our ability to visualize #bigdata in instructive and useful ways it in a golden age. Until now, the Internet has been mostly flopping along blindly, confident that we were doing good work but not entirely understanding how we were doing it. But over the last month or so our #science has grown strong. When our science is strong, we can be deliberate about how we use our tools. +Bruno Gonçalves and his colleagues gave a vivid but somehow unsurprising demonstration of this power just this week. They predicted the winner of +American Idol by doing nothing more elaborate than counting tweets. This was almost a trivial exercise, but the authors are explicit that this is simply a demonstration of the potential of these techniques: On a more general basis, our results highlight that *the aggregate preferences and behaviors of large numbers of people can nowadays be observed in real time, or even forecasted, through open source data freely available in the web*. The task of keeping them private, even for a short time, has therefore become extremely hard (if not impossible), and this trend is likely to become more and more evident in the future years. Although the success of the prediction isn’t itself surprising, the consequences of the result are not only surprising but fundamentally revolutionary for […]
May 26, 2012

THE NETWORKED PARADIGM VOLUME ONE OVER THE…

The Networked Paradigm volume one Over the last few weeks we've seen an explosion of blog posts, videos, and journals publishing on this major developing paradigm shift in social organization. Of course, it is 2012 and networks are hardly new. Facebook's IPO already seems like old news; no one doubts the importance of networks. We've been living on them and in them for decades. What's changed is our understanding of how #networks behave. Our mathematics and computer science have made tremendous progress over the last few years. Our ability to visualize #bigdata in instructive and useful ways is in a golden age. Until now, the Internet has been mostly flopping along blindly, confident that we were doing good work but not entirely understanding how we were doing it. But over the last month or so our #science has grown strong. When our science is strong, we can be deliberate about how we use our tools. +Bruno Gonçalves and his colleagues gave a vivid but somehow unsurprising demonstration of this power just this week. They predicted the winner of +American Idol by doing nothing more elaborate than counting tweets. https://plus.google.com/u/0/117828903900236363024/posts/aSUDwAggmgz This was almost a trivial exercise, but the authors are explicit that this is simply a demonstration of the potential of these techniques: "On a more general basis, our results highlight that the aggregate preferences and behaviors of large numbers of people can nowadays be observed in real time, or even forecasted, through open source data freely available in the web. The task of keeping them private, even for a short time, has therefore become extremely hard (if not impossible), and this trend is likely to become more and more evident in the future years." Although the success of the prediction isn't itself surprising, the consequences of the result are not only surprising but fundamentally revolutionary for […]
May 25, 2012

RESHARED POST FROM COGSAI

Spreading good memes is good. CogSai originally shared this post: Which are deadlier: sharks or horses? Find out now on the debut of +CogSai! Easy share link: http://bit.ly/cogsai1 Cognitive science is a combo of psych, AI, philosophy, neuroscience, linguistics, anthro, sociology, and lots more. CogSai includes short illustrated explanations, live interviews with researchers, and group discussions. Coming soon: LIVE interview with a scientist researching how analytical and heuristic thinking compete in the brain. Subscribe & follow to participate live! Go to cogsai.com/q to contribute your questions on this episode & suggestions for future episodes. See you there. 🙂 – +Sai
May 25, 2012

RESHARED POST FROM GIDEON ROSENBLATT

The hybrid ideal Today it is clear that the independence of social value and commercial revenue creation is a myth. In reality, the vectors of social value and commercial revenue creation can reinforce and undermine each other. The social consequences of the recent financial crisis demonstrated with great clarity the danger of "negative externalities"—social costs resulting from corporate profit-seeking activities. But in some cases, "positive externalities" may also exist. It is this possibility that integrated hybrid models seek to exploit. When we talk to entrepreneurs and students about hybrid organizations, a common theme that emerges is what we call the "hybrid ideal." This hypothetical organization is fully integrated—everything it does produces both social value and commercial revenue. This vision has at least two powerful features. In the hybrid ideal, managers do not face a choice between mission and profit, because these aims are integrated in the same strategy. More important, the integration of social and commercial value creation enables a virtuous cycle of profit and reinvestment in the social mission that builds large-scale solutions to social problems. http://www.ssireview.org/articles/entry/in_search_of_the_hybrid_ideal via +Gregory Esau _______________ It is wonderful to see so many people waking up simultaneously to the same basic unified frameworks. People are catching on to it from so many diverse perspectives that it is very humbling. The overlap and diversity of perspectives is interesting for many reasons. "The Hybrid Ideal", for instance, is a very clearly transhumanist value, but I imagine that the number of people involved in producing or sharing this content who explicitly recognize it as such is vanishingly small. I personally get this content through the small-but-growing network of businessmen and entrepreneurs whose interesting organizational strategies have been filling my stream. This amuses me somewhat, because I'm an anarchist looking to seize the means of production, yet somehow we've […]
May 25, 2012

RESHARED POST FROM MATT UEBEL

Matt Uebel originally shared this post: “How is it possible to have any informed democratic debate over a policy about which the U.S. media relentlessly propagandizes this way? If drone strikes kill nobody other than “militants,” then very few people will even think about opposing them (and that’s independent of the fact that the word “militant” is a wildly ambiguous term — militant about what? — though it is clearly designed (when combined with “Pakistan”) to evoke images of those who attacked the World Trade Center). Debate-suppression is not just the effect but the intent of this propaganda: like all propaganda, it is designed to deceive the citizenry in order to compel acquiescence to government conduct.” Deliberate media propaganda The media now knows that “militant” is a term of official propaganda, yet still use it for America’s drone victims
May 25, 2012

RESHARED POST FROM PBS NEWSHOUR

via +James Wood. Pasting his comment below: "An effect of the constant advances in technology is a complete restructuring of the way we think about the division of labor and citizens' roles in a future society. As our lives become ever more digitalized, we realize many real concerns–namely the fear that a handful of extremely wealthy and extremely powerful individuals will take over the world, leaving the rest of us to fight over the few "real" jobs remaining. However, in an #attentioneconomy, such disparity can not exist. First of all, the Internet in future forms can reach the point that it itself functions as an economy–one of attention and attenders. Imagine that machines have evolved to the point at which manual human labor is truly obsolete; they become in a way our "digital" infrastructure. Then whatever frontier remains unconquered will become the platform for our human interaction (the Internet). This network will still be just as competitive as any free market system to date, only it will be wholly self-organized, meaning that the behavior of the network as a whole will be more or less equally influenced by each individual node. The main obstruction to this practically in the future is that in this vast digital infrastructure, you might ask, who controls that? Who owns it? Who makes sure it is functioning properly? If a few people do own it, then wouldn't the very phenomenon we are trying to avoid still happen in an attention economy (other-organized network)? The answer is that no one owns the infrastructure (or anything). The infrastructure will become advanced enough that it becomes essentially self-improving, self-organizing, self-replicating, etc. The technology will become intelligent, or I daresay, alive (oooh). It will become integrated into our very consciousness–it will become us, or rather extensions of us. […]
May 24, 2012

RESHARED POST FROM XAVIER MARQUEZ

Cognitive Democracy "This points, we think, to a very clear constructive agenda. To exaggerate a little, it is to see how far the Internet enables modern democracies to make as much use of their citizens' minds as did Ober's Athens. We want to learn from existing online ventures in collective cognition and decision-making. We want to treat these ventures as, more or less, spontaneous experiments, and compare the successes and failures (including partial successes and failures) to learn about institutional mechanisms which work well at harnessing the cognitive diversity of large numbers of people who do not know each other well (or at all), and meet under conditions of relative equality, not hierarchy. If this succeeds, what we learn from this will provide the basis for experimenting with the re-design of democratic institutions themselves." ______ This is absolutely wonderful. Via +Michael Chui Xavier Marquez originally shared this post: Cosma Shalizi and +Henry Farrell make an epistemic argument for democracy (vis-à-vis markets and hierarchies). I suspect at this level of generality the question is a bit too abstract – the more interesting questions remain below this level, concerning the scope of each mechanism and the mediation of conflicts at the edges between markets, participatory discussion fora, and hierarchies. Nevertheless, a very interesting piece. Cognitive Democracy But the economical advantages of commerce are surpassed in importance by those of its effects which are intellectual and moral. It is hardly possible to overrate the value, in the present low state of…
May 24, 2012

+DILAN TORY AND +STEVEN WAGNER RAISE SOME…

+dilan tory and +Steven Wagner raise some interesting questions regarding the complaint in the linked article. Any thoughts from the stream? Quoting dilan's question below. "Dan, I thought you might be interested in the legal/conceptual issues here. Steve wonders what you think about the following: What counts as a database/what does intent mean in this context? As for myself, I'm just wondering how often someone in France has to search for "Jew" after a name to cause google's algorithmic whizgizmos to suggest it as a prompt. And I want to know whether French people do this more than, say, Germans or Americans-is French pop culture becoming obsessively anti-semitic? I can't help but find this menacing–I'm in the middle of reading the postwar correspondence between Jaspers and Hannah Arendt…" _____ One interesting thing to consider is that Google already shapes your search results according to personalized metrics. In other words, if my communities are heavily involved and influential in, say, antisemitic circles, that would tend to raise the number of antisemitic results I'd get in a search. Presumably, this is what I'd want, given my online communities. Consequently, laws like this seem like ways of restricting which communities one might engage with. This might seem like an acceptable result when dealing with so-called "hate-based" communities, but of course when state institutions have this kind of authority they tend to overstep their bounds. More generally, I think a lot of issues with online censorship are traditionally understood in terms of "free speech", and so issues like this become difficult because it isn't exactly clear what speech is getting suppressed when you silence autocomplete results. I'd personally suggest that +Google (as an artificially intelligent computing agent, not the corporation) is having its speech suppressed, but since I don't think Google yet has […]
May 24, 2012

THE LOGIC OF DIVERSITY THE COMPLEXITY OF…

The Logic of Diversity The Complexity of a Controversial Concept By +Cosma Shalizi … The division of labor is, in part, an adaptation for handling complex problems, but only those which are complex in the straightforward sense of being very large. It relies on finding a way of decomposing the large problem into nearly-separate parts, so that it can be attacked through a strategy of divide-and-conquer, with different people specializing in conquering the various divisions. (This topic, and its relation to hierarchical structure, was explored by Herbert Simon in his classic Sciences of the Artificial.) Diversity, in the sense Page is talking about, is another way of adapting to complexity, and specifically to complex problems which are not decomposable into neat hierarchies. Put strategically, the idea is like this: Agents have only a limited capacity to represent, learn about, and predict their world, and so solve their problems. When the problem or environment is too complex for any one agent, then you should have many weak agents make partial, incomplete, overlapping representations. You’ll be better off by doing this, and then learning a way to combine them, than by trying to find a single, globally accurate representation, such as a single super-genius agent which can handle the problem all by itself. Collectively, the combined representations of the group of agents are equivalent to a single high-capacity representation. But nobody, individually, has anything like the complete picture; in fact, everybody’s individual picture is pretty much wrong, or at best drastically incomplete. Powerful, high-level capacities which emerge from the interplay of low-level components are a common feature of complex systems, but here as elsewhere, just having the components and letting them interact is not enough. The organization of the interactions is crucial. In the brain, for instance, this is the difference between […]
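As a toy illustration of the combination logic only (not Page's model, and capturing just the statistical side of the argument): agents that are individually right 60% of the time become collectively right well over 90% of the time under a simple majority vote, provided their errors are independent. The numbers below are assumptions chosen for the demonstration.

```python
# Toy simulation: many weak, independent agents combined by majority vote.
# Illustrative numbers only; not Page's or Shalizi's actual models.
import random

random.seed(42)

def weak_guess(truth, accuracy=0.6):
    """One agent's guess at a binary truth, correct with probability `accuracy`."""
    return truth if random.random() < accuracy else 1 - truth

def majority(votes):
    return int(sum(votes) > len(votes) / 2)

TRIALS, N_AGENTS = 10_000, 51
single_correct = crowd_correct = 0
for _ in range(TRIALS):
    truth = random.randint(0, 1)
    votes = [weak_guess(truth) for _ in range(N_AGENTS)]
    single_correct += votes[0] == truth        # any one agent on its own
    crowd_correct += majority(votes) == truth  # the simplest combination rule

print(f"Single weak agent:     {single_correct / TRIALS:.1%} correct")
print(f"Majority of {N_AGENTS} agents: {crowd_correct / TRIALS:.1%} correct")
```

The catch is the passage's closing point: the gain depends on the agents' errors not all pointing the same way, which is why the organization of the interactions, and not just the presence of many components, is crucial.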
May 24, 2012

RESHARED POST FROM BRUNO GONÇALVES

+Bruno Gonçalves and his colleagues have put together attention-based models looking at Twitter activity in the run-up to the #americanidol decisions. They were able to predict the winner with a pretty high degree of confidence using an extremely simplified model of the Twitter activity. They posted the model to the arXiv a few days ago, and then updated with results from the big vote yesterday. Their updated paper is linked below. This is both a validation and benchmarking case for quantifying and modeling the #attentioneconomy . They conclude: "We have shown that the open source data available on the web can be used to make educated guesses on the outcome of societal events. Specifically, we have shown that extremely simple measures quantifying the popularity of the American Idol participants on Twitter strongly correlate with their performances in terms of votes. A post-event analysis shows that the less voted competitors can be identified with reasonable accuracy (Table II) looking at the Twitter data collected during the airing of the show and in the immediately following hours." Bruno Gonçalves originally shared this post: Beating the news using Social Media: the case study of American Idol. (arXiv:1205.4467v2 [physics.soc-ph] UPDATED) We present a contribution to the debate on the predictability of social events using big data analytics. We focus on the elimination of contestants in the American Idol TV shows as an example of a well defined electoral phenomenon that each week draws millions of votes in the USA. We provide evidence that Twitter activity during the time span defined by the TV show airing and the voting period following it, correlates with the contestants ranking and allows the anticipation of the voting outc…
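To show how little machinery "counting tweets" involves, here is a hedged sketch of the core measurement: tally mentions of each contestant in a batch of tweets and rank by share of attention. The contestant names and tweets are placeholders, and this is not the authors' actual pipeline.

```python
# Minimal sketch of the "count mentions, rank by attention share" measure.
# Contestant names and tweets are placeholders, not data from the paper.
from collections import Counter

def mention_counts(tweets, names):
    """Count how many tweets mention each name (case-insensitive substring match)."""
    counts = Counter({name: 0 for name in names})
    for tweet in tweets:
        text = tweet.lower()
        for name in names:
            if name.lower() in text:
                counts[name] += 1
    return counts

contestants = ["contestant_a", "contestant_b"]      # hypothetical finalists
tweets = [                                          # hypothetical collected tweets
    "voting for contestant_a all night #idol",
    "contestant_b was incredible tonight",
    "cannot stop watching contestant_a #idol",
]
counts = mention_counts(tweets, contestants)
total = sum(counts.values()) or 1
for name, count in counts.most_common():
    print(f"{name}: {count} mentions ({count / total:.0%} of attention)")
```

The point of the conclusion quoted above is precisely that a measure this crude, applied to freely available data at scale, already tracks the vote.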