May 19, 2012

RESHARED POST FROM JORDAN PEACOCK

The charts in the link are beautiful. http://io9.com/5911520/a-chart-that-reveals-how-science-fiction-futures-changed-over-time

Jordan Peacock originally shared this post: A Chart that Reveals How Science Fiction Futures Changed Over Time (io9)

In the 1900s and the 1980s, there were huge spikes in near-future science fiction. What do these eras have in common? Both were times of rapid technological change. In the 1900s you begin to see the widespread use of telephones, cameras, automobiles (the Model T came out in 1908), motion pictures, and home electricity. In the 1980s, the personal computer transformed people’s lives.

In general, the future got closer at the end of the twentieth century. You can see a gradual trend in this chart where, after the 1940s, near-future SF grows in popularity. Again, this might reflect rapid technological change and the fact that SF entered mainstream popular culture.

The future is getting farther away from us right now. One of the only far-future narratives of the 1990s was Futurama. Then suddenly, in the 2000s, we saw a spike in far-future stories, many of them about posthuman, postsingular futures. It’s possible that during periods of extreme uncertainty about the future, as the 00s were in the wake of massive economic upheavals and 9/11, creators and audiences turn their eyes to the far future as a balm.
May 19, 2012

RESHARED POST FROM JON LAWHEAD

Jon Lawhead originally shared this post:

This is Yaneer Bar-Yam’s “A Mathematical Theory of Strong Emergence Using Multi-Scale Variety.” It, along with the other paper of his I just posted, is going to turn out to be one of the most significant papers of the 21st century. I would bet money on it. Integrating the insight in these two papers into contemporary philosophy of science (and expanding on them) is one of the central pillars of my overall professional project.

#complexitytheory is the next big scientific paradigm shift. All the pieces are out there now (these two papers are two of them); we just need to put them all together into a unified, coherent narrative. The first person/people to do that will go down in history as being the Darwin of the 21st century. I’ll race you.

#science #emergence #complexsystems #selforganization

http://www.necsi.edu/research/multiscale/MultiscaleEmergence.pdf
May 19, 2012

RESHARED POST FROM KEVIN CLIFT

+Christine Paluch

Kevin Clift originally shared this post: Subway Maps Converge Mathematically

There are mathematical similarities between subway/underground systems that have been allowed to grow in response to urban demand, even though they may not have been planned to be similar. Understand those principles, and one might “make urbanism a quantitative science, and understand with data and numbers the construction of a city,” said statistical physicist Marc Barthelemy of France’s National Center for Scientific Research.

More here: http://www.wired.com/wiredscience/2012/05/subway-convergence/

Paper: http://rsif.royalsocietypublishing.org/content/early/2012/05/15/rsif.2012.0259

Image caption: Sample of subway network structures from (clockwise, top left) Shanghai, Madrid, Moscow, Tokyo, Seoul and Barcelona. Image: Roth et al./JRSI
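The paper treats each subway system as a graph and compares summary statistics of their topology. As a rough illustration of that kind of comparison (not the actual metrics or data from Roth et al.), here is a minimal Python sketch using networkx on two invented toy networks:

```python
# Toy sketch: compare subway-like networks as graphs.
# The edge lists below are invented placeholders, not real subway data.
import networkx as nx

def summarize(name, edges):
    g = nx.Graph(edges)
    degrees = [d for _, d in g.degree()]
    print(f"{name}: {g.number_of_nodes()} stations, "
          f"{g.number_of_edges()} links, "
          f"mean degree {sum(degrees) / len(degrees):.2f}, "
          f"branch endpoints {sum(1 for d in degrees if d == 1)}")

# Two invented mini-networks standing in for real subway topologies.
summarize("toy_network_A", [(0, 1), (1, 2), (2, 3), (3, 0), (2, 4), (4, 5)])
summarize("toy_network_B", [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5), (5, 6)])
```

The convergence claim is, roughly, that summary statistics like these end up looking similar across cities even though the systems were never coordinated.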
May 18, 2012

#COMPLEXITY SCIENTISTS ARE ALREADY WATERING…

#complexity scientists are already watering at the mouth for #exascale computing. This is a fabulous demonstration of what they can already do at petascale levels.

From http://news.stanford.edu/news/2012/may/engineering-hypersonic-flight-051512.html

One reason computational uncertainty quantification is a relatively new science is that, until recently, the necessary computer resources simply didn’t exist. “Some of our latest calculations run on 163,000 processors simultaneously,” Moin said. “I think they’re some of the largest calculations ever undertaken.” Thanks to its close relationship with the Department of Energy, however, the Stanford PSAAP team enjoys access to the massive computer facilities at the Lawrence Livermore, Los Alamos and Sandia national laboratories, where their largest and most complex simulations can be run.

It takes specialized knowledge to get computers of this scale to perform effectively, however. “And that’s not something scientists and engineers should be worrying about,” said Alonso, which is why the collaboration between departments is critical. “Mechanical engineers and those of us in aeronautics and astronautics understand the flow and combustion physics of scramjet engines and the predictive tools. We need the computer scientists to help us figure out how to run these tests on these large computers,” he said.

That need will only increase over the next decade as supercomputers move toward the exascale: computers with a million or more processors able to execute a quintillion calculations in a single second.

Modeling the Complexities of Hypersonic Flight

via +Amy Shira Teitel!
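The article doesn’t spell out what “computational uncertainty quantification” looks like in code, so here is a minimal, hypothetical sketch of the core idea: sample the uncertain inputs, run the model once per sample, and report the spread of the outputs. The model and distributions below are invented stand-ins; the real PSAAP calculations run full flow and combustion solvers for each sample, which is why they need hundreds of thousands of processors.

```python
# Minimal Monte Carlo uncertainty-quantification sketch.
# The "model" here is a cheap invented surrogate; real UQ runs an expensive
# simulation (e.g. a scramjet flow/combustion solver) for every sample.
import random
import statistics

def model(inlet_temperature, fuel_ratio):
    # Hypothetical surrogate mapping uncertain inputs to a quantity of interest.
    return 0.8 * inlet_temperature + 120.0 * fuel_ratio

samples = []
for _ in range(10_000):
    temp = random.gauss(1200.0, 50.0)   # uncertain inlet temperature [K]
    ratio = random.uniform(0.02, 0.04)  # uncertain fuel/air ratio
    samples.append(model(temp, ratio))

print("mean output:", statistics.mean(samples))
print("std dev    :", statistics.stdev(samples))
```

Each sample is independent of the others, which is also why this kind of workload parallelizes so naturally across very large machines.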
May 18, 2012

THIS ROBOT MAKES ITS OWN CUSTOM TOOLS OUT…

This Robot Makes Its Own Custom Tools Out of Glue

At this point, you’ve probably noticed the similarities between this process and 3D printing, which is much faster and provides a lot more detail. The reason this robot can’t just 3D print a cup is that the thermoplastic materials don’t provide any good ways of bonding objects to the robot itself, which would mean that the robot would need complex manipulators and have to deal with grasping, and the whole point (or part of the point) of the HMA is to make complicated things like that unnecessary.

While the actual execution of this task was performed autonomously by the robot, the planning was not, since the robot doesn’t yet have a perception process (or perception hardware, for that matter). This is something that the researchers will be working on in the future, and they fantasize about a robot that can adaptively extend its body how and when it deems fit. They also suggest that this technique could be used to create robots that can autonomously repair themselves, autonomously increase their own size and functionality, and even autonomously construct other robots out of movable HMA parts and integrated motors, all of which sounds like a surefire recipe for disaster if we’ve ever heard one.

More from +IEEE Spectrum +Evan Ackerman here: http://spectrum.ieee.org/automaton/robotics/diy/this-robot-makes-its-own-custom-tools-out-of-glue

Self-Reconfigurable Robot With Hot Melt Adhesives
May 18, 2012

RESHARED POST FROM INFORMS

Very interesting! From the Wiki: http://en.wikipedia.org/wiki/Braess%27s_paradox

The paradox is stated as follows: “For each point of a road network, let there be given the number of cars starting from it, and the destination of the cars. Under these conditions one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favorable to him, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times.”

The reason for this is that in a Nash equilibrium, drivers have no incentive to change their routes. If the system is not in a Nash equilibrium, selfish drivers must be able to improve their respective travel times by changing the routes they take. In the case of Braess’s paradox, drivers will continue to switch until they reach Nash equilibrium, despite the reduction in overall performance.

INFORMS originally shared this post: New blog from Game Theory Strategies

More roads can mean slower traffic: Does building a big fast road between two towns make the traffic go faster? You would think so, but it is not always the case. Imagine that you live in a place called Greenville and you want to get to …
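To make the Nash-equilibrium argument concrete, here is a small self-contained sketch using the standard textbook version of the paradox (the numbers are the usual illustrative ones, not taken from the shared blog post): 4000 drivers travel from Start to End, the two congestible segments each take T/100 minutes when T drivers use them, and the two fixed segments take 45 minutes.

```python
# Braess's paradox with the standard textbook network:
#   Start -> A : T/100 minutes   A -> End : 45 minutes
#   Start -> B : 45 minutes      B -> End : T/100 minutes
# plus an optional zero-cost shortcut A -> B.
DRIVERS = 4000

def equilibrium_without_shortcut():
    # By symmetry, drivers split evenly between the two routes.
    per_route = DRIVERS / 2
    return per_route / 100 + 45            # 20 + 45 = 65 minutes

def equilibrium_with_shortcut():
    # If everyone takes Start->A->B->End, it costs 40 + 0 + 40 = 80 minutes,
    # while deviating to a route with a 45-minute segment costs about 45 + 40 = 85,
    # so no selfish driver has an incentive to switch: this is the Nash equilibrium.
    return DRIVERS / 100 + 0 + DRIVERS / 100

print("Equilibrium travel time without the new road:", equilibrium_without_shortcut())
print("Equilibrium travel time with the new road:   ", equilibrium_with_shortcut())
```

Adding the “free” shortcut moves the equilibrium travel time from 65 minutes to 80 minutes for every driver, which is exactly the redistribution Braess describes.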
May 17, 2012

DIGITAL POLITICS

This slide show highlights key points from my essay on the Attention Economy. Most can be found in Part 11: Systems of Organization.

_______________

The Attention Economy
Part 0: Preamble
Part 1: Thinking about yourself in a complex system
Part 10: The Marble Network
Part 11: Systems of organization
Interlude: a response to questions
Starcraft 2 is Brutally Honest: Lessons for the Attention Economy
May 17, 2012

DIGITAL POLITICS MY MOST RECENT #ATTENTIONECONOMY…

Digital Politics

My most recent #attentioneconomy essay is difficult, and several people have asked for a clear summary or introduction to motivate the time and effort required to slog through it. So I built a slideshow to present the argument. It isn’t short and it isn’t much easier than the essay, but I put a lot of effort into the presentation, so I hope it helps! If you appreciate the work, please participate!

You can see the full presentation here: http://digitalinterface.blogspot.com/2012/05/digital-politics.html

The essay on which this slideshow is based can be found here: http://digitalinterface.blogspot.com/2012/05/attention-economy-11-systems-of.html

My blog has links to all my work on the attention economy, and links for further research. I’d love to hear any thoughts you have!
May 17, 2012

RESHARED POST FROM AMIRA NOTES

Amira notes originally shared this post: The Self Illusion: How the Brain Creates Identity

“John Locke, the philosopher, who also argued that personal identity was really dependent on the autobiographical or episodic memories, and you are the sum of your memories, which, of course, is something that fractionates and fragments in various forms of dementia. (…)

As we all know, memory is notoriously fallible. It’s not cast in stone. It’s not something that is stable. It’s constantly reshaping itself. So the fact that we have a multitude of unconscious processes which are generating this coherence of consciousness, which is the I experience, and the truth that our memories are very selective and ultimately corruptible, we tend to remember things which fit with our general characterization of what our self is. We tend to ignore all the information that is inconsistent. We have all these attribution biases. We have cognitive dissonance. The very thing psychology keeps telling us, that we have all these unconscious mechanisms that reframe information, to fit with a coherent story, then both the “I” and the “me”, to all intents and purposes, are generated narratives.

The illusions I talk about often are this sense that there is an integrated individual, with a veridical notion of past. And there’s nothing at the center. We’re the product of the emergent property, I would argue, of the multitude of these processes that generate us. (…)

The irrational superstitious behaviors: what I think religions do is they capitalize on a lot of inclinations that children have. Then I entered into a series of work, and my particular interest was this idea of essentialism and sacred objects and moral contamination. (…)

If you put people through stressful situations or you overload it, you can see the reemergence of these kinds of […]
May 15, 2012

RESHARED POST FROM RAYMUND KHO K.D

Raymund Kho K.D. originally shared this post: #neuroscience #deception #lying #signal_detection_theory

Deception and deception detection: an evolutionary advantage. A very informative article on the evolutionary aspects of deception and deception detection. Currently it is possible to detect deception in nearly all cases in real time. Further, I disagree with the observation that the previously reported chronometric cues were replicated, i.e. the significantly longer response latencies when lying.

A more modern example relates to the case of the infamous confidence trickster Frank Abagnale Jr., who is now an FBI financial fraud consultant. Those who employ former “poachers” assume that people who are good at breaking the law are good at detecting when others break the law. This assumption is widespread, but at least in the case of deception, there is no scientific evidence to suggest that good liars are necessarily good lie detectors.

Results indicate that the current paradigm is comparable to previous studies with regards to the participants’ self-reported experience of guilt, anxiety, and cognitive load during the task, and overall lie detection accuracy. In addition, previously reported chronometric cues to deception were replicated in this study, with significantly longer response latencies when lying than when telling the truth. Moreover, as far as we are aware, this study is the first to provide evidence that the capacity to detect lies and the ability to deceive others are associated. This finding suggests the existence of a “deception-general” ability that may influence both “sides” of deceptive interactions.

Open for discussion.

“You can’t kid a kidder”: association between production and detection of deception in an interactive deception task (full article): Both the ability to deceive others, and the ability to detect deception, has long been proposed to confer an evolutionary advantage. […]
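The post tags #signal_detection_theory without unpacking it, so here is a minimal, hedged sketch of how detection accuracy in a task like this is often summarized in signal-detection terms: sensitivity (d') and response criterion computed from hit and false-alarm rates. The rates below are invented for illustration and are not taken from the paper.

```python
# Signal-detection summary of a lie-detection task.
# Hit = correctly judging a lie to be a lie;
# false alarm = judging a truthful statement to be a lie.
# The example rates are made up, not results from the paper.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

d, c = dprime_and_criterion(hit_rate=0.70, false_alarm_rate=0.45)
print(f"d' = {d:.2f}, criterion = {c:.2f}")
```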
May 15, 2012

RESHARED POST FROM DEVELOPMENTAL PSYCHOLOGY…

Developmental Psychology News originally shared this post: ICIS 2012 Preconference Workshop on Developmental Robotics The workshop will provide a comprehensive introduction to the robot platforms and research methods of developmental robotics. In addition, invited speakers will describe their recent findings from work on language acquisition, social interaction, perceptual and cognitive development, and motor skill acquisition. Additional information is available at http://icdl-epirob.org/icisdevrob2012.html. Please remember when making travel arrangements that the workshop takes place the day before ICIS begins.
May 14, 2012

RESHARED POST FROM DERYA UNUTMAZ

Derya Unutmaz originally shared this post:

A group of American researchers from MIT, Indiana University, and Tufts University, led by Erin Treacy Solovey, have developed Brainput (pronounced brain-put, not bra-input), a system that can detect when your brain is trying to multitask, and offload some of that workload to a computer.

The idea of using computers to do our grunt work isn’t exactly new: without them, the internet wouldn’t exist, manufacturing would be a very different beast, and we’d all have to get a lot better at mental arithmetic. I would say that the development of cheap, general purpose computers over the last 50 years, and the freedoms they have granted us, is one of mankind’s most important advancements. Brainput is something else entirely, though.

Using functional near-infrared spectroscopy (fNIRS), which is basically a portable, poor man’s version of fMRI, Brainput measures the activity of your brain. This data is analyzed, and if Brainput detects that you’re multitasking, the software kicks in and helps you out. In the case of the Brainput research paper, Solovey and her team set up a maze with two remotely controlled robots. The operator, equipped with fNIRS headgear, has to navigate both robots through the maze simultaneously, constantly switching back and forth between them. When Brainput detects that the driver is multitasking, it tells the robots to use their own sensors to help with navigation. Overall, with Brainput turned on, operator performance improved, and yet the operators didn’t generally notice that the robots were partially autonomous.

MIT’s Brainput boosts your brain power by offloading multitasking to a computer | ExtremeTech
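As a purely schematic sketch of the loop the article describes (read a brain-workload signal, decide whether the operator is multitasking, and if so hand more autonomy to the robots), something like the following captures the idea. Every name and the threshold here are placeholders; the real Brainput system classifies fNIRS data with a trained model rather than thresholding a random number.

```python
# Schematic Brainput-style loop: placeholder signal source and threshold,
# standing in for a trained classifier over fNIRS measurements.
import random
import time

MULTITASKING_THRESHOLD = 0.7  # invented cutoff for illustration

def read_workload_estimate() -> float:
    # Placeholder for a classifier output in [0, 1] derived from fNIRS data.
    return random.random()

def set_robot_autonomy(enabled: bool) -> None:
    print("robot autonomy:", "ON (robots navigate with their own sensors)" if enabled else "OFF")

for _ in range(5):
    set_robot_autonomy(read_workload_estimate() > MULTITASKING_THRESHOLD)
    time.sleep(0.1)
```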