April 14, 2012

RESHARED POST FROM ALEX SCHLEBER

Smartphones—extraordinarily powerful, mobile, data-network-connected computers equipped with GPS, accelerometers and all sorts of other gee-whizzery—have become so ubiquitous so fast because they’re so remarkable (and because falling tech prices have quickly made them affordable). But because they’ve become so ubiquitous so fast, I think we underappreciate the revolutionary potential of a world in which powerful mini-computers are everywhere, and where every person has an unfathomable amount of information available all the time. Innovations like Twitter are dazzling and useful; they may well end up the technological equivalent of a plasma globe, a shiny, technological trinket that only hinted at the social and economic potential of the concepts upon which it was based. The potential of the smartphone age is deceptive. We look around and see more people talking on phones in more places and playing Draw Something when they’re bored. This is just the beginning. In time, business models, infrastructure, legal environments, and social norms will evolve, and the world will become a very different and dramatically more productive place.

Alex Schleber originally shared this post: Great #stats on technology adoption cycles from @asymco via +The Economist (which is noteworthy in and of itself; he is obviously becoming more widely noted as a mobile analyst). The rampant sub-10-year adoption of the smartphone by 50% of the market is also the likely reason why we are NOT in an unjustified mobile/tech bubble, and why paying $1B for Instagram was not a mistake. Related -> plus.google.com/112964117318166648677/posts/czadRruFzf6

The revolution to come EARLIER this week, Matt Yglesias discussed an interesting analysis of penetration rates for various modern technologies, built around the piece of data that smartphones have now achieved 50% penetrati…
April 13, 2012

RESHARED POST FROM SAKIS KOUKOUVIS

Direct link: http://news.softpedia.com/news/Mathematical-Model-of-the-Brain-Developed-264350.shtml

h/t +Kimberly Chapman

Sakis Koukouvis originally shared this post: Mathematical Model of the Brain Developed

Taking complex systems such as the Internet and social networks as examples, a group of experts at the University of Cambridge was able to create a mathematical model of the brain. Though simple, their tool provides a surprisingly complete statistical account of how different regions interact.

Mathematical Model of the Brain Developed | Science News
Taking complex systems such as the Internet and social networks as examples, a group of experts at the University of Cambridge was able to create a mathematical model of the brain. Though simple, thei…
April 13, 2012

RESHARED POST FROM JOHN VERDON

The high levels of intelligence seen in humans, other primates, certain cetaceans and birds remain a major puzzle for evolutionary biologists, anthropologists and psychologists. It has long been held that social interactions provide the selection pressures necessary for the evolution of advanced cognitive abilities (the ‘social intelligence hypothesis’), and in recent years decision-making in the context of cooperative social interactions has been conjectured to be of particular importance. Here we use an artificial neural network model to show that selection for efficient decision-making in cooperative dilemmas can give rise to selection pressures for greater cognitive abilities, and that intelligent strategies can themselves select for greater intelligence, leading to a Machiavellian arms race. Our results provide mechanistic support for the social intelligence hypothesis, highlight the potential importance of cooperative behaviour in the evolution of intelligence and may help us to explain the distribution of cooperation with intelligence across taxa.

John Verdon originally shared this post: Cooperation and the evolution of intelligence

Abstract The high levels of intelligence seen in humans, other primates, certain cetaceans and birds remain a major puzzle for evolutionary biologists, anthropologists and psychologists. It has long b…
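The paper builds its case with an artificial neural network model, which I won’t try to reproduce here. As a much cruder illustration of the underlying logic (costly cognition can pay off when it enables conditional play in a cooperative dilemma), here is a toy Python sketch. Everything in it is my own assumption rather than the authors’ model: the payoff matrix, the cost of cognition, and the reduction of “cognition” to a one-bit memory of the partner’s last move.

```python
# Toy sketch only (not the paper's model): agents play an iterated prisoner's
# dilemma. "Cognition" is reduced to memory depth: a memory-0 agent always
# defects, while a memory-1 agent reciprocates its partner's previous move.
# Memory carries a fitness cost, so conditional play spreads only when
# reciprocators are common enough to meet one another.
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
ROUNDS, PARTNERS, COST = 20, 5, 0.3   # all assumed parameters

def move(memory, partner_last):
    return partner_last if memory else 'D'

def play(mem_a, mem_b):
    """One iterated game; returns the total payoff of each player."""
    last_a = last_b = 'C'
    total_a = total_b = 0
    for _ in range(ROUNDS):
        a, b = move(mem_a, last_b), move(mem_b, last_a)
        pa, pb = PAYOFF[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        last_a, last_b = a, b
    return total_a, total_b

def step(pop):
    """Score everyone against random partners, then reproduce proportionally."""
    fitness = []
    for mem in pop:
        score = sum(play(mem, random.choice(pop))[0] for _ in range(PARTNERS))
        fitness.append(score / PARTNERS - COST * mem * ROUNDS)
    new = random.choices(pop, weights=[max(f, 0.01) for f in fitness], k=len(pop))
    return [1 - m if random.random() < 0.01 else m for m in new]   # rare mutation

pop = [random.randint(0, 1) for _ in range(200)]   # start near a 50/50 mix
for _ in range(100):
    pop = step(pop)
print("fraction of memory-1 (reciprocating) agents:", sum(pop) / len(pop))
```

With these made-up numbers the reciprocators take over within a few dozen generations; raise the cost or start them rare and defection wins instead. The interesting science in the paper is about how selection pressures like these scale up to real cognitive machinery, which this toy obviously does not address.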
April 13, 2012

RESHARED POST FROM GIDEON ROSENBLATT

I’m putting my comment here so I’m not taken as trolling the thread, which is filled with good vibes but very little critical discussion. I left a comment in one of the guessing threads, and then immediately muted it so I didn’t have a deluge of arbitrary numbers filling my notification stream. +Gideon Rosenblatt, these are important issues, and I totally appreciate and would encourage you to keep running social experiments like this in the future. Whatever they demonstrate, they serve as useful educational exercises to get us thinking about the right things. You are really good at doing it, and I genuinely appreciate your posts and your work. So I apologize for being a brat =D

____

I’m going to push a little more on the ambiguities in the language here, because there are important and interesting issues and I like to have my issues clear. Aristotle distinguished between five “intellectual virtues”. These virtues are:

episteme: scientific knowledge. Think of it as “book smarts”.
techne: craft knowledge. Think of it as skills and abilities, or “street smarts”. This is where we get our word “technology”.
phronesis: intelligence
nous: understanding
sophia: wisdom

These distinctions are very interesting; you can read more here: http://en.wikipedia.org/wiki/Nicomachean_Ethics#Book_VI:_Intellectual_virtue

I have a lot to say about techne, obviously, but the two terms that are of interest to us here are intelligence and wisdom. Aristotle thinks we are always aimed and directed at goals or projects, what he calls a telos, or an end. So intelligence is about our ability to realize those ends, and how well we can do it. There are lots of ways of accomplishing a goal, and our intelligence is, in a sense, a measure of our ability to do it. The better you are at seeing means and opportunities for accomplishing your […]
April 13, 2012

RESHARED POST FROM MATT UEBEL

The key here is to think about automated vehicles not as a change in my relation to my car (“it is driving for me”), but rather as an infrastructural change in the way we drive as a collective community practice (“they are driving for us”). Driving is one of those fundamentally “American” pastimes, so the collective ownership of our cars is perhaps the most appropriate way to introduce our culture to the collaborative efforts that the Digital Age requires of us. I give more thoughts on #systemhack issues involving #cars and #sustainability in the comments on +Matt Uebel‘s original post.

Matt Uebel originally shared this post: #RaceAgainstTheMachine #autocars #futurism

Futuristic cars are coming faster than you think
Cars that drive themselves are not just the stuff of sci-fi movies. The technology is real, the cars can now drive legally and the debate is starting on whether society is better off when software is behind the wheel.
April 12, 2012

RESHARED POST FROM SINGULARITY UTOPIA

I left a comment in +Singularity Utopia‘s post that I’m copying here for archiving purposes. I’ve been frustrated with the discussions surrounding the Singularity for a long time, and I’ve found the philosophical and theoretical foundations for the discussion of technology to be significantly lacking. I tend to take it out on SU because they do a good job highlighting the “mainstream” singularity view, so I mean no disrespect and I’m not trying to troll. I’m talking about these issues because I think they are serious and important.

I am still baffled why anyone thinks the singularity is an “event”, or if they do, why they would put it off into the future. Technological progress is already accelerating faster than our human ability to keep up, and it is already having dramatic and devastating consequences for ourselves and our planet. We are already surrounded by a variety of intelligent machines, each of which is performing tasks that baffle and dazzle and amaze us, and which few (if any) of us understand completely. Some of these machines are responsible for maintaining critical aspects of human well-being and social practices, and we’ve become dependent on their operation for our very being. Although both changes are definitely happening, and with accelerating pace, I’m not sure what break-point event the Singularity theorists expect to distinguish some future state from the existing states. If the claim is that there is some qualitative distinction between the pre- and post-Singularity world, I would offer that such changes have already occurred, as part of the Digital Revolution. The Digital Age begins in the late ’70s, but doesn’t really kick off full blast until the last decade, and really with the introduction of Google. The Digital Age is going strong, and shows no signs of stopping, but there’s […]
April 12, 2012

RESHARED POST FROM MATT UEBEL

I don’t know if this is bad form, but I’m archiving another comment here, this time in +Jonathan Langdale‘s post linked below. https://plus.google.com/u/0/109667384864782087641/posts/5fDf3r3AsHe

I need to write up a longer post on Turing, but there are two distinct parts to the test:

1) The machine uses language like a natural language user (it can carry on a conversation).
2) We take that language use to be an indicator of intelligence.

The whole history of attempting to build machines that “pass” the Turing Test has been an engineering project designed to solve criterion 1. It’s certainly not a trivial problem, but I think it is taken to be much more complicated than it need be. For instance, when I am talking to someone who doesn’t know English well, the language might be grammatically messy, even indecipherable, but I nevertheless tend to give a lot of charity and presume my interlocutor’s general intelligence anyway. So on this criterion, I have argued that some machines are already language users and have been for some time. We aren’t on the “brink” of passing it; we’ve shot right by it, and it is now commonplace to talk in semi-conversational language to our devices and expect those devices to understand (at least to some extent) the meaning and intention of those words. Google in particular is not only a language user, but its use of language is highly influential in the community of language users; denying that Google uses language therefore threatens to misunderstand what language use is. I’m currently having an extended philosophical discussion along these lines in this thread: https://plus.google.com/u/0/117828903900236363024/posts/RT5hG9a4dNd

But even granted that you build a conversational machine, it is still an open question whether we take that machine to be intelligent. Turing recognized very clearly that regardless of the machines […]
April 12, 2012

THE PRESENTATION OF THE DATA HERE IS BEAUTIFUL…

The presentation of the data here is beautiful. Visualization is a key to realizing the #attentioneconomy. h/t +Michael Chui

I saw some historians talking on Twitter about a very nice data visualization of shipping routes in the 18th and 19th centuries on Spatial Analysis. (Which is a great blog–looking through their archives, I think I’ve seen every previous post linked from somewhere else before). They make a basically static visualization. I wanted to see the ships in motion. Plus, Dael Norwood made some guesses about the increasing prominence of Pacific trade in the period that I would like to see confirmed. That got me interested in the ship data that they use, which consists of detailed logbooks that have been digitized for climatological purposes.

On the more technical side, I have been fiddling a bit lately with ffmpeg and ggplot (two completely unrelated systems, despite what the names imply) to make animated visualizations, and wanted to put one up. And it’s an interesting case; the historical data was digitized for climatological purposes, which means visualization is going to be one of the easiest ways to think about whether it might be usable for historical demonstration or analysis, as well.

100 Years of ships!
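For anyone curious about the mechanics of the “render frames, then stitch with ffmpeg” workflow mentioned above, here is a rough sketch. The original post used ggplot in R; I’ve swapped in matplotlib to keep all the examples here in one language, and the input file ship_positions.csv (with year, lon, lat columns) is entirely hypothetical.

```python
# Sketch of the frame-then-stitch workflow: draw one image per time slice,
# then let ffmpeg assemble the images into a video. The CSV of digitized ship
# positions is a hypothetical stand-in for the climatological logbook data.
import os
import subprocess

import matplotlib.pyplot as plt
import pandas as pd

logs = pd.read_csv("ship_positions.csv")          # hypothetical input file
os.makedirs("frames", exist_ok=True)

for i, year in enumerate(sorted(logs["year"].unique())):
    frame = logs[logs["year"] == year]
    fig, ax = plt.subplots(figsize=(8, 4))
    ax.scatter(frame["lon"], frame["lat"], s=1, alpha=0.3)
    ax.set_xlim(-180, 180)
    ax.set_ylim(-90, 90)
    ax.set_title(f"Ship positions, {year}")
    fig.savefig(f"frames/frame_{i:04d}.png", dpi=150)
    plt.close(fig)

# Ten frames (years) per second of video; yuv420p keeps the file widely playable.
subprocess.run(["ffmpeg", "-y", "-framerate", "10",
                "-i", "frames/frame_%04d.png",
                "-pix_fmt", "yuv420p", "ship_routes.mp4"], check=True)
```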
April 12, 2012

RESHARED POST FROM NEUROSCIENCE NEWS

They took a dataset that Prof Markram and others had collected a few years ago, in which they recorded the expression of 26 genes encoding ion channels in different neuronal types from the rat brain. They also had data classifying those types according to a neuron’s morphology, its electrophysiological properties and its position within the six, anatomically distinct layers of the cortex. They found that, based on the classification data alone, they could predict those previously measured ion channel patterns with 78 per cent accuracy. And when they added in a subset of data about the ion channels to the classification data, as input to their data-mining programme, they were able to boost that accuracy to 87 per cent for the more commonly occurring neuronal types.

“This shows that it is possible to mine rules from a subset of data and use them to complete the dataset informatically,” says one of the study’s authors, Felix Schürmann. “Using the methods we have developed, it may not be necessary to measure every single aspect of the behaviour you’re interested in.” Once the rules have been validated in similar but independently collected datasets, for example, they could be used to predict the entire complement of ion channels presented by a given neuron, based simply on data about that neuron’s morphology, its electrical behaviour and a few key genes that it expresses.

Cross-reference the #connectome debate from this lecture: https://plus.google.com/u/0/117828903900236363024/posts/Ky5piPLjhYd

Neuroscience News originally shared this post: Data Mining Opens the Door to Predictive Neuroscience

Researchers at the EPFL have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties an
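The article doesn’t say which data-mining algorithm the EPFL group actually used, so the sketch below only illustrates the shape of the task with an off-the-shelf random forest: classification features in, per-gene expression flags out. The feature names, gene targets and data are all synthetic placeholders.

```python
# Illustration only: predict which ion-channel genes a neuron expresses from
# its classification features (morphology class, electrical type, cortical
# layer). The data here is synthetic; the point is just the workflow of
# mining rules from one part of a dataset to fill in another.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
n_neurons, n_genes = 500, 26

# Fake classification data: morphology type (0-9), e-type (0-5), layer (1-6).
X = np.column_stack([rng.integers(0, 10, n_neurons),
                     rng.integers(0, 6, n_neurons),
                     rng.integers(1, 7, n_neurons)])
# Fake expression flags, loosely tied to the features so there is signal to mine.
Y = np.array([(X[:, 0] + X[:, 2] + g) % 3 == 0 for g in range(n_genes)]).T

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_train, Y_train)
print(f"per-gene prediction accuracy: {(model.predict(X_test) == Y_test).mean():.2f}")
```

The second result in the quote, boosting accuracy by feeding a measured subset of the ion-channel data back in, presumably amounts to adding those measured columns to the feature matrix before training.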
April 12, 2012

I’M SHARING THE OTHER TWO +JASON SILVA VIDEOS…

I’m sharing the other two +Jason Silva videos below:

To understand is to perceive patterns
RCVR

The pattern video is amazing. I like the RCVR video less because I think the idea is less clearly thought through. We aren’t just receivers, which implies the same kind of passivity as consumers. We also act on the information we receive; this is part of the massive feedback loop that is allowing us to self-organize, and characterizing ourselves as mere receivers threatens to miss the cybernetic dimension of this organization. In the #attentioneconomy , I describe the nodes not just as receivers but as attenders to try and capture this dynamic activity as more than mere receptivity. If you like the Web 2.0 schtick, call us ATNDRS, or better yet @ndrs, which lends itself nicely to words like @ndroid and the like.

____

On a separate note, if this tiny burst of activity is enough to summon +Jason Silva back to G+, I would very much like to get in contact and talk more. For the past 6 years I’ve taught a summer course called “Human Nature and Technology” at Princeton to gifted high school students through Johns Hopkins’ Center for Talented Youth program. We basically cover exactly these themes in a wide-ranging discussion of the philosophy of technology, covering everyone from Aristotle and Heidegger to Andy Clark, Larry Lessig, and Clay Shirky. If you are in the area, I’ll be teaching the course again this summer and I’d love to arrange for you to come talk to the class. It would be fascinating to brainstorm ideas for using your philosophical approach in the classroom, and for getting students interested in the realities and implications of human technological change, from a humanistic (and not merely engineering) perspective. If you are interested, please get […]
April 12, 2012

RESHARED POST FROM JASON GOLDMAN

Grainger trained baboons to recognise English words, and tell them apart from very similar nonsense words. The monkeys learned quickly, and could even categorise words they had never seen before. They weren’t anglophiles by any stretch. Instead, their abilities suggest that the act of reading words is just a more advanced version of the pattern-recognition skill that lets us identify letters. It’s a skill that was there long before the first human had scrawled the first letter. Stanislas Dehaene, one of the leading figures in the science of reading, thinks that the study is “extraordinarily exciting”. He says, “It fits very nicely with my own research, which suggests that reading relies, in part, on learning the purely visual statistics of letters and their combinations.”

Jason Goldman originally shared this post: Reading without understanding: baboons can tell real English words from fake ones by +Ed Yong

‘Wasp’ is an English word, but ‘telk’ is not. You and I know this because we speak English. But in a French laboratory, six baboons have also learned to tell the difference between genuine English words, and nonsense ones. They can sort their wasps from their telks, even though they have no idea that the former means a stinging insect and the latter means nothing. They don’t understand the language, but can ‘read’ nonetheless.

Reading without understanding: baboons can tell real English words from fake ones | Not Exactly Rocket Science | Discover Magazine
Uncategorized | ‘Wasp’ is an English word, but ‘telk’ is not. You and I know this because we speak English. But in a French laboratory, six baboons have also learned to
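Dehaene’s point about “the purely visual statistics of letters and their combinations” is easy to make concrete. The sketch below trains nothing more than bigram counts on a tiny, made-up word list, and that alone separates word-like strings from strings like “telk” or random letters. There is no meaning anywhere in the loop, which is roughly what the baboon result suggests about this layer of reading.

```python
# Toy illustration: score strings by the average log-probability of their
# letter bigrams, learned from a tiny sample of English words. Word-like
# strings get higher scores than strings containing rare letter pairs,
# with no semantics involved. The word list is a stand-in for a real corpus.
from collections import Counter
from math import log

training_words = ["wasp", "wish", "want", "test", "tell", "task",
                  "ship", "shop", "stop", "star", "nest", "best"]

bigrams = Counter()
for w in training_words:
    padded = f"_{w}_"                       # mark word boundaries
    bigrams.update(padded[i:i + 2] for i in range(len(padded) - 1))
total = sum(bigrams.values())

def score(word):
    """Average log-probability of the word's bigrams (add-one smoothed)."""
    padded = f"_{word}_"
    pairs = [padded[i:i + 2] for i in range(len(padded) - 1)]
    return sum(log((bigrams[p] + 1) / (total + 26 * 26)) for p in pairs) / len(pairs)

for candidate in ["wast", "telk", "shan", "xkqj"]:
    print(candidate, round(score(candidate), 2))
```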
April 12, 2012

H/T +MATT UEBEL +JASON SILVA

h/t +Matt Uebel +Jason Silva http://vimeo.com/29938326
April 25, 2012

THE POWER OF FEAR IN NETWORKED PUBLICS RADICAL…

The Power of Fear in Networked Publics

Radical transparency is particularly tricky in light of the attention economy. Not all information is created equal. People are far more likely to pay attention to some kinds of information than others. And, by and large, they’re more likely to pay attention to information that causes emotional reactions. Additionally, people are more likely to pay attention to some people. The person with the boring life is going to get far less attention than the person that seems like a trainwreck. Who gets attention – and who suffers the consequences of attention – is not evenly distributed. And, unfortunately, oppressed and marginalized populations who are already under the microscope tend to suffer far more from the rise of radical transparency than those who already have privilege. The cost of radical transparency for someone who is gay or black or female is different in Western societies than it is for a straight white male. This is undoubtedly a question of privacy, but we should also look at it through the prism of the culture of fear.

Full article: http://www.danah.org/papers/talks/2012/SXSW2012.html

Taken from http://boingboing.net/2012/04/25/how-a-culture-of-fear-thrives.html?utm_source=dlvr.it&utm_medium=twitter

h/t +Boing Boing +Rebecca Spizzirri

#attentioneconomy

http://vimeo.com/38139635
April 25, 2012

RESHARED POST FROM DERYA UNUTMAZ

More on this research here: http://depts.washington.edu/hints/video1b.shtml

Derya Unutmaz originally shared this post: This study was conducted on whether people hold a humanoid robot morally accountable for a harm it causes. In the video clip presented here, Robovie and a participant play a visual scavenger hunt. The participant has chosen a list of items to find in the lab, and is promised a $20 prize if he can identify at least seven items in 2 minutes. Robovie is in charge of keeping score and making the final decision as to whether or not the participant wins. Although the game is easy enough that all participants win, Robovie nonetheless announces that the participant identified only five items and thus did not win the prize. As you watch this video, note the tension in the participant’s voice. At the end of his interaction with Robovie, he even accuses Robovie of lying. While this participant’s reaction was on the strong end of the behaviors observed, 79% of participants did object to Robovie’s ruling and engage in some type of argument with Robovie.
April 25, 2012

RESHARED POST FROM DANAH BOYD

“Consider the various moral panics that surround young people’s online interactions. The current panic is centred on “cyberbullying”. Every day, I wake up to news reports about the plague of cyberbullying. If you didn’t know the data, you’d be convinced that cyberbullying was spinning out of control. The funny thing is that we have a lot of data on this topic, dating back for decades. Bullying is not on the rise and it has not risen dramatically with the onset of the internet. When asked about bullying measures, children and teens continue to report that school is the place where the most serious acts of bullying happen, where bullying happens the most frequently, and where they experience the greatest impact. This is not to say that young people aren’t bullied online; they are. But rather, the bulk of the problem actually happens in adult-controlled spaces like schools.

“What’s different has to do with visibility. If your son comes home with a black eye, you know something happened at school. If he comes home grumpy, you might guess. But for the most part, the various encounters that young people have with their peers go unnoticed by adults, even when they have devastating emotional impact. Online, interactions leave traces. Not only do adults bear witness to really horrible fights, but they can also see teasing, taunting and drama. And, more often than not, they blow the latter out of proportion. I can’t tell you how many calls I get from parents and journalists who are absolutely convinced that there’s an epidemic that must be stopped. Why? The scale of visibility means that fear is magnified.”

____

+danah boyd is doing amazing work on the #attentioneconomy . I posted her talk at SXSW earlier, and it is brilliant and definitely worth a watch. […]
April 25, 2012

RESHARED POST FROM JENNIFER OUELLETTE

Left the comment below in Jennifer’s original thread. Comments in either thread are welcome.

_

I agree with the main thrust of the thesis, but I have a quibble. It is minor, but I think it is worth stating. Look, identity politics matter, not just in the practical “that’s the way it is, get over it” sense, but in the deeper sense of “that’s how our brains work.” Specifically, we tend to think about the world and our place in it in terms of how we identify (label, name) ourselves, and a lot of our ability to socialize comes from our ability to identify (label, name) others. Yeah, some of that results in stereotype and caricature, but frankly it is amazing that our brains can do it at all, and worrying about “identity” is how the brain does it. We know we can overcome the unfortunate shortcomings of the algorithm, but it takes a lot of training and practice. It’s not as easy as saying “we should stop worrying about our identities”, because this is the result of literally hundreds of thousands of years of evolution as a eusocial primate. It’s not the kind of thing that changes with stern finger wagging.

To the topic at hand, identifying as a skeptic is something that is very important to a lot of people, and we shouldn’t downplay that importance. I was the faculty adviser to my university’s first secular student club. The club spent a lot of time talking about science and skepticism, but one thing that struck me was how many students used the club as a support group of sorts, in ways that felt closer to an LGBTQ meeting or an AA meeting than other kinds of affinity groups. It was very typical to hear students discuss their “coming […]
April 24, 2012

RESHARED POST FROM BRUNO GONÇALVES

This is just embarrassing. Krauss got destroyed by a scientifically trained philosopher in the Times, and instead of swallowing his pride he goes on a rant against the discipline. His understanding of the relations between science and philosophy is so full of errors and presumption that I don’t even know where to start. Here’s a big hint: if your argument requires going through some of the most important thinkers of the 20th century and determining whether they were “scientists” or “philosophers”, you are doing it wrong.

Bruno Gonçalves originally shared this post: Has Physics Made Philosophy and Religion Obsolete?

“I think at some point you need to provoke people. Science is meant to make people uncomfortable.”
April 24, 2012

RESHARED POST FROM RAJINI RAO

Bowerbirds are one of my favorite animal cyborgs! Consider the fact that peacocks and other birds grow elaborate feathers to attract mates. For them, it might take generations for an attractive feature to work its way into the gene pool. Bowerbirds use their bowers for the same purpose (to attract mates), but because their resources are external objects, bowerbirds can switch them around as often as they like to develop just the right mix to attract mates. In some species of bowerbird, the characteristics of the bowers will differ between individual birds of the same species, and those birds might entirely redecorate their bowers multiple times a season! The bowers are so elaborate that early Western explorers routinely mistook bowers for the homes of tiny people!

Bowerbirds have literally extended their reproductively salient characteristics into their bowers. This externalization has some surprising consequences: bowerbirds have become extraordinarily cunning and deceptive. Instead of fighting each other (as male peacocks tend to do), thievery and vandalism are common among mature male bowerbirds. It’s a great example of the use of technology in nature, and how it augments the drive for biological fitness.

Some great links below. David Attenborough has also done a few bowerbird specials that are worth finding and watching. Thanks for the link +John Baez!

Rajini Rao originally shared this post: BUILDING A BOUDOIR

Who knew that gardening was an act of seduction? Male bowerbirds are famed for their elaborate nests, decorated over the years with colorful trinkets and flowers. Researchers have now learned that Australian bowerbirds are gardeners with a flair for genetic engineering. • They noticed that bowers were always surrounded by a lush garden of potato bushes (Solanum ellipticum), with bright purple flowers and round green fruits. Observation showed that the birds were not choosing areas […]
April 24, 2012

RESHARED POST FROM PSYCHOLOGY WORLD

The truth is that everything you do changes your brain. Everything. Every little thought or experience plays a role in the constant wiring and rewiring of your neural networks. So there is no escape. Yes, the internet is rewiring your brain. But so is watching television. And having a cup of tea. Or not having a cup of tea. Or thinking about the washing on Tuesdays. Your life, however you live it, leaves traces in the brain.

Psychology World originally shared this post: Does the internet rewire your brain? By Tom Stafford, +BBC News

Being online does change your brain, but so does making a cup of tea. A better question to ask is what parts of the brain are regular internet users using. Read here: http://goo.gl/oAQlV
April 24, 2012

CLICKSTREAM DATA YIELDS HIGH-RESOLUTION…

Clickstream Data Yields High-Resolution Maps of Science

Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science.

Direct link to high-res image: http://www.plosone.org/article/slideshow.action?uri=info:doi/10.1371/journal.pone.0004803&imageURI=info:doi/10.1371/journal.pone.0004803.g005

Original article: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0004803

h/t +Heikki Arponen
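As a toy version of what the abstract describes: treat each session’s sequence of article requests as transitions between the journals (or fields) those articles belong to, aggregate the transitions into a weighted network, and that network is the raw material for the map. The sessions below are invented, and the published pipeline involves normalization and filtering that this sketch skips.

```python
# Minimal sketch of the idea behind clickstream maps of science: consecutive
# article requests within a session become transitions between journals, and
# the aggregated transitions form a weighted network to be laid out as a map.
from collections import defaultdict

import networkx as nx

# Each session is the ordered list of journals whose articles a user requested.
sessions = [
    ["Phys Rev Lett", "Phys Rev B", "Nano Letters"],
    ["Nano Letters", "J Phys Chem", "Phys Rev B"],
    ["Cell", "Nature", "Nano Letters"],
    ["Nature", "Cell", "J Immunol"],
]

weights = defaultdict(int)
for session in sessions:
    for a, b in zip(session, session[1:]):
        weights[frozenset((a, b))] += 1        # undirected co-access counts

G = nx.Graph()
for pair, count in weights.items():
    a, b = tuple(pair)
    G.add_edge(a, b, weight=count)

# The heaviest edges are the "roads" on the resulting map of science.
for a, b, data in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(f"{a} <-> {b}: {data['weight']}")
```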
April 23, 2012

RESHARED POST FROM OMAR LOISEL

Harvard research now shows that Nodal and Lefty — two proteins linked to the regulation of asymmetry in vertebrates and the development of precursor cells for internal organs — fit the model described by Turing six decades ago. In a paper published online in Science April 12, Alexander Schier, professor of molecular and cellular biology, and his collaborators Patrick Müller, Katherine Rogers, Ben Jordan, Joon Lee, Drew Robson, and Sharad Ramanathan demonstrate a key aspect of Turing’s model: that the activator protein Nodal moves through tissue far more slowly than its inhibitor Lefty. “That’s one of the central predictions of the Turing model,” Schier said. “So I think we can now say that Nodal and Lefty are a clear example of this model in vivo.”

Omar Loisel originally shared this post: Turing was right

Researchers at Harvard have shown that Nodal and Lefty — two proteins linked to the regulation of asymmetry in vertebrates and the development of precursor cells for internal organs — fit a mathematic…
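The prediction quoted here, that the inhibitor must spread faster than the activator, is easy to see in a toy simulation. The sketch below is a generic 1D activator-inhibitor system of the Gierer-Meinhardt type, not a model of Nodal and Lefty themselves, and every number in it is illustrative: with the inhibitor diffusing twenty times faster than the activator, a nearly uniform field breaks up into a periodic pattern of peaks.

```python
# Generic 1D activator-inhibitor sketch (Gierer-Meinhardt form with a small
# saturation term), not a Nodal/Lefty model. It illustrates the quoted
# prediction: when the inhibitor diffuses much faster than the activator
# (Dh >> Da), a near-uniform field self-organizes into periodic peaks.
import numpy as np

N, dx, dt, steps = 200, 1.0, 0.02, 20_000
Da, Dh = 0.5, 10.0              # slow activator, fast inhibitor: the key ratio

rng = np.random.default_rng(1)
a = 2.0 + 0.01 * rng.standard_normal(N)   # activator, near the uniform state
h = 2.0 + 0.01 * rng.standard_normal(N)   # inhibitor

def laplacian(u):
    """Second spatial derivative with periodic boundaries."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    production = a * a / (1.0 + 0.01 * a * a)            # saturating self-activation
    a += dt * (Da * laplacian(a) + production / h - a)   # activation damped by inhibitor
    h += dt * (Dh * laplacian(h) + a * a - 2.0 * h)      # activator drives inhibitor
    h = np.maximum(h, 1e-6)                              # keep the division safe

# Count the activator peaks that emerged from the near-uniform start.
peaks = np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean()))
print(f"activator peaks formed: {peaks}")
```

Swapping Da and Dh (fast activator, slow inhibitor) removes the instability and the field stays essentially uniform, which is the sense in which measuring the relative mobilities of Nodal and Lefty tests the model.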
April 23, 2012

RESHARED POST FROM REY JUNCO

Rey Junco originally shared this post: Automated Grading Software In Development To Score Essays As Accurately As Humans | Singularity Hub April 30 marks the deadline for a contest challenging software developers to create an automated scorer of student essays, otherwise known as a roboreader, that performs as good as a human expert grad…