April 14, 2012

RESHARED POST FROM ALEX SCHLEBER

Smartphones—extraordinarily powerful, mobile, data-network-connected computers equipped with GPS, accelerometers, and all sorts of other gee-whizzery—have become so ubiquitous so fast because they’re so remarkable (and because falling tech prices have quickly made them affordable). But because they’ve become so ubiquitous so fast, I think we underappreciate the revolutionary potential of a world in which powerful mini-computers are everywhere, and where every person has an unfathomable amount of information available all the time. Innovations like Twitter are dazzling and useful; they may well end up being the technological equivalent of a plasma globe: a shiny technological trinket that only hinted at the social and economic potential of the concepts upon which it was based. The potential of the smartphone age is deceptive. We look around and see more people talking on phones in more places and playing Draw Something when they’re bored. This is just the beginning. In time, business models, infrastructure, legal environments, and social norms will evolve, and the world will become a very different and dramatically more productive place.

Alex Schleber originally shared this post: Great #stats on technology adoption cycles from @asymco via +The Economist (which is noteworthy in and of itself; he is obviously becoming more widely noted as a mobile analyst). The rampant sub-10-year adoption of the smartphone by 50% of the market is also the likely reason why we are NOT in an unjustified mobile/tech bubble, and why paying $1B for Instagram was not a mistake. Related -> plus.google.com/112964117318166648677/posts/czadRruFzf6

The revolution to come: EARLIER this week, Matt Yglesias discussed an interesting analysis of penetration rates for various modern technologies, built around the piece of data that smartphones have now achieved 50% penetrati…
April 13, 2012

RESHARED POST FROM SAKIS KOUKOUVIS

Direct link: http://news.softpedia.com/news/Mathematical-Model-of-the-Brain-Developed-264350.shtml h/t +Kimberly Chapman

Sakis Koukouvis originally shared this post: Mathematical Model of the Brain Developed. Taking complex systems such as the Internet and social networks as examples, a group of experts at the University of Cambridge was able to create a mathematical model of the brain. Though simple, their tool provides a surprisingly complete statistical account of how different regions interact.
April 13, 2012

RESHARED POST FROM JOHN VERDON

The high levels of intelligence seen in humans, other primates, certain cetaceans and birds remain a major puzzle for evolutionary biologists, anthropologists and psychologists. It has long been held that social interactions provide the selection pressures necessary for the evolution of advanced cognitive abilities (the ‘social intelligence hypothesis’), and in recent years decision-making in the context of cooperative social interactions has been conjectured to be of particular importance. Here we use an artificial neural network model to show that selection for efficient decision-making in cooperative dilemmas can give rise to selection pressures for greater cognitive abilities, and that intelligent strategies can themselves select for greater intelligence, leading to a Machiavellian arms race. Our results provide mechanistic support for the social intelligence hypothesis, highlight the potential importance of cooperative behaviour in the evolution of intelligence and may help us to explain the distribution of cooperation with intelligence across taxa.

John Verdon originally shared this post: Cooperation and the evolution of intelligence.
April 13, 2012

RESHARED POST FROM GIDEON ROSENBLATT

I’m putting my comment here so I’m not taken as trolling the thread, which is filled with good vibes but very little critical discussion. I left a comment in one of the guessing threads, and then immediately muted it so I didn’t have a deluge of arbitrary numbers filling my notification stream. +Gideon Rosenblatt, these are important issues, and I totally appreciate them and would encourage you to keep running social experiments like this in the future. Whatever they demonstrate, they serve as useful educational exercises to get us thinking about the right things. You are really good at doing it, and I genuinely appreciate your posts and your work. So I apologize for being a brat =D

____

I’m going to push a little more on the ambiguities in the language here, because there are important and interesting issues and I like to have my issues clear. Aristotle distinguished between five “intellectual virtues”. These virtues are:

- episteme: scientific knowledge. Think of it as “book smarts”.
- techne: craft knowledge. Think of it as skills and abilities, or “street smarts”. This is where we get our word “technology”.
- phronesis: intelligence
- nous: understanding
- sophia: wisdom

These distinctions are very interesting; you can read more here: http://en.wikipedia.org/wiki/Nicomachean_Ethics#Book_VI:_Intellectual_virtue

I have a lot to say about techne, obviously, but the two terms that are of interest to us here are intelligence and wisdom. Aristotle thinks we are always aimed and directed at goals or projects, what he calls a telos, or an end. So intelligence is about our ability to realize those ends, and how well we can do it. There are lots of ways of accomplishing a goal, and our intelligence is, in a sense, a measure of our ability to do it. The better you are at seeing means and opportunities for accomplishing your […]
April 13, 2012

RESHARED POST FROM MATT UEBEL

The key here is to think about automated vehicles not as a change in my relation to my car (“it is driving for me”), but rather as an infrastructural change in the way we drive as a collective community practice (“they are driving for us”). Driving is one of those fundamentally “American” pastimes, so the collective ownership of our cars is perhaps the most appropriate way to introduce our culture to the collaborative efforts that the Digital Age requires of us. I give more thoughts on #systemhack issues involving #cars and #sustainability in the comments on +Matt Uebel‘s original post.

Matt Uebel originally shared this post: #RaceAgainstTheMachine #autocars #futurism . Futuristic cars are coming faster than you think. Cars that drive themselves are not just the stuff of sci-fi movies. The technology is real, the cars can now drive legally, and the debate is starting on whether society is better off when software is behind the wheel.
April 12, 2012

RESHARED POST FROM SINGULARITY UTOPIA

I left a comment in +Singularity Utopia‘s post that I’m copying here for archiving purposes. I’ve been frustrated with the discussions surrounding the Singularity for a long time, and I’ve found the philosophical and theoretical foundations for the discussion of technology to be significantly lacking. I tend to take it out on SU because they do a good job highlighting the “mainstream” singularity view, so I mean no disrespect and I’m not trying to troll. I’m talking about these issues because I think they are serious and important.

I am still baffled why anyone thinks the singularity is an “event”, or, if they do, why they would put it off into the future. Technological progress is already accelerating faster than our human ability to keep up, and it is already having dramatic and devastating consequences for ourselves and our planet. We are already surrounded by a variety of intelligent machines, each of which performs tasks that baffle and dazzle and amaze us, and which few (if any) of us understand completely. Some of these machines are responsible for maintaining critical aspects of human well-being and social practices, and we’ve become dependent on their operation for our very being.

Although both changes are definitely happening, and at an accelerating pace, I’m not sure what breaking-point event the Singularity theorists expect will distinguish some future state from the existing ones. If the claim is that there is some qualitative distinction between the pre- and post-Singularity world, I would offer that such changes have already occurred, as part of the Digital Revolution. The Digital Age begins in the late ’70s, but doesn’t really kick off full blast until the last decade, and really with the introduction of Google. The Digital Age is going strong, and shows no signs of stopping, but there’s […]
April 12, 2012

RESHARED POST FROM MATT UEBEL

I don’t know if this is bad form, but I’m archiving another comment here, this time in +Jonathan Langdale‘s post linked below. https://plus.google.com/u/0/109667384864782087641/posts/5fDf3r3AsHe

I need to write up a longer post on Turing, but there are two distinct parts to the test:

1) The machine uses language like a natural language user (it can carry on a conversation).
2) We take that language use to be an indicator of intelligence.

The whole history of attempting to build machines that “pass” the Turing Test has been an engineering project designed to solve criterion 1. It’s certainly not a trivial problem, but I think it is taken to be much more complicated than it needs to be. For instance, when I am talking to someone who doesn’t know English well, the language might be grammatically messy, even indecipherable, but I nevertheless tend to extend a lot of charity and presume my interlocutor’s general intelligence anyway. So on this criterion, I have argued that some machines are already language users and have been for some time. We aren’t on the “brink” of passing it; we’ve shot right by it, and it is now commonplace to talk in semi-conversational language to our devices and expect those devices to understand (at least to some extent) the meaning and intention of those words. Google in particular is not only a language user, but its use of language is highly influential in the community of language users; denying that Google uses language therefore threatens to misunderstand what language use is. I’m currently having an extended philosophical discussion along these lines in this thread: https://plus.google.com/u/0/117828903900236363024/posts/RT5hG9a4dNd

But even granting that you build a conversational machine, it is still an open question whether we take that machine to be intelligent. Turing recognized very clearly that regardless of the machines […]
April 12, 2012

THE PRESENTATION OF THE DATA HERE IS BEAUTIFUL…

The presentation of the data here is beautiful. Visualization is a key to realizing the #attentioneconomy. h/t +Michael Chui

I saw some historians talking on Twitter about a very nice data visualization of shipping routes in the 18th and 19th centuries on Spatial Analysis. (Which is a great blog; looking through their archives, I think I’ve seen every previous post linked from somewhere else before.) They make a basically static visualization. I wanted to see the ships in motion. Plus, Dael Norwood made some guesses about the increasing prominence of Pacific trade in the period that I would like to see confirmed. That got me interested in the ship data they use, which consists of detailed logbooks that have been digitized for climatological purposes.

On the more technical side, I have been fiddling a bit lately with ffmpeg and ggplot (two completely unrelated systems, despite what the names imply) to make animated visualizations, and wanted to put one up. And it’s an interesting case: historical data was digitized for climatological purposes, which means visualization is going to be one of the easiest ways to think about whether it might be usable for historical demonstration or analysis as well.

100 Years of ships!
April 12, 2012

RESHARED POST FROM NEUROSCIENCE NEWS

They took a dataset that Prof Markram and others had collected a few years ago, in which they recorded the expression of 26 genes encoding ion channels in different neuronal types from the rat brain. They also had data classifying those types according to a neuron’s morphology, its electrophysiological properties, and its position within the six anatomically distinct layers of the cortex. They found that, based on the classification data alone, they could predict those previously measured ion channel patterns with 78 per cent accuracy. And when they added in a subset of data about the ion channels to the classification data, as input to their data-mining programme, they were able to boost that accuracy to 87 per cent for the more commonly occurring neuronal types.

“This shows that it is possible to mine rules from a subset of data and use them to complete the dataset informatically,” says one of the study’s authors, Felix Schürmann. “Using the methods we have developed, it may not be necessary to measure every single aspect of the behaviour you’re interested in.” Once the rules have been validated in similar but independently collected datasets, for example, they could be used to predict the entire complement of ion channels presented by a given neuron, based simply on data about that neuron’s morphology, its electrical behaviour and a few key genes that it expresses.

Cross-reference the #connectome debate from this lecture: https://plus.google.com/u/0/117828903900236363024/posts/Ky5piPLjhYd

Neuroscience News originally shared this post: Data Mining Opens the Door to Predictive Neuroscience. Researchers at the EPFL have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties an
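The general idea here (mine rules from the classified portion of a dataset, then use them to fill in unmeasured values) can be sketched in a few lines. This toy example is purely illustrative: the neuron classes, expression labels, and majority-rule method are my own assumptions, not the study's actual pipeline.

```python
# Toy sketch: predict whether a (hypothetical) ion-channel gene is expressed,
# using only classification data (cortical layer, morphology class).
from collections import Counter, defaultdict

# Hypothetical labeled neurons: (layer, morphology) -> expressed? (1/0)
train = [
    (("L4", "pyramidal"), 1), (("L4", "pyramidal"), 1), (("L4", "pyramidal"), 0),
    (("L5", "basket"),    0), (("L5", "basket"),    0), (("L5", "basket"),    1),
]
test = [(("L4", "pyramidal"), 1), (("L5", "basket"), 0)]

def mine_rules(samples):
    """For each (layer, morphology) class, record the majority expression label."""
    by_class = defaultdict(Counter)
    for cls, label in samples:
        by_class[cls][label] += 1
    return {cls: counts.most_common(1)[0][0] for cls, counts in by_class.items()}

rules = mine_rules(train)
correct = sum(rules[cls] == label for cls, label in test)
accuracy = correct / len(test)
print(f"accuracy: {accuracy:.2f}")
```

On this made-up data the mined rules predict both held-out neurons correctly; the study's reported 78 and 87 per cent figures come from far richer features and methods than a per-class majority vote.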
April 12, 2012

I’M SHARING THE OTHER TWO +JASON SILVA VIDEOS…

I’m sharing the other two +Jason Silva videos below:

To understand is to perceive patterns
RCVR

The pattern video is amazing. I like the RCVR video less because I think the idea is less clearly thought through. We aren’t just receivers, which implies the same kind of passivity as consumers. We also act on the information we receive; this is part of the massive feedback loop that is allowing us to self-organize, and characterizing ourselves as mere receivers threatens to miss the cybernetic dimension of this organization. In the #attentioneconomy, I describe the nodes not just as receivers but as attenders, to try and capture this dynamic activity as more than mere receptivity. If you like the Web 2.0 schtick, call us ATNDRS, or better yet @ndrs, which lends itself nicely to words like @ndroid and the like.

____

On a separate note, if this tiny burst of activity is enough to summon +Jason Silva back to G+, I would very much like to get in contact and talk more. For the past 6 years I’ve taught a summer course called “Human Nature and Technology” at Princeton to gifted high school students through Johns Hopkins’ Center for Talented Youth program. We basically cover exactly these themes in a wide-ranging discussion of the philosophy of technology, covering everyone from Aristotle and Heidegger to Andy Clark, Larry Lessig, and Clay Shirky. If you are in the area, I’ll be teaching the course again this summer and I’d love to arrange for you to come talk to the class. It would be fascinating to brainstorm ideas for using your philosophical approach in the classroom, and for getting students interested in the realities and implications of human technological change, from a humanistic (and not merely engineering) perspective. If you are interested, please get […]
April 12, 2012

RESHARED POST FROM JASON GOLDMAN

Grainger trained baboons to recognise English words, and tell them apart from very similar nonsense words. The monkeys learned quickly, and could even categorise words they had never seen before. They weren’t anglophiles by any stretch. Instead, their abilities suggest that the act of reading words is just a more advanced version of the pattern-recognition skill that lets us identify letters. It’s a skill that was there long before the first human had scrawled the first letter.

Stanislas Dehaene, one of the leading figures in the science of reading, thinks that the study is “extraordinarily exciting”. He says, “It fits very nicely with my own research, which suggests that reading relies, in part, on learning the purely visual statistics of letters and their combinations.”

Jason Goldman originally shared this post: Reading without understanding: baboons can tell real English words from fake ones, by +Ed Yong. ‘Wasp’ is an English word, but ‘telk’ is not. You and I know this because we speak English. But in a French laboratory, six baboons have also learned to tell the difference between genuine English words and nonsense ones. They can sort their wasps from their telks, even though they have no idea that the former means a stinging insect and the latter means nothing. They don’t understand the language, but can ‘read’ nonetheless.
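Dehaene's point about "the purely visual statistics of letters and their combinations" can be sketched computationally: score a string by how familiar its letter pairs are, without any notion of meaning. The training words and scoring scheme below are my own illustrative assumptions, not the study's actual stimuli or model.

```python
# Toy sketch: distinguish word-like strings from nonsense by bigram statistics.
from collections import Counter
import math

# Tiny made-up training vocabulary; 'wasp' itself is deliberately absent.
training_words = ["was", "wash", "gasp", "grasp", "clasp", "asp"]

def bigrams(word):
    """All adjacent letter pairs in a word, e.g. 'wasp' -> ['wa', 'as', 'sp']."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

counts = Counter(b for w in training_words for b in bigrams(w))
total = sum(counts.values())
vocab = 26 * 26  # all possible letter pairs, for add-one smoothing

def familiarity(word):
    """Mean log-probability of the word's bigrams (higher = more word-like)."""
    pairs = bigrams(word)
    return sum(math.log((counts[b] + 1) / (total + vocab)) for b in pairs) / len(pairs)

# 'wasp' was never seen in training, but all of its bigrams were;
# 'telk' shares none of them, so it scores lower.
print(familiarity("wasp") > familiarity("telk"))
```

Like the baboons, the model "reads" without understanding: it categorises an unseen string purely from the statistics of its parts.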
April 12, 2012

H/T +MATT UEBEL +JASON SILVA

h/t +Matt Uebel +Jason Silva http://vimeo.com/29938326