April 19, 2012

RESHARED POST FROM REBECCA MACKINNON

In an interview with the Guardian, Berners-Lee said: “My computer has a great understanding of my state of fitness, of the things I’m eating, of the places I’m at. My phone understands from being in my pocket how much exercise I’ve been getting and how many stairs I’ve been walking up and so on.” Exploiting such data could provide hugely useful services to individuals, he said, but only if their computers had access to personal data held about them by web companies. “One of the issues of social networking silos is that they have the data and I don’t … There are no programmes that I can run on my computer which allow me to use all the data in each of the social networking systems that I use plus all the data in my calendar plus in my running map site, plus the data in my little fitness gadget and so on to really provide an excellent support to me.”

Rebecca MacKinnon originally shared this post:

Tim Berners-Lee: demand your data from Google and Facebook
Exclusive: world wide web inventor says personal data held online could be used to usher in new era of personalised services
April 19, 2012

RESHARED POST FROM DREW SOWERSBY

“Many broadly significant scientific questions, ranging from self-organization and information flow to systemic robustness, can now be properly formalized within the emerging theory of networks,” says Adilson E. Motter, professor of physics and astronomy. “I was thus humbled to be invited to write such a timely piece.” The authors argue that, as network research matures, there will be increasing opportunities to exploit network concepts to also engineer new systems with desirable properties that may not be readily available in existing ones.

Drew Sowersby originally shared this post:

Networks and the patterns they express #Networks #Patterns #Collaboration #Complexity

They are just beginning to establish how to properly read patterns within networks. I found the following insight to mesh with my own intuition about how to proceed! “One such method mentioned in the article aims at resolving the internal structure of complex networks by organizing the nodes into groups that share something in common, even if researchers do not know a priori what that thing is.”

Futurity.org – To control a network, find the pattern
Research news from leading universities
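Grouping nodes into clusters that “share something in common” without saying in advance what that is amounts, in network terms, to community detection. The article does not name a particular algorithm, so the sketch below uses greedy modularity maximization from networkx purely as an illustrative stand-in; the toy graph is likewise my own assumption.

```python
# A minimal sketch of grouping network nodes by shared structure ("community
# detection"). Greedy modularity maximization is an illustrative stand-in, not
# the method used in the research described above; the toy graph is made up.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy network: two tight triangles joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0),   # cluster A
                  (3, 4), (4, 5), (5, 3),   # cluster B
                  (2, 3)])                  # bridge between the clusters

# Partition nodes into groups without specifying a priori what they share.
for i, group in enumerate(greedy_modularity_communities(G)):
    print(f"group {i}: {sorted(group)}")
```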
April 18, 2012

RESHARED POST FROM WAYNE RADINSKY

We all know that, for example, the iPad is assembled from lots of high tech suppliers. The piece below thinks about that in terms of the knowledge required to build an iPad. That knowledge is more than a single company can handle and it must be spread among a number of companies.

In the early days, life was simple. We did important things like make spears and arrowheads. The amount of knowledge needed to make these items, however, was small enough that a single person could master their production. There was no need for a large division of labor and new knowledge was extremely precious. If you got new knowledge, you did not want to share it. After all, in a world where most knowledge can fit in someone’s head, stealing ideas is easy, and appropriating the value of the ideas you generate is hard. At some point, however, the amount of knowledge required to make things began to exceed the cognitive limit of a single human being. Things could only be done in teams, and sharing information among team members was required to build these complex items. Organizations were born as our social skills began to compensate for our limited cognitive skills. Society, however, kept on accruing more and more knowledge, and the cognitive limit of organizations, just like that of the spearmaker, was ultimately reached. … Today … most products are combinations of knowledge and intellectual property that resides in different organizations. Our world is less and less about the single pieces of intellectual property and more and more about the networks that help connect these pieces. The total stock of information used in these ecosystems exceeds the capacity of single organizations because doubling the size of huge organizations does not double the capacity of that organization […]
April 17, 2012

RESHARED POST FROM MICHAEL CHUI

I wrote the following rant in +Michael Chui‘s thread.
______
There is nothing wrong with the entrepreneurial spirit. In the biological world they call entrepreneurialism “opportunism”, and being opportunistic is vital for evolutionary success. The problem is that entrepreneurialism must exist within an incentive structure, and ours encourages and rewards deplorable and inhumane acts, which has rapidly led to an overwhelming correlation between entrepreneurs and the most vile excesses of capitalism. The article you post here suggests we discourage opportunism; I think this is a losing suggestion; it’s not the kind of thing that will catch on. The far more promising solution is to change the incentive structure to stop encouraging inhumanity, instead of trying to force ourselves to make up for the deficiencies of our organizational schema. We can’t do it; it’s pretty clear by now that even our best efforts are promptly quashed by the crushing inertia of the existing order of things.

I’ve seen at least a few posts over the last few days describing the transition to a “cashless” economy. The universal presumption of all these articles is that the basic incentive structure of money will stay the same, and that the only change will be at the interface: instead of handling little pieces of paper, the flow of cash will all be digital. The small-mindedness of these articles frustrates me to no end. Obviously the transactions will be increasingly digital; they already are largely digital, and it isn’t hard to imagine the pieces of paper going away. Such a change would be about as interesting as announcing that they are painting all the money pink; it’s a completely superficial difference. But seriously, everyone, if we are going to make such a transition anyway, why not redesign the thing from scratch? With a little […]
April 17, 2012

THESE ARE AWESOME! THANKS, +REBECCA SPIZZIRRI…

These are awesome! Thanks, +Rebecca Spizzirri

Mapping Great Debates: Can Computers Think?
A set of 7 poster-sized argumentation maps that chart the entire history of the debate. The maps outline arguments put forth since 1950 by more than 380 cognitive scientists, philosophers, artificial intelligence researchers, mathematicians, and others. Every map presents 100 or more major claims, each of which is summarized succinctly and placed in visual relationship to the other arguments that it supports or disputes. The maps, thus, both show the intellectual history of this interdisciplinary debate and display its current status. Claims are further organized into more than 70 issue areas, or major branches of the arguments.

http://www.macrovu.com/CCTGeneralInfo.html
CCT General Information
MacroVU, Inc. is a leader in visual language and information design books and training courses for business, science, and technology
April 17, 2012

RESHARED POST FROM JEFF SAYRE

Jeff Sayre originally shared this post: A Rare Look at Google’s Secret Networking Infrastructure This is a fascinating read! /by +Steven Levy\ for +WIRED #Google #InternetBackbone #networking #infrastructure Going With the Flow: Google’s Secret Switch to the Next Wave of Networking | Wired Enterprise | Wired.com Google treats its infrastructure like a state secret, so Google czar of infrastructure Urs Hölzle rarely ventures out into the public to speak about it. Today is one of those rare days. At the Open Ne…
April 17, 2012

RESHARED POST FROM KENNETH READ

Thus the proposed PageRank Opinion Formation (PROF) model takes into account the situation in which an opinion of an influential friend from high ranks of the society counts more than an opinion of a friend from lower society level. We argue that the PageRank probability is the most natural form of ranking of society members. Indeed, the efficiency of PageRank rating is demonstrated for various types of scale-free networks including the World Wide Web (WWW), Physical Review citation network, scientific journal rating, ranking of tennis players, Wikipedia articles, the world trade network and others. Due to the above argument we consider that the PROF model captures the reality of social networks and below we present the analysis of its interesting properties.
_______
I’ve posted four distinct articles describing various methods for modeling the #attentioneconomy today, in case anyone happened to notice. Hopefully the scientists involved are also working on popularized texts to help the public understand what they are doing. I’m trying to describe it as best I can, but I’m worried that the science is outpacing my attempts to clarify. I think that’s a good kind of problem. I’m really not sure.

Kenneth Read originally shared this post:

PageRank. Imagine simulating or even predicting opinion formation on large social networks, and the preservation of opinions in small circles. Are the tools of theoretical physics relevant and up to the challenge?… Boltzmann meets Twitter tonight.

[1204.3806] PageRank model of opinion formation on social networks
Abstract: We propose the PageRank model of opinion formation and investigate its rich properties on real directed networks of Universities of Cambridge and Oxford, LiveJournal and Twitter. In this mod…
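For readers who want to see the idea in code, here is a minimal sketch of PageRank-weighted opinion formation in the spirit of the PROF model quoted above. The actual update rule and parameters in arXiv:1204.3806 differ in detail (the paper weights both ingoing and outgoing links and studies real networks such as LiveJournal and Twitter); everything below, from the random graph to the sign-of-weighted-sum update, is an illustrative assumption rather than the authors' model.

```python
# A toy PageRank-weighted opinion model in the spirit of PROF. Details differ
# from the paper (arXiv:1204.3806); this only illustrates the core idea that
# an influential (high-PageRank) friend's opinion counts for more.
import random
import networkx as nx

random.seed(0)
G = nx.gnp_random_graph(100, 0.05, directed=True, seed=0)  # stand-in network
rank = nx.pagerank(G)                                      # influence weights
opinion = {v: random.choice([-1, +1]) for v in G}          # binary opinions

for _ in range(20):  # synchronous update rounds (an assumption)
    new_opinion = {}
    for v in G:
        # Opinions of the nodes linking to v, weighted by their PageRank.
        weighted = sum(rank[u] * opinion[u] for u in G.predecessors(v))
        new_opinion[v] = opinion[v] if weighted == 0 else (1 if weighted > 0 else -1)
    opinion = new_opinion

share = sum(1 for o in opinion.values() if o == +1) / G.number_of_nodes()
print(f"final share holding opinion +1: {share:.2f}")
```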
April 17, 2012

RESHARED POST FROM TECHNICS ?

TECHNICS ? originally shared this post:

ENGINEERING • ROBOTICS • ACM-R5 Amphibious Robosnake
————————————————————————————————————————
The eight-kilogram reptile is powered by a 30-minute lithium ion battery, during which time the remote operator sets its general direction while sensors feeding a 32-bit microprocessor guide the actual underwater acrobatics and terra firma terrain negotiation. Like most technology, this will initially be used in a humanitarian capacity to help locate victims of earthquakes and other disasters . . .
————————————————————————————————————————
Read more ? goo.gl/SY5i3
April 17, 2012

TURING’S INTELLIGENT MACHINES THIS WILL…

Turing’s intelligent machines

This will be the first in a series of essays discussing Turing’s view of artificial intelligence. You can find some relevant links for further consideration at the bottom of the post. Questions, comments, and suggestions are appreciated!

I: Turing’s prediction

In his 1950 paper Computing Machinery and Intelligence, Turing gives one of the first systematic philosophical treatments of the question of artificial intelligence. Philosophers back to Descartes have worried about whether “automatons” were capable of thinking, but Turing pioneered the invention of a new kind of machine that was capable of performances unlike any machine that had come before. This new machine was called the digital computer, and instead of doing physical work like all other machines before it, the digital computer was capable of doing logical work. This capacity for abstract symbolic processing, for reasoning, had been taken as the fundamentally unique distinction of the human mind since the time of Aristotle, and yet suddenly we were building machines that were capable of automating the same formal processes. When Turing wrote his essay, computers were still largely the stuff of science fiction; the term “computer” hadn’t really settled into popular use, mostly because people weren’t really using computers. Univac’s introduction in the 1950 census effort and its prediction of the 1952 presidential election were still a few years in the future, and computing played virtually no role in the daily lives of the vast majority of people. In lieu of a better name, the press would describe the new digital computers as “mechanical brains”, and this rhetoric fed into the public’s uncertainty and fear of these unfamiliar machines. Despite his short life, Turing’s vision was long. His private letters show that he felt some personal stake in the popular acceptance of these “thinking machines”, and his 1950 essay […]
April 17, 2012

RESHARED POST FROM BRUNO GONÇALVES

Recent empirical evidence has shown that enabling collective intelligence by introducing social influence can be detrimental to the aggregate performance of a population (Lorenz et al. 2011). By social influence, we understand the pervasive tendency of individuals to conform to the behavior and expectations of others (Kahan 1997). In separate experiments, Lorenz et al. asked participants to re-evaluate their opinions on quantitative subjects over several rounds and under three information spreading scenarios — no information about others’ estimations (control group), the average of all opinions in each round, and full information on other subjects’ judgements. They found evidence that under the latter two regimes, the diversity in the population decreased, while the collective deviation from the truth increased. This result justified the disheartening conclusion that allowing people to learn about others’ behaviours and adapt their own as a response does not always lead to the group acting “wiser”. Rather, as the authors posited, not only is the population jointly convinced of a wrong result, but even the simple aggregation technique of the wisdom of crowds is deteriorated. From a policy-maker’s perspective, such groups are, thus, not wise. Current research has not yet investigated thoroughly the theoretical link between social influence and its effect on the wisdom of crowds. In this paper, we build upon the empirical study in (Lorenz et al. 2011) by developing a formal model of social influence. Our goal is to unveil whether the effects of social influence are unconditionally positive or negative, or whether its ultimate role is mediated through some mechanism, so that the effect on the group wisdom is only indirect. We adopt a minimalistic agent-based model, which successfully reproduces the findings of the said study and gives enough insight to draw more general conclusions. In particular, we confirm that small amounts […]
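As a companion to the excerpt, here is a toy version of the Lorenz-et-al.-style setup: agents start with independent noisy estimates of a quantity and then repeatedly pull their estimates toward the group mean. The numbers and the linear update rule are my own illustrative assumptions, not the agent-based model from the paper; the point is simply to show opinion diversity collapsing under social influence.

```python
# Toy illustration of social influence in a wisdom-of-crowds setting. The
# update rule and all numbers are illustrative assumptions, not the model
# from the paper discussed above.
import random
import statistics

random.seed(1)
TRUTH = 100.0
N_AGENTS, ROUNDS, INFLUENCE = 50, 10, 0.5   # INFLUENCE = weight given to others

# Independent noisy initial estimates (log-normal, median roughly at TRUTH).
estimates = [random.lognormvariate(4.6, 0.5) for _ in range(N_AGENTS)]

def report(label, xs):
    print(f"{label}: mean = {statistics.mean(xs):6.1f}, stdev = {statistics.stdev(xs):5.1f}")

report("before influence", estimates)
for _ in range(ROUNDS):
    group_mean = statistics.mean(estimates)
    estimates = [(1 - INFLUENCE) * x + INFLUENCE * group_mean for x in estimates]
report("after influence ", estimates)
# Diversity (stdev) collapses while the group mean gets no closer to TRUTH:
# the crowd grows more confident without growing any wiser.
```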
April 17, 2012

TODAY MARKS AN IMPORTANT MILESTONE FOR WOLFRAM…

Today marks an important milestone for Wolfram|Alpha, and for computational knowledge in general: for the first time, Wolfram|Alpha is now on average giving complete, successful responses to more than 90% of the queries entered on its website (and with “nearby” interpretations included, the fraction is closer to 95%). I consider this an impressive achievement—the hard-won result of many years of progressively filling out the knowledge and linguistic capabilities of the system. The picture below shows how the fraction of successful queries (in green) has increased relative to unsuccessful ones (red) since Wolfram|Alpha was launched in 2009. And from the log scale in the right-hand panel, we can see that there’s been a roughly exponential decrease in the failure rate, with a half-life of around 18 months. It seems to be a kind of Moore’s law for computational knowledge: the net effect of innumerable individual engineering achievements and new ideas is to give exponential improvement. http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/ thx +Peter Asaro Stephen Wolfram Blog : Overcoming Artificial Stupidity Progressive improvements allow Wolfram|Alpha to give successful responses 90% of the time. Stephen Wolfram shares some quirky answers that have been corrected along the way.
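As a quick sanity check of the arithmetic in the quoted passage: an exponentially decaying failure rate with an 18-month half-life behaves like f(t) = f0 · 0.5^(t/18), with t in months since launch. The launch-time failure rate below is an assumed, illustrative figure rather than Wolfram's actual data; it simply shows how an 18-month half-life is consistent with reaching roughly 90% success about three years after the 2009 launch.

```python
# Back-of-the-envelope model of the quoted "half-life of around 18 months".
# f0 is an assumed launch-time failure rate, not a figure from Wolfram.
f0 = 0.40                       # illustrative failure rate at launch (2009)
for months in (0, 18, 36):      # launch, 18 months in, roughly spring 2012
    failure = f0 * 0.5 ** (months / 18)
    print(f"t = {months:2d} months: failure ≈ {failure:.0%}, success ≈ {1 - failure:.0%}")
```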
April 16, 2012

RESHARED POST FROM MATT UEBEL

Matt Uebel originally shared this post: #futurism #science #singularity #optimism #hope #awe #youtube @jason_silva.