Building the greatest artificial intelligence lab on Earth

Just read this on Mind Hacks… it looks like Google is becoming an AI company, and with Ray Kurzweil and other AI and transhumanist theoreticians at the helm, what should we expect from Google in the future? Looking at the $3.2 billion investment in Nest Labs alone, not to mention all the other companies it has bought up lately, one wonders what “deep learning” and the future of data mining hold for our freedom. One of the investors in DeepMind told reporters at the technology publication Re/code two weeks ago that Google is starting the next great “Manhattan project of AI”. As the investor continued: “If artificial intelligence was really possible, and if anybody could do it, this will be the team. The future, in ways we can’t even begin to imagine, will be Google’s.”

Kurzweil says that his main mission is to offer an AI system based on natural language: “my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.” Continuing, he says, “Google will know the answer to your question before you have asked it. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.” Who needs Big Brother when you have Google in your head? And with Google in collusion with DARPA initiatives, who is to say what military and securitization issues will arise from such systems of intelligence? (see Google dominates Darpa robotics…) Will WorldMind 1.0 be the military’s secret initiative to take control not only of all information on the web, but of those hooked into its virtual playpen of false delights? Instead of “dropping out” as my fellow hippies did in the sixties, maybe we should soon think about unplugging, disconnecting, and cutting the neurocircuits that are being rewired by the global brain. Or is it already too late?

Orwell wrote of Newspeak… which in our time is becoming “GoogleSpeak”, your friendly avatar of the information highway. What next? A little smiley-faced icon on your car’s Google visor, iPhone, or ThinkPad, an avatar that follows you everywhere 24/7, chattering away about this or that… all the while smiling as it relays your deepest medical, social, private, or intimate messages to the NSA or any of a multitude of other cyberagencies for data crystallization and surveillance recon. Oh, the wonders of the control society… blah, blah, blah… the naturalization of security in our age: GoogleSpeak is your friend, download her now! Or, better yet, let GoogleMind(tm) back up your brainwaves today; don’t lose another mindless minute of your action-filled life: let the GoogleMeisters upload your brain patterns to the Cloud…

As John Foreman remarks at GigaOm on data privacy and machine learning…

“If an AI model can determine your emotional makeup (Facebook’s posts on love certainly betray this intent), then a company can select from a pool of possible ad copy to appeal to whatever version of yourself they like. They can target your worst self — the one who’s addicted to in-app payments in Candy Crush Saga. Or they can appeal to your aspirational best self, selling you that CrossFit membership at just the right moment.

In the hands of machine learning models, we become nothing more than a ball of probabilistic mechanisms to be manipulated with carefully designed inputs that lead to anticipated outputs.” And, quoting Viktor Frankl, he continues: “‘A human being is a deciding being.’ But if our decisions can be hacked by model-assisted corporations, then we have to admit that perhaps we cease to be human as we’ve known it. Instead of being unique or special, we all become predictable and expected, nothing but products of previous measured actions.” In this sense, what Deleuze once described as the “dividual” – “a physically embodied human subject that is endlessly divisible and reducible to data representations via the modern technologies of control” – is becoming naturalized in this new world of GoogleSpeak. Just another happy netizen of the slaveworlds of modern globalism, where even the best and brightest minds become grist for the noosphere mill of the praxeological GoogleMind(tm).

Mind Hacks

The Guardian has an article on technologist Ray Kurzweil’s move to Google that also serves to review how the search company is building an artificial intelligence super-lab.

Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find, or at least, rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for…


7 thoughts on “Building the greatest artificial intelligence lab on Earth”

  1. In “The Heart of Matter,” Teilhard de Chardin writes: “[H]ow can we fail to see that the process of convergence from which we emerged, body and soul, is continuing to envelop us more closely than ever, to grip us, in the form of—under the folds of, we might say—a gigantic planetary contraction?

    The irresistible ‘setting’ or cementing together of a thinking mass (Mankind) which is continually more compressed upon itself by the simultaneous multiplication and expansion of its individual elements: there is not one of us, surely, who is not almost agonizingly aware of this, in the very fibre of his being. This is one of the things that no one today would even try to deny: we can all see the fantastic anatomical structure of a vast phylum whose branches, instead of diverging as they normally do, are ceaselessly folding in upon one another ever more closely, like some monstrous inflorescence—like, indeed, an enormous flower folding-in upon itself; the literally global physiology of an organism in which production, nutrition, the machine, research, and the legacy of heredity are, beyond any doubt, building up to planetary dimensions; the increasing impossibility of the individual’s attaining economic and intellectual self-sufficiency”


  2. Perhaps the people at Google should read “What Computers Still Can’t Do” by Hubert Dreyfus. I think that would soften their cough.


    • Back then, in 1972, that was true, but now they are already mimicking certain functions of the brain. It’s not the whole shebang, granted, but the neurosciences are still young as well. Yet we do know that decision functions are situated outside consciousness and are more than likely pattern based. It’s this type of feedback looping that they’re working on in what they term “deep learning”. Watson beat the two great Jeopardy! players after reading the complete Wikipedia. What happens with a system that has access to the complete knowledge base of every online university and foundation journal? Remember that certain eliminativists and functionalists in the neurosciences don’t see second-order intentionality, aboutness, directedness, etc. as real. They’ve been able to pinpoint the actual decision-making processes in the subsystems of the brain before anything reaches consciousness. This is why there are so many battles about “free will” at the moment. Scary stuff when you think about it…
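      To make that “feedback looping” a bit more concrete, here is a toy sketch in Python (my own illustration, nothing to do with Google’s actual systems): a single artificial neuron guesses, measures its error, and feeds the error back into its weights. Stack many layers of this loop on top of one another and you have the skeleton of what they call deep learning.

```python
import numpy as np

# Toy "feedback loop": one sigmoid neuron learns the OR pattern from
# examples by repeatedly guessing, measuring its error, and feeding
# that error back into its weights.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 1.0, 1.0, 1.0])                           # OR targets

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, start random
b = 0.0                  # bias
lr = 0.5                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)   # forward pass: make a guess
    error = pred - y            # feedback: how wrong was the guess?
    w -= lr * (X.T @ error)     # nudge weights against the error
    b -= lr * error.sum()       # nudge the bias too

print(np.round(sigmoid(X @ w + b)))  # learned OR: [0. 1. 1. 1.]
```

      Whether piling up such loops ever amounts to “understanding”, of course, is exactly what is at issue in this thread.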


      • Back in 1972 Dreyfus wrote the book “What Computers Can’t Do”; in 1992 he revised it and cheekily called it “What Computers Still Can’t Do”, and I would contend that he is correct, because the premise of AI was mistaken in the first place. Has AI captured the “rules” of everyday knowledge? Are “rules” even used by humans in their everyday knowledge? I don’t think so. I could be mistaken here, but are expert systems not the pinnacle of all this research? They are very useful. I get quite jaundiced about terms like “deep learning” because they simply cannot be defined adequately. As for neuroscience research, it is extremely useful for the diagnosis and treatment of brain disorders. But to interpret brain scans as some indicator of learning must be considered premature extrapolation. The workings of the organ itself cannot tell you anything about learning, love, hate, superstition, etc., assuming there is no malfunction or disease.


      • That was my point: they’re no longer trying to mimic every aspect of the brain in AI, only aspects of it. And yes, I read the later work, which was updated to cover connectionism etc. But in the past five years things have changed drastically. I’m not a scientist, so I cannot give you the details of the logic behind “deep learning”, and at the moment won’t try. And obviously, if you read my post, you can see that I’m very much opposed to this trend of commercializing AI and the neurosciences for profit and military agendas. They proved their point with Watson a couple of years back. And now Google is investing upwards of 600 billion in the AI and robotics companies it has bought up. Why would they do that if there weren’t something obvious in this? I’m not sure why you’re arguing from an outmoded standpoint such as Dreyfus’s. His work is dated whether you accept that fact or not; it just no longer speaks to the current changes in these converging technologies. I’m on your side in that I hope this fails, but for reasons other than that they are barking up the wrong tree in their thinking. For me it’s about privacy and the darker aspects of the military use of such technologies in surveillance and in the command and control of both civilian and military targets through the manipulation of data, etc.


      • I share your concerns, and I’m sorry if I misunderstood you. The Big Brother issues you bring up are indeed very worrying. However, I would disagree that Dreyfus is outmoded, but we will have to agree to disagree on that. Why are they investing 600 billion in this foolishness? Because they believe that the power of computers will increase exponentially over time and that that power will eventually mimic human intelligence, a sort of irrational rationality. I think they see it as a holy grail, a quest, a wet dream of sorts, and like the holy grail it’s a puff of smoke. Maybe they are all Indiana Jones fans. IMHO, I see the default human disposition as irrational, but capable of rational thought. I predict that they will give up on this quest within a short number of years, call their failure something else, and pronounce it a success by giving it a name.


      • I see your point, but no, I don’t think they believe that at all. They’re seeking a marginal aspect of intelligence: the ability for a machine to read statements, propositional statements, semantic statements, natural language as we know it on the web, and then to datafy this information within a global system. It’s just an expanded version of what they already did with the Watson program http://en.wikipedia.org/wiki/Watson_(computer) with obviously newer technologies from companies they bought up like Nest Labs: http://en.wikipedia.org/wiki/Nest_Labs etc.

        Another one on Deep Learning: http://deeplearning.net/tutorial/

        As well as an article on Geoff Hinton, hired by Google, too: Wired: http://www.wired.com/wiredenterprise/2014/01/geoffrey-hinton-deep-learning

