Romancing the Machine: Intelligence, Myth, and the Singularity

“We choose to go to the moon,” the president said. “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.”

I was sitting in front of our first Motorola color television set when President Kennedy spoke to us of going to the moon. After the Manhattan Project to build the atomic bomb, the moon landing was the second great project America undertook to confront a rival great power. As I listened to the youtube.com video (see below) I started thinking about a new race going on in our midst: the intelligence race to build the first advanced Artificial General Intelligence (AGI). As you listen to Kennedy, think about how one of these days soon we might very well hear another President tell us that we must fund the greatest experiment in the history of humankind: the building of a superior intelligence.

Why? Because if we do not, we face certain extinction. Oh sure, such rhetoric of doom and fear has always had a great effect on humans. I can imagine him or her trumping us with all the scientific validation about climate change, asteroid impacts, food and resource depletion, etc., but in the end he may pull out the obvious trump card: the idea that a rogue state – maybe North Korea, or Iran – is on the verge of building such a superior machinic intelligence, an AGI. But hold on. It gets better. For the moment an AGI is finally achieved is not the end. No. That is only the beginning, the tip of the iceberg. What comes next is AI or complete artificial intelligence: superintelligence. And no one can tell you what that truly means for the human race. Because for the first time in our planetary history we will live alongside something that is superior and alien to our own life form, something that is both unpredictable and unknown: an X Factor.

 

Just think about it. Let it seep down into that quiet three pounds of meat you call a brain. Let it wander around the neurons for a few moments. Then listen to Kennedy’s speech on the romance of the moon, and remember the notion of some future leader who will one day come to you saying other words, promising a great and terrible vision of surpassing intelligence and with it the likely ending of the human species as we have known it:

“We choose to build an Artificial Intelligence,” the president said. “We choose to build it in this decade, not because it is easy, but because it is for our future, our security, because that goal will serve to organize our defenses and the security of the world, because that risk is one that we are willing to accept, one we are not willing to postpone, because of the consequences of rogue states gaining such AI’s, and one which we intend to win at all costs.”


Is it really so far-fetched to believe that we will eventually uncover the principles that make intelligence work and implement them in a machine, just like we have reverse engineered our own versions of the particularly useful features of natural objects, like horses and spinnerets? News flash: the human brain is a natural object.

—Michael Anissimov, MIRI Media Director

We are all bound by certain cognitive biases. Looking them over I was struck by the conservatism bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” As we move into the 21st Century we are confronted with what many term convergence technologies: nanotechnology, biotechnology, genetechnology, and AGI. As I was looking over the PewResearch site, which analyzes many of our most deeply held belief systems, I spotted a report on AI, robotics, et al.:

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade. (see AI, Robotics, and the Future of Jobs)

This almost universal acceptance that robotics and AI will be a part of our inevitable future permeates the mythologies of our culture at the moment. Yet, as the report shows, there is a deep divide as to what this means and how it will impact the daily lives of most citizens. Of course the vanguard pundits and AGI experts hype it up, telling us, as Benjamin Goertzel and Steve Omohundro argue, that AGI, robotics, medical apps, finance, programming, etc. will improve substantially:

…robotize the AGI— put it in a robot body— and whole worlds open up. Take dangerous jobs— mining, sea and space exploration, soldiering, law enforcement, firefighting. Add service jobs— caring for the elderly and children, valets, maids, personal assistants. Robot gardeners, chauffeurs, bodyguards, and personal trainers. Science, medicine, and technology— what human enterprise couldn’t be wildly advanced with teams of tireless and ultimately expendable human-level-intelligent agents working for them around the clock?1

As I read the above I hear no hint of the human workers that will be displaced, put out of jobs, left to their own devices, lost in a world of machines, victims of technological and economic progress. In fact such pundits are only hyping to the elite, the rich, the corporations and governments that will benefit from such things because humans are lazy, inefficient, victims of time and energy, expendable. Seems most humans at this point will be of no use to the elite globalists, so will be put to pasture in some global commons or maybe fed to the machine gods.

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.

—Ray Kurzweil, inventor, author, futurist

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

—George Dyson, historian

Kurzweil and Dyson agree that whatever these new beings become, they will not have our interests as a central motif of their ongoing script. As Goertzel tells Barrat, the arrival of human-level intelligent systems would have stunning implications for the world economy. AGI makers will receive immense investment capital to complete and commercialize the technology. The range of products and services intelligent agents of human caliber could provide is mind-boggling. Take white-collar jobs of all kinds: who wouldn’t want smart-as-human teams working around the clock doing things normal flesh-and-blood humans do, but without rest and without error? (Barrat, pp. 183-184) Oh, yes, who wouldn’t… one might want to ask that question of all those precarious intellectual laborers who will be out on the street in soup lines with the rest of us.

As many of the experts in the report mentioned above relate: about half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

Sounds more like dystopia for the masses, and just another nickelodeon day for the elite oligarchs around the globe. Yet the other 52% have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution. Sounds a little optimistic to me. Human ingenuity versus full-blown AI? Sounds more like blind-man’s bluff with the deck stacked in favor of the machines. As Stowe Boyd, lead researcher at GigaOM Research, said of the year 2025, when all this might be in place: “What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy’?” Indeed, one wonders… we know the Romans built the great Circus, gladiatorial combat, great blood-bath entertainment for the bored and out-of-work minions of the Empire. What will the Globalists do?

A sort of half-way house of non-commitment came from Seth Finkelstein, a programmer, consultant, and winner of the Electronic Frontier Foundation’s Pioneer Award, who responded: “The technodeterminist-negative view, that automation means jobs loss, end of story, versus the technodeterminist-positive view, that more and better jobs will result, both seem to me to make the error of confusing potential outcomes with inevitability. Thus, a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole…. this is not a technological consequence; rather it’s a political choice.”

I love it that one can cop out by throwing it back into politics, thereby washing one’s hands of the whole problem, as if magically saying: “I’m just a technologist, let the politicians worry about jobs. It’s not technology’s fault; there is no determinism on our side of the fence.” Except it is not politicians who supply jobs, it’s corporations: and whether technology is determined or not, corporations are: they’re determined by capital, by their stockholders, by profit margins, etc. So if they decide to replace workers with more efficient players (think AI, robots, multi-agent systems, etc.) they will, if it makes them money and profits. Politicians can hem and haw all day about it, but will be lacking in answers. So as usual the vast plebeian forces of the planet will be thrown back onto their own resources, and for the most part excluded from the enclaves and smart cities of the future. In this scenario humans will become the untouchables, the invisible, the servants of machines or pets; or, worst case scenario: pests to be eliminated.

Yet, there are others like Vernor Vinge who believe all the above may be true, but not for a long while, that we will probably go through a phase when humans are augmented by intelligence devices. He believes this is one of three sure routes to an intelligence explosion in the future, when a device can be attached to your brain that imbues it with additional speed, memory, and intelligence. (Barrat, p. 189) As Barrat tells us our intelligence is broadly enhanced by the mobilization of powerful information technology, for example, our mobile phones, many of which have roughly the computing power of personal computers circa 2000, and a billion times the power per dollar of sixties-era mainframe computers. We humans are mobile, and to be truly relevant, our intelligence enhancements must be mobile. The Internet, and other kinds of knowledge, not the least of which is navigation, gain vast new power and dimension as we are able to take them wherever we go. (Barrat, p. 192)

But even if we have all this data at our braintips, it is still data that must be filtered, appraised, and evaluated. Data is not information. As Luciano Floridi tells us, “we need more and better technologies and techniques to see the small-data patterns, but we need more and better epistemology to sift the valuable ones”.2 As Floridi explains it, what Descartes acknowledged to be an essential sign of intelligence— the capacity to learn from different circumstances, adapt to them, and exploit them to one’s own advantage— would be a priceless feature of any appliance that sought to be more than merely smart. (Floridi, KL 2657) Floridi puts an opposite spin on all the issues around AGI and AI, telling us that whatever it ultimately becomes, it will not be some singular entity or self-aware being, but will instead be our very environment – what he terms the InfoSphere: the world is becoming an infosphere increasingly well adapted to ICTs’ (Information and Communications Technologies) limited capacities. In a comparable way, we are adapting the environment to our smart technologies to make sure the latter can interact with it successfully. (Floridi, KL 2661)

For Floridi the environment around us is taking on intelligence; it will be so ubiquitous, invisible, and naturalized that it will become seamless, a part of our very onlife lives. The world itself will be intelligent:

Light AI, smart agents, artificial companions, Semantic Web, or Web 2.0 applications are part of what I have described as a fourth revolution in the long process of reassessing humanity’s fundamental nature and role in the universe. The deepest philosophical issue brought about by ICTs concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other. When artificial agents, including artificial companions and software-based smart systems, become commodities as ordinary as cars, we shall accept this new conceptual revolution with much less reluctance. It is humbling, but also exciting. For in view of this important evolution in our self-understanding, and given the sort of ICT-mediated interactions that humans will increasingly enjoy with other agents, whether natural or synthetic, we have the unique opportunity of developing a new ecological approach to the whole of reality. (Floridi, KL 3055-62)

That our conceptions of reality, self, and environment will suddenly take on a whole new meaning is beyond doubt. Everything we’ve been taught for two thousand years in the humanistic traditions will go bye-bye; or, at least, will be treated as the ramblings of early human children fumbling in the dark. At least so goes the neo-information philosophy of thinkers such as Floridi. He tries to put a neo-liberal spin on it and sponsors an optimistic vision of economic paradises for all, etc. As he says in his conclusion, we are constructing an artificial intelligent environment, an infosphere that will be inhabited by millennia of future generations: “We shall be in serious trouble, if we do not take seriously the fact that we are constructing the new physical and intellectual environments that will be inhabited by future generations (Floridi, KL 3954).” Because of this, he tells us, we will need to forge a new alliance between the natural and the artificial. It will require a serious reflection on the human project and a critical review of our current narratives, at the individual, social, and political levels. (Floridi, KL 3971)

In some ways I concur with his statement that we need to take a critical view of our current narratives. To me the key is just that. Humans live by narratives, stories, tales, fictions, etc.; they always have. The modernists wanted grand narratives, while the postmodernists loved micro-narratives. What will our age need? What will help us to understand and to participate in this great adventure ahead, in which the natural and artificial suddenly form alliances in ways never before seen in human history? From the time of the great agricultural civilizations to the Industrial Age to our own strange fusion of science fiction and fact, in a world where superhuman agents might one day walk among us, what stories will we tell? What narratives do we need to help us contribute to our future, and, hopefully, to the future of our species? Will the narratives ultimately be told a thousand years from now by our inhuman alien AIs to their children, of a garden that once existed wherein ancient flesh-and-blood beings lived: the beings that were once their creators? Or shall it be a tale of symbiotic relations in which natural and artificial kinds walk hand in hand, forging together adventures in exploration of the galaxy and beyond? What tale will it be?

Romance or annihilation? Let’s go back to the bias: “The tendency to revise one’s belief insufficiently when presented with new evidence.” If we listen to the religious wing of transhumanism and the singularitarians, we are presented with a rosy future full of augmentations, wonders, and romance. On the other side we have the dystopians, the pessimists, the curmudgeons who tell us the future of AGI leads to the apocalypse of AI or superintelligence and the demise of the human race as a species. Is there a middle ground? Floridi seems to opt for that middle ground where humans and technologies neither exactly merge nor destroy each other, but instead become symbionts in an ongoing onlife project without boundaries other than those we impose by a shared vision of balance and affiliation between natural and artificial kinds. Either way we do not know for sure what the future holds, but as some propose, the future is not some blank slate or mirror but is instead to be constructed. How shall we construct it? Above all: whose future is it anyway?

As James Barrat tells us, consider DARPA. Without DARPA, computer science and all we gain from it would be at a much more primitive state. AI would lag far behind, if it existed at all. But DARPA is a defense agency. Will DARPA be prepared for just how complex and inscrutable AGI will be? Will they anticipate that AGI will have its own drives, beyond the goals with which it is created? Will DARPA’s grantees weaponize advanced AI before they’ve created an ethics policy regarding its use? (Barrat, p. 189)

My feeling is: even if they had an ethics policy in place, would it matter? Once AGI takes off and is self-aware and able to self-improve its capabilities, software, programs, etc., it will, as some say, become in a very few iterations a full-blown AI or superintelligence, with an intelligence a thousand, ten thousand, or more times beyond the human. Would ethics matter when confronted with an alien intelligence so far beyond our simple three-pound organic brain that it may not even care or bother to recognize us or communicate? What then?

We might be better off studying some of the posthuman science fiction authors in our future posts (from io9’s Essential Posthuman Science Fiction):

  1. Frankenstein, by Mary Shelley
  2. The Time Machine, by H.G. Wells
  3. Slan, by A.E. Van Vogt
  4. Dying Earth, Jack Vance
  5. More Than Human, by Theodore Sturgeon
  6. Slave Ship, Frederik Pohl
  7. The Ship Who Sang, by Anne McCaffrey
  8. Dune, by Frank Herbert
  9. “The Girl Who Was Plugged In” by James Tiptree Jr.
  10. Aye, And Gomorrah, by Samuel Delany
  11. Uplift Series, by David Brin
  12. Marooned In Realtime, by Vernor Vinge
  13. Beggars In Spain, by Nancy Kress
  14. Permutation City, by Greg Egan
  15. The Bohr Maker, by Linda Nagata
  16. Nanotech Quartet series, by Kathleen Ann Goonan
  17. Patternist series, by Octavia Butler
  18. Blue Light, Walter Mosley
  19. Look to Windward, by Iain M. Banks
  20. Revelation Space series, by Alastair Reynolds
  21. Blindsight, by Peter Watts
  22. Saturn’s Children, by Charles Stross
  23. Postsingular, by Rudy Rucker
  24. The World Without Us, by Alan Weisman
  25. Natural History, by Justina Robson
  26. The Windup Girl, by Paolo Bacigalupi

1. Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (pp. 184-185). St. Martin’s Press. Kindle Edition.
2. Floridi, Luciano (2014-06-26). The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Kindle Locations 2422-2423). Oxford University Press. Kindle Edition.

13 thoughts on “Romancing the Machine: Intelligence, Myth, and the Singularity”

  1. This is really interesting. And necessary. But in the same way as computers caused no great threat and no catastrophe for humanity, nothing else will either. Rather, the humanity that deals with such things will be perfectly adapted to deal with whatever possibilities arise. For if they don’t, then there will have never been the problem to begin with, and the effort where there was a major problem would be so only in so much as humanity is still around to reflect upon how catastrophic the problem was or now could be, which is only a perspective one can have if they are dealing with it successfully.

    This is why your discourse appears to me like science fiction; valid, necessary as part of a present negotiation to reconcile past and future, interesting and thought-provoking for such real concerns — but I cannot bring myself to consider it beyond such fictional influence. Frankenstein could be said to be just as considerate, and Marvel comics, and The Once and Future King, or Foundation.
    But it seems somehow it wants to be taken more ‘seriously’ than how I see it.

    But I see no possibility of human beings existing separately from the situation they are in at any given moment, such that such a world could be viewed as so strange. Only through projection of past-future speculation can it be so strange and anxious.

    And that’s fine; but like I said it seems somehow it is meant differently than I view it. Maybe I’m incorrect in this appraisal?


    • For whatever reason I think you miss the point, Landsek: these ideas are being taken seriously by capitalists, scientists, and philosophers alike. Not sure what you’re talking about. Convergence technologies have more money being invested in them than any other type of capitalist venture. DARPA, the main defense research arm of the US, sponsors billions in each of the areas of nanotechnology, biotechnology, genetechnology, and AGI. So if you think this is all fantasy on my part you’ve got some screws missing. Do you ever read about the sciences and what the various governments are doing?

      As far as humans being prepared? That’s another cognitive bias; it’s called the ‘normalcy’ bias: the refusal to plan for, or react to, a disaster that has never happened before.


    • As I was reading his transcript I kept thinking: a dumbed-down version of Marx’s points, line by line. Nothing new there at all. He should’ve started with the two basic premises: humans are lazy animals, and we’re being replaced eventually because the machines (with AGI or AI) will be more efficient, tireless (24/7), and less costly, etc.

      When he says: “My argument isn’t that we shouldn’t let computers do anything, it’s that we should be very careful and try to exhibit wisdom in allocating which tasks we hand off and which we keep to ourselves.” – But will we (the common man) have that choice? I doubt it. The investors, stockholders, owners, etc. will demand more, and if the machine offers it they’ll go with it not the human.

      He tries to revert to another bias: “But the programs have no human insight.” But they’ll not need ‘insight’, an intentional aspect of consciousness… they won’t think like humans – at least according to certain AGI experts: they’ll be of another order of intelligence; yes, self-aware, but also self-improving and able to rewrite their own algorithms and software on the fly to meet new challenges. So his notions don’t apply to what is coming. He’s still dealing with a base set of knowledge that’s already obsolete.

      He says: “We are very quick to want to automate everything, and it’s only afterwards we realise, wow, we’ve kind of taken the excitement and the fulfilment out of our lives.” I tend to agree with him, but the common man won’t have much choice in the matter. We are slowly being replaced whether we will it or not. Even Lanier in You Are Not a Gadget said as much. Even if we try to disconnect from the machine there will still be all the other sleepwalkers attuned to the new world view of the machines; and loving it. (Think Matrix, or Floridi’s InfoSphere surrounding us: the ubiquitous environment will be AI – you’ll live in an intelligent environment.)

      He says: “So as a society we’d have to accept some amount of inefficiency in order to get the best of both human beings and computers. And the question is are we capable of even making slight trade-offs of efficiency in a time when it seems like all our emphasis is on efficiency and expediency…” All good stuff, but it is too utopian. Corporations are dictatorships controlled by Capital, Wall Street, etc.; they serve profit, not humans.

      He says: “I think the biggest hope that I have for the way that robotics will get us to rethink our humanity is that it will get us to incorporate all the things that our brains do and come to understand them as different types of thinking…” Some AGI experts are already doing this: reverse engineering the brain (though others disagree with this approach, saying it will take too long and we can follow other trails). Think of it this way: we’ve always been in a symbiotic relationship with technology: we invent tools, and they in turn reinvent us. The tool was an idea first, a human idea manifested. It’s not some alien thing; it’s what we are. Same with AGI… it’s our inhuman core.

      I think the augmentation path will be a transitional phase that combines the best of both worlds. Pragmatically, the chess champion Kasparov argued as much: a human augmented by another system can steadily beat a machine like Deep Blue because of that augmentation. Why? That’s the million-dollar question.


  2. Is Seth Finkelstein really washing his hands of the political problem by saying that is for the politicians to sort out? A more charitable reading might be that he sees himself as a political actor, and the evolution of tech platforms in society as a political process? It would be consistent with the EFF to recognize technologies provide stages for the theatre of politics.


    • Yea, I don’t think he’s washing his hands of it, just passing it off as something that cannot be addressed in his role. This is what he said, not me. The point being that the development of AGI will go on unless there is political will to stop it. But either way, as he admits, it will go on somewhere, so why not us? This is his point: rogue states are already investing heavily in it. Cybercrime syndicates as well – mainly in the malware end of it, like Zeus, and the use of Cloud platforms for computing power as bases for stealing both information and money from governments and institutions, etc.


      • I would be surprised to see AGI come out of crime networks or a rogue state… But they might hook together enhanced versions of cloud / massively distributed / convenience AI in nasty ways. Like using big compute resources together with pattern matching of the sophistication of Siri. Nigerian scam cold calls? Impersonation bots…


      • Actually it’s big business. And many of these underground hackers are already using variants of different types of malware to break into banks. As one analyst says: “The cybercriminal underground is a market; source code leakages and botnet shutdowns have been happening constantly, but we see virus writers from time to time come up with new (or based on old but modified) banking malware. It proves that the market wants such tools.” The new online banking malware threat comes after law enforcement agencies from several countries, at the beginning of June, worked with security vendors to shut down a financial fraud botnet based on a Zeus spin-off called Gameover. The FBI estimates that the botnet led to losses of over US$100 million globally. (here)

        In 2009 a criminal network used Amazon’s Elastic Cloud Computing Service (EC2) as a command center for Zeus, one of the largest botnets ever. Zeus stole some $ 70 million from customers of corporations, including Amazon, Bank of America, and anti-malware giants Symantec and McAfee.

        Barrat did an interview with William Lynn, the former United States Deputy Secretary of Defense, who told him that his hypothesis is nothing revolutionary: as AI develops it will be used for cybercrime. Or put another way, the cybercrime tool kit will look a lot like narrow AI. In some cases it already does. So, on the road to AGI, we’ll experience accidents. What kind? Well, I think the worst case is the infrastructure of the nation. The worst case is that either some nation or some group decides to go after the critical infrastructure of the nation through the cybervector, so that means the power grid, the transportation network, the financial sector. You could certainly cause loss of life, and you can do enormous damage to the economy. You can ultimately threaten the workings of our society. The attacker has a huge advantage. Structurally it works out that the attacker only has to succeed once in a thousand attacks. The defender has to succeed every time. It’s a mismatch.

        Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 249). St. Martin’s Press. Kindle Edition.

        There are only “around 100” cybercriminal kingpins behind global cybercrime, according to the head of Europol’s Cybercrime Centre. (here) But with the release of new and exotic source code every year this is increasing. As many AGI experts suggest, it’s only a matter of time and money before the criminals have AGI as well. Some of these underground expert criminal kingpins have teams just as smart and capable as those of larger countries, because of money. And they exist outside of the normal worlds that even law enforcement can gain access to: China, Russia, Iran, North Korea, etc., where these States turn a blind eye to it, and even sponsor a lot of it in their continuing cyberwar against EU and US interests, etc.

        Barrat, James (2013-10-01). Our Final Invention: Artificial Intelligence and the End of the Human Era (p. 248). St. Martin’s Press. Kindle Edition.

        Bloomberg summarized this concept in the following statement:

        “The U.S. national security apparatus may be dominant in the physical world, but it’s far less prepared in the virtual one. The rules of cyberwarfare are still being written, and it may be that the deployment of attack code is an act of war as destructive as the disabling of any real infrastructure. And it’s an act of war that can be hard to trace: Almost four years after the initial NASDAQ intrusion, U.S. officials are still sorting out what happened. Although American military is an excellent deterrent, it doesn’t work if you don’t know whom to use it on.”


  3. I don’t really disagree with any of this, or dispute the facts of how echoes of it are already happening today, except the idea that AGI would be invented outside of a massive institutional research context. It’s just not that sort of tech. In that sense your space race example is better guidance than a global crime network; but once invented, as a digital form, it may perhaps transition more quickly into cybercrime networks etc. It’s “only a matter of time” that they have AGI in the same sense “it’s only a matter of time” they have cold fusion. The amount of damage / change that can be done by narrow AI is massive though, because people behave like narrow AI a lot of the time, it is a mentally cheap way to interact, and there are always other things to focus on.

    The idea of trying hundreds of times for each success, because of the cheapness of each try, applied to crime – it is so very true of software; we will need to grapple with it a bit. Spamcrime.


    • I agree it won’t be invented outside of a massive institutional research context. But remember what I said of the experts in these fields: cybercrime is already being sponsored by large States like Russia, China, North Korea, and Iran, with money and people that (think on this) have been trained not only by their own institutions, but by EU, US, and other institutions…

      Also, think about this: DARPA in the US has funded many stealth company start-ups that are already emerging as AGI leaders. Even Google’s Google X company was a stealth company at one time.

      Remember there is in many ways a collusion and collaboration between the larger underground syndicates and rogue states: money laundering, protection, etc. They exist because governments and their Black Ops want them to exist. There are case study after case study about such things drifting back to the WWII, etc.

      The other thing that these experts say is that the cybercriminals need not create or invent this on their own: they only need to steal the already locked up source code. They’ve done this with malware for ages: stealing technologies and source and information and selling it to highest bidders, etc.


      • I think it is plausible that once one group cracks AGI, unless it is based on very specific hardware, it will leak everywhere. I am skeptical about AGI being cracked soon, which I know puts me on the wrong side of a lot of accelerationist thought. Arguably the major AI-like advances of the last fifteen years – Siri, Google Search, etc. – were achieved by giving up on the idea of a computer as a brain and treating it as a really great filing cabinet and number cruncher.

        Or you could argue that a distributed spamcrime botnet behaves so like a malicious intelligence as to make no difference … even if it can’t answer questions about Hamlet. Software Day of the Triffids.


      • Of course, remember AGI is not AI. Artificial General Intelligence need not be self-aware; it could be just more advanced algorithms with the ability to improve on their own systems and innovate: and, as said, innovation is based on understanding previous code embedded in other systems, allowing for a remix and transformation into new frameworks, etc.

        AI proper, or full Artificial Intelligence, is probably a longer way off.

