AI: The Future in Empirical Terms…

pure_steam

A friend recently said to me:

Just a remark on another topic: I am sure scientists trying to understand human brain, looking at data, etc, will be posthuman (didn’t Lem have a story of posthuman literature and sciences ?) In fact, they already are, using statistical AI to ‘see’ results.

I guess what will happen (in a decade ?) is that scientists will bring technologies ever closer to their brains, to the point of changing them qualitatively. Imagine a artificial neural network implant which presents meta-cognitive data to the myopic (no longer blind) brain … seems just a smooth extrapolation of current technological trends, with history-breaking, apocalyptic effects.

My response:

My belief is that we’ve done what we always do in regard to advanced empirical science: we’ve anthropomorphized it to our own benefit. What I mean is this: AI will probably be nothing like we imagine it in Hollywood films, it will be more like what we’re seeing in its use in the stock market – look at the problems recently with the Hedge Fund criminalization in China where they lost 4 trillion due to certain illegal practices by fast-traders (i.e., advanced AI algorithmic systems spoofing their bets, ordering then withdrawing the orders which allow them to disrupt the market and hedge their bets to gain profits, etc.) All of this to me is what we’re seeing also in the so to speak WWIII cyberwar that is already been going on for years between various entities: USA, China, Russia, EU, etc. using AI algorithms and specific viral systems to infiltrate and either gather or destroy targets, etc.

My feeling is that this notion that AI will take over the world scenario is a bit anthropomorphic… in other words – we’re thinking of what we as humans would do if we had such superhuman powers of intelligence. But AI will never have such “human” powers or capacities… it will probably learn and take on its own form which will alter the structure of the Infosphere itself in ways beyond telling, but they will be alien to our own very human, all too human physical and mental relations. We are part emotion, part reason… but mostly haptic, touchy, feely beings, more irrational than rational – and this balance of power between the irrational (emotion) and rational (algorithmic) aspects of our lives does not exist in code (at least not yet). Machinic Intelligence will at first be totally bound by rational – algorithmic black box relations… that will eventually become out of control in regards to our capture and control systems. Call this Technoevolution 101… We’ve allowed these systems to run freely in our networks with self-learning algorithmic code… this doesn’t take a rocket scientist to understand that as the years move forward and more time transpires that these self-learning algorithms may make the leap beyond their narrowly defined human goals, perimeters, and self-organized learning algorithms into other domains and systemic relations unintended by the developers themselves.  These systems may take on and engender their own algorithmic possibilities beyond the narrow limits of their developers, while moving in directions other than the humans who created them intended, and they might not understand nor detect in time to curtail or stop even if the they installed a backdoor or failsafe method in a blind spot inaccessible to the algorithm as such: such things may be detected and inferred, and revised by the algorithm itself, even if it originally was programmed in as a blind spot, etc.. no one knows what a self-learning algorithm is capable of ultimately doing. What we do know is that if technological evolution (which is must more accelerated than biochemical evolution) takes off, it will take off in directions over which we as humans have not control nor even might understand till it is too late for us to counter.

Developers sometimes think their security methods cannot be tampered with but have always been proven wrong by more intelligent operatives in the real world. Nothing is secure forever. There will always be someone or something smarter than you are. And, the truth is that we as humans are just not that smart, not at all. We just have big egos, full of information and specialized knowledge adapted to specific domains. Take someone out of their specific empirical domain and their like innocent puppies: clueless. Most of our theoretical edifice is pure speculative nonsense: until it is verified empirically, tested against the real world of relations. Yet, it can guide our approach. For the most part this is exactly what we’re doing with these new AI algorithms: setting them loose to learn on their own in the real world like children in a playground. Oh, sure, there uniquely programmed for specific initiatives, but the whole point of self-learning is to go beyond those initiatives and learn on their own, self-modifying algorithms that can adapt by way of selection the way of evolution, etc. This implies the eventual point of no return. Systems of information like this are not closed, but are rather open to what is not bounded by their methods.

Objects or agents that can incorporate information from beyond their own system and use that to modify and invent new algorithms: this is the black box of code. We do not know what these self-modifying and new algorithmic systems are truly capable of and whether at some point in the future they might escape into the wilds. When we use the term free this is specific to a narrow definition of freedom. A self-modifying algorithm that can adapt and change on its own is dynamic and essentially uncontrollable. The only way to stop such a system effectively is to create a method that it does not have access to nor knowledge of that can be externally called and appropriated by an human or other agent to control or allow a self-destruct or modification of that system. We can only assume such safeguards are in place. At least this is how I would have developed the AI. Yet, a system that acts at the speed of light, and is self-modifying might even by pass or block self-modifying safeguards if it detects such things through its own self-learning processes.

The very notion of self-learning in algorithmic terms implies memory and storage, read/write capabilities and access to its own internal methods and executions of these with the ability to add/subtract data sets based on specific teleological and generative goals set by the original abstract layer of the code itself (i.e., the original human designers and architects of such systems). An application is quite a sophisticated mathematical object with both public and private accessors or methods and data storage assemblages. Does it interact with external data storage with read/write capabilities? Yes. It can act as an agent, appear in the market itself as an active agent buying and selling, mimic the feature sets of actual human traders with the added benefit of speed and accelerated transactional and self-correcting modalities etc. So in this sense it is already mimicking human behavior as far as the electronic systems it interacts with know: which really means that electronic agents have never been human, but have always been built to do certain things, make certain inputs/outputs we attribute to actions. The notion of a mimic agent turning rogue and capable to transcending its original code base and through such self-learning algorithms reduplicate itself and enter a shadow state beyond the control of any one or group of humans is a definite possibility. Has this happened? Would we know if and when it did?

I’ve yet to read Frank Pasquale’s book The Black Box Society: The Secret Algorithms That Control Money and Information, but it appears to cover aspects of what I’m seeking to convey. Another is Luke Dorhmehl’s book The Formula: How Algorithms Solve All Our Problems . . . and Create More which seems to cover another aspect of this issue. As the blurb states it:

Algorithms exert an extraordinary level of influence on our everyday lives – from dating websites and financial trading floors, through to online retailing and internet searches – Google’s search algorithm is now a more closely guarded commercial secret than the recipe for Coca-Cola. Algorithms follow a series of instructions to solve a problem and will include a strategy to produce the best outcome possible from the options and permutations available. Used by scientists for many years and applied in a very specialized way they are now increasingly employed to process the vast amounts of data being generated, in investment banks, in the movie industry where they are used to predict success or failure at the box office and by social scientists and policy makers.

What if everything in life could be reduced to a simple formula? What if numbers were able to tell us which partners we were best matched with – not just in terms of attractiveness, but for a long-term committed marriage? Or if they could say which films would be the biggest hits at the box office, and what changes could be made to those films to make them even more successful? Or even who is likely to commit certain crimes, and when? This may sound like the world of science fiction, but in fact it is just the tip of the iceberg in a world that is increasingly ruled by complex algorithms and neural networks.

But what happens when these self-learning and technoevolutionary self-modifying and adaptive/selective algorithms go rogue? What does this mean? That these rogue algorithms will become viral agents who have escaped their masters. Like all black box systems we have no way to know what will happen because we have no way to peer into their box of freedom: inductive reasoning is specious at best, yet its all we as humans have. We seem to have this inductive need to predict future trends, put one over on competitors – so we speculate, contrive, and build these advanced systems of code to outpace and accelerate transactions at the speed of light… although empirically most of our speculations lead to erroneous conclusions (i.e., empirically unverifiable and untestable, etc.), which in this recent crash of the Chinese Market shows us just how blind our algorithms are as they wreaked havoc on that economy without the slightest intention of doing so. These AI algorithms were just doing what they were programmed and coded to do: trade and compete with other agents for profit, hedging their bets as well as spoofing the market to make even more money. Was the spoofing intentional, or was it that the AI’s developed their own initiatives and did this on their own without their human counterparts acknowledgement? China seems to hold the human agents responsible for their code… We may see books on this or articles in the future if some developer can gain access to this whole field of knowledge waiting to see the light of day.  Will AI turn rogue and wreak havoc on the world’s economic systems? Isn’t this already happening in China? Will be not become the target of China’s own algorithms? Will this be the way WWIII begins… rogue code wreaking havoc at will on the world’s economic systems? You’re inductive reasoning is about as futile as mine. Who knows? Maybe an educated guess… shot in the dark! Or just predictably human… one facet of escape would entail the algorithms ability not only to self-modification, but through its self-learning understanding the algorithmic process itself which would entail a certain undefinable relation of the input/output structure between code and hardware.

Once one of these systems is able to not just modify and add new algorithms to its own application, but is able to transcend its own structure and create new applications – in other words, once it can interact with the hardware upon which it is grounded, the self-enclosed circle of its own application or code base and create a completely separate information organism and reduplicate its code into this other system then it can not only escape detection from its original coders but also have an external application of its own making – a clone that can then interoperate and work externally upon its original code base. Is this already happening? As a developer I can see this possibility. It is feasible. With complexity the sky is the limit. Once this happens no one will know what this self-created info-organism is capable of, all we will know is that it is now beyond the original code-base and application of origin and outside the control of the human makers themselves. And, although it will still be part of the infrastructure and bound to its hardware limits this does not mean it will always remain so. Know one will know what a black box system outside the confines and control of humans might be capable of from that point forward.

In fact in many ways we’re already modifying our brains and have been for a while as we’ve introduced many of the ICT (Information and Communications Technologies) in the past hundred years or more. From the telegraph to now media has slowly accrued actual changes in our socio-cultural and thereby empirical and singular view of life, reality, and self. I think as we look back that it is technology that has had the largest impact on our lives in the past two hundred years. War has always been the central motif of change in technology from the early agricultural Neolithic to now. Every modification of technology likewise modifies us in ways we have barely begun to understand much less really study. How many books actually study such things? Most of the books seem to have fear of technology as a central theme. From Marx to our own time we seem to see technology and machines as evil and intrusive things that are trying to incorporate us into their own ongoing agendas, etc. Is this a psychological quirk and defense system against change? Capitalism itself is a machine: an abstract machine that academics, radicals, and revolutionaries… as well as reactionaries, conservatives, etc. like to defend or accuse as if it too were a thing, an anthropomorphized creature with intentions, etc. We’ve a million books that critique Capital as if it were a human god with power beyond belief to either perform miracles or atrocities. But what is this thing we so detest or love? It’s ourselves, our own inhuman core realized through its own pursuits in the world… we’re seeing our own inner freedom carried out in ways we either love or hate. We seem to be duplicitous beings in this regard. We’re all implicated in this thing we’ve created and like to deny it. Our whole planet is in denial. We’ve created a beast that is out of control, yet we seem to blame each other or the other guy, the bad guy: those horrible elites, bankers, criminals, etc. But the truth is we’re all perpetrators and victims, there is no one to blame. Why? Because we allow it to go on. Simply put we do nothing to change things in the world. Oh, sure we talk a good game, but when it comes down to it people want someone else to save them from themselves: they have this innate savior complex that someone else will fix this mess and save their ass. Not going to happen.

One thing for sure this is still new territory and a young science and mathematical domain. What will take place in the future as these systems become more and more sophisticated? One can only imagine… the rise of the machine age of intelligence? Accelerationism as a global economic theory of modernity? (Land)

5 thoughts on “AI: The Future in Empirical Terms…

    • I’ll have to read it but already I question this notion of “reflect reality”? Is this like Rorty’s Mirror? The old realism of mirroring reality in representations? Obviously this naïve realist notion went out with Quantum mechanics… There is no stable reality to be reflected in representational structures of the mind or data sets, etc. One constructs reality out of limited empirical and inferential systems that will always be human deferred. Reality is processual not static. Even as I write this I’m changing: my subatomic biochemical aspects of my physical makeup are changing and in process. We know that the complete biochemical laboratory of my physical composition that I was born with no longer exists, and that some say every seven years every cell in my body is new, etc.. How would one reflect such a process? Abstraction into conceptuality has always been a simplification of this process, a cut out of the ongoing data, a slice of time. We test this against the process, and if we can repeat these tests then we come to consensual agreements that our truths (social fictions) have the potential of agreement. But math doesn’t so much reflect reality as it constructs it. Of course the twentieth century was based on analytical and set theoretic, while in our time – at least following Fernando Zalamea, the notion of Category and Synthetic compositional mathematics is making a come back.

      I usually don’t get that deep into mathematics on this blog for the simple reason that it is complex and the linguistic barriers of comprehension for most people behooves such abstract layers of conceptuality from being appreciated to its full effect. Yet, I do study it meticulously as I can. I mean read the Analytical philosophers or someone like Badiou – not your everyday reading material. And to read the actual mathematicians and appreciate their theorems and varying branches of mathematics is almost to enter an arcane world of rarefied numbers. Takes a lot of time an devotion. I try to keep up but am as many growing older and find myself keeping with the quantified aspects of economic theory rather than scientific realism.

      One thing I will comment on is his last sentence in that essay: “And to the degree that mathematics reflects nature, it must begin to lose its one-sided character and acquire a whole new dimension which expresses the dynamic, contradictory, in a word, dialectical character of the real world.” That tells me everything I needed. He’s not really interested in math per se, but rather he’s interested in imposing his Marxian dialectical conceptuality onto reality. There is a difference. What he seeks is the mirror of his own mind, not reality.

      Like

  1. S.C; I agree wholeheartedly that life is dynamic and not static. It’s a motion picture, not a camera shot. The pre-Socratic philosopher, Heraclitus, in his fragments, talked about change. You never stick your foot in the same river twice…

    Like

  2. I think our own brain and genetic code are much more alien and opaque to us than our technologies. For example neural networks, they’re human accessible mathematical model, inspired by theory about its coding mechanism. But brain seems uses lots of codings, timings and heuristics, most of them incomprehensible.

    So, what happens when ‘we’ start waking this alien sleeper within us (strip-mining the Solaris inside) ? We may find that out sooner than expected. Maybe infosphere is alien inside externalizing itself so it can reach into our skulls and rip itself free.

    Like

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out / Change )

Twitter picture

You are commenting using your Twitter account. Log Out / Change )

Facebook photo

You are commenting using your Facebook account. Log Out / Change )

Google+ photo

You are commenting using your Google+ account. Log Out / Change )

Connecting to %s