The American Cyborg: Neuroscience, DARPA, and BRAIN

Proverbs for Paranoids: You may never get to touch the Master, but you can tickle his creatures.

– Thomas Pynchon,  Gravity’s Rainbow

What if the Master has a steel face and looks something like the DARPA Atlas in the image above? When we discover the Master is a mask for the economic masters, one need not worry about tickling any creatures whatsoever; more than likely they will be tickling you soon enough. That’s what I thought the first time I saw the White House BRAIN Initiative. Yes, yes… the new Manhattan Project of the decade, or the millennium, is to unlock the secrets in your skull – that three-pound loaf of grey matter that swims behind your eyes, recreating moment by moment the words you are reading in the blips and bits of electronic light from your screen at this very moment. In the bold print we hear about the wonders that will be accomplished through such research: “…a bold new research effort to revolutionize our understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury.” All good, of course; nothing wrong with solving the terrible problems of the brain that have brought so much devastation and suffering to millions. But then one looks down the page, notices where the major portion of the funding is going, and realizes… hmm… military (DARPA) expenditure: $50 million for understanding the dynamic functions of the brain and demonstrating breakthrough applications based on these insights.

The Defense Advanced Research Projects Agency (DARPA) is the central research and development organization for the Department of Defense (DoD). It manages and directs selected basic and applied research and development projects for the U.S. Department of Defense, and pursues research and technology where risk and payoff are both very high and where success may provide dramatic advances for traditional military roles and missions. DARPA sponsors such things as robotic challenges (here). Their mission statement tells it all:

The Defense Advanced Research Projects Agency (DARPA) was established in 1958 to prevent strategic surprise from negatively impacting U.S. national security and create strategic surprise for U.S. adversaries by maintaining the technological superiority of the U.S. military.

To fulfill its mission, the Agency relies on diverse performers to apply multi-disciplinary approaches to both advance knowledge through basic research and create innovative technologies that address current practical problems through applied research. DARPA’s scientific investigations span the gamut from laboratory efforts to the creation of full-scale technology demonstrations in the fields of biology, medicine, computer science, chemistry, physics, engineering, mathematics, material sciences, social sciences, neurosciences and more. As the DoD’s primary innovation engine, DARPA undertakes projects that are finite in duration but that create lasting revolutionary change.

Lasting “revolutionary change”? At a Pentagon briefing, Arati Prabhakar, the current head of DARPA, announced the release of a “framework” that spells out the agency’s role. “Our mission is unchanged, in 55 years, it has been and will be to prevent and create technological surprise,” she said, in announcing the new plan. “But of course the world in which we do that has changed many times since 1958.” (BBC) Ah, not to worry, they just want to create “technological surprise”. Whoosh… I thought it was much more serious than that…! Who am I kidding, of course it’s serious…

As the DSO (Defense Sciences Office) tells us, its department develops and leverages neurophysiological sensors, neuro-imaging, cognitive science and molecular biology to provide support, protection and tactical advantage to warfighters who perform under the most challenging operational conditions. DSO is discovering and applying advances in neuroscience to improve warfighters’ resilience to stress, increase the rate and quality of learning and training, defend against injury and enhance our warfighters’ ability to exert influence. DSO’s advances in neuroscience are leading to better sensors and novel neuromorphic system architectures in the fields of computing, robotics and information integration, providing solutions to challenging issues. By harnessing the capabilities of neuroscience and fusing them with cutting-edge electronics and the social sciences, DSO is bringing a new level of efficiency and situational awareness to provide warfighters with reliable information, training and tools to execute their missions. (see) “Exert influence”: is that a euphemism for command and conquer, or what?

Welcome to the new militarization of the American Warrior as Cyborg. The future wave of bioengineering towards the Singularity… As we discover, the DARPA SyNAPSE program seeks “to build a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats.” The first of five phases is “designing a multi-chip system capable of emulating 1 million neurons and 1 billion synapses.” As they explain the program:

Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.

The vision for the DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring human-derived algorithms to both describe and process information from their environment. In contrast, biological neural systems autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist. (here)

The final phase of the program has as its deliverable metric the fabrication of a multi-chip neural system of 10^8 neurons (100 million), installed in a robot that performs at cat level. It is estimated to begin between late 2013 and late 2015, with an estimated completion date of late 2014 to late 2017.
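To make the scale of those numbers concrete, here is a minimal toy sketch – my own illustration, not SyNAPSE’s actual architecture or code – of the kind of spiking “neurons and synapses” a neuromorphic system emulates: a leaky integrate-and-fire network in plain Python/NumPy, at a hundred neurons rather than a hundred million.

```python
# Toy leaky integrate-and-fire (LIF) network: a hypothetical sketch of what
# "emulating neurons and synapses" means, NOT DARPA's SyNAPSE hardware design.
import numpy as np

rng = np.random.default_rng(0)

N = 100                                   # neurons (SyNAPSE phase 1 targets ~1e6)
dt, tau = 1.0, 20.0                       # time step and membrane time constant
v_thresh, v_reset = 1.0, 0.0              # spike threshold and reset potential

# Sparse random synaptic weights -- the analogue of the "1 billion synapses"
W = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.1)

v = np.zeros(N)                           # membrane potentials
spikes = np.zeros(N, dtype=bool)          # which neurons fired last step

for _ in range(200):
    I = W @ spikes + 0.06 * rng.random(N)   # synaptic + background input
    v += (dt / tau) * (-v) + I              # leaky integration toward rest
    spikes = v >= v_thresh                  # threshold crossing = a spike
    v[spikes] = v_reset                     # fired neurons reset

print("neurons spiking on final step:", int(spikes.sum()))
```

The point of the sketch is only that the system’s behavior lives in the synaptic weight matrix rather than in a human-written, step-by-step algorithm; scaling a loop like this to 10^8 neurons is precisely the hardware problem the program describes.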

Is the singularity near? The Atlas Robot (image at top of page) was unveiled recently:

“The Virtual Robotics Challenge was a proving ground for teams’ ability to create software to control a robot in a hypothetical scenario. The DRC Simulator tasks were fairly accurate representations of real world causes and effects, but the experience wasn’t quite the same as handling an actual, physical robot,” said Gill Pratt, program manager for the DARPA Robotics Challenge. “Now these seven teams will see if their simulation-honed algorithms can run a real machine in real environments. And we expect all teams will be further refining their algorithms, using both simulation and experimentation.”

“We have dramatically raised the expectations for robotic capabilities with this Challenge, and brought together a diverse group of teams to compete,” said Pratt. “The progress the Track A teams have made so far is incredible given the short timeline DARPA put in place. From here out, it’s going to be a race to the DRC Trials in December, and success there just means the qualifying teams will have to keep on sprinting to the finish at the DRC Finals in 2014.”

With the parallel development of robotics and the advances in neuromorphic technologies, what comes next? As I looked at these eerie images I was reminded of an all too real Terminator scenario… in a recent NY Times article, “Already Anticipating ‘Terminator’ Ethics,” we read:

Advocates in the Pentagon make the case that these robotic systems keep troops out of harm’s way, and are more effective killing machines. Some even argue that robotic systems have the potential to wage war more ethically — which, of course, sounds like an oxymoron — than human soldiers do. Proponents suggest that machines can kill with less collateral damage, and are less likely to commit war crimes.

The discussion about robots and ethics came during this year’s Humanoids technical conference. At the conference, which focused on the design and application of robots that appear humanlike, Ronald C. Arkin delivered a talk on “How to NOT Build a Terminator,” picking up where Asimov left off with his later-added “zeroth” law of robotics — “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

“We all know that that is motivated by urban seek-and-destroy,” Dr. Arkin said, only half sardonically adding, “Oh no, I meant urban search-and-rescue.”

He then showed an array of clips from sci-fi movies, including James Cameron’s 1984 “The Terminator,” starring Arnold Schwarzenegger. Each of the clips showed evil robots performing tasks that DARPA has specified as part of its robotics challenge. Clearing debris, opening doors, breaking through walls, climbing ladders and stairs, and riding in utility vehicles — all have “dual use” implications, meaning that they can be used constructively or destructively, depending on the intent of the designer, Dr. Arkin showed.

Dr. Arkin’s point is that humans are still very much “in the loop” when it comes to smart weapons, so human designers cannot absolve themselves of the responsibility for the consequences of their inventions.

“If you would like to create a Terminator, then I would contend: Keep doing what you are doing, because you are creating component technologies for such a device,” he said. “There is a big world out there, and this world is listening to the consequences of what we are creating.”

As one robotic ethicist said recently:

Robots are set to change the way that wars are fought by providing flexible “stand-ins” for combatants. They provide the ultimate distance targeting that allows warriors to do their killing from the comfort of an armchair in their home country—even thousands of miles away from the action. Robots are developing as a new kind of fighting method different from what has come before. Unlike missiles or other projectiles, robots can carry multiweapon systems into the theater of operations, and act flexibly once in place. Eventually, they may be able to operate as flexibly as human combatants, without risk to the lives of the operators who control them. However, as we discussed, there is no such thing as risk-free warfare. Apart from the moral risks discussed, asymmetrical warfare can also lead to more insurgency and terrorist activity, threatening the citizens of the stronger power.1

————————————————-

1. Lin, Patrick; Abney, Keith; Bekey, George A. (2011-12-09). Robot Ethics: The Ethical and Social Implications of Robotics (Intelligent Robotics and Autonomous Agents series) (Kindle Locations 2838-2844). The MIT Press. Kindle Edition.

16 thoughts on “The American Cyborg: Neuroscience, DARPA, and BRAIN”

  1. Now this is the way to approach the mind/body debate! Nothing more material than a robot killing your ass…

    Seriously, though. This is the way to argue the post-intentional (or the way I find the most effective, anyway). Prognostication arguments.


    • Haha… yea, the notion that someday, as these ethicists mention, these biomorphic chips will enable decisioning processes based on complex rule sets – allowing, let’s say, drones to be free from human interaction in their missions, enabled to make field decisions based on those programmed criteria (facial recognition, hostile/friendly, etc.) – is the day we lose control and become prey rather than the gatekeepers. It’s almost as if, politically, the Obama administration is fulfilling the Bush administration’s ideology… they present all these whitewashed terms to grey out the truth behind the funny-face smile. Obviously I’m pessimistic enough to realize it’s going to happen sooner or later; they’ve just invested $50 million to start the process down the road… after mini cats with brains, what’s next?


  2. I always thought of Ligotti’s short stories that featured puppetry and dolls to be the least frightening of his oeuvre…I feel like I should re-read them now that the idea of puppets gaining agency (or people becoming little more than neural puppets of a machine network/state org) has a more 21st century real life immediacy to me. I feel like Matt Cardin and Ligotti really need to keep up to date with the bleeding edge of neuroscience to create fresh new hells to marry to their truly horrifying void-flirt (or void-plumb, at their best) of a writing style.


    • Yea, Ligotti’s one novella on corporate horror was a strange work, not his best. And his philosophical antinatalism was more of a tirade than philosophy… as an agnosian antinatalist Ligotti seems to belong to another age, a sort of Poe throwback out of place in our posthumanist zones of metalloid monsters… people like Shirley are closer, yet they, too, are too positive.


  3. “Post-intentional” is an apt descriptor, where non-intentional robots are designed and built in order to implement an ultra-intentional military mission. To prevent/create strategic surprise, to create lasting revolutionary change, to provide tactical advantage to warfighters… there’s intention laced throughout these DARPA explanations. But whose intentions are being fulfilled by these strategic and tactical technologies? “When we discover the Master is a mask for the economic masters” — that seems right.


    • I think what I was really getting at is the first take on what the SyNAPSE group is doing in the mimicry and invention of a bottom-up approach to building a synthetic brain chip based on neurons and synaptic gaps, with a projection of – I do not have the figure off the top of my head – millions of synthetic neurons. With the outcome of a fully working model to be added to a cat-size robot by 2017. If this test works, then they project another five-year movement toward more complex organization and miniaturization… one can see that this might truly lead toward a new post-intentional life form disconnected from human forms. Obviously the military wants it as cure and weapon, but this kind of research raises so many ethical and legal questions – both international and local.

      I see your point that, true, the “intentions” of the military-industrial complex are toward medical and military use without thought of the ethical dilemmas. The military envisions combatant or smart robots of the future that will be able to make their own decisions in the field based on supposed computational models; yet these new types of brain chip are not based on the older computational model, so where this will lead is anyone’s guess. I suspect that the military is already lost among its own outdated philosophical and scientific presumptions and has designs in place for a twenty-year-old, outdated computational model that is already being replaced with a post-intentional model. What I mean by this is that, as I said in my essay on Hume, it’s not consciousness that is important in this new field, and the intentionalism I’m speaking of is not the ethical intentions of the military in what might be their teleological goals for these machines… it’s that there won’t be intentionality – in the Husserlian sense of “aboutness” – connected to the way these brain chips decide or make choices in their decisionary processes.

      Consciousness is irrelevant to these types of machines; they will not have consciousness as we define it, yet they will be operative and able to freely make certain defined choices on their own… the problem arises in the issue of constraint and regulation: obviously the military wants to control, to constrain and regulate these choices so that they carry out the specific instructions of a mission, but what happens when these new machines bypass the constraints and regulatory functions in place and begin to “act” outside the box of those constraints? Will this be a new form of machine thinking? Consciousness? The point of the decision-making process for Hume above was not connected to consciousness but to affect relations… ergo, it is not consciousness that matters but the actual actions of an agent.
      And it is this philosophy of agential action that is the post-intentional… Karen Barad is getting close to this in some of her work, yet she is still bound to older conceptions of phenomenological intentionality which force her into the older molds. Yet the areas of her discourse on apparatuses are very close to the notions of agential action that I’m speaking of… she is working out of a vitalist set of notions that keep her thought bound to human intentionality. What I want to do is move it out of this phenomenological discourse on “intentionality” and see where it takes us… The eliminative aspect is only one half of the project… Bakker realizes this. There is another, positive aspect that has yet to be manifested, and it is to me connected to the notions brought up in Hume, Deleuze and others… and, for me, it has to do with the decision-making process itself, which truly goes on within the brain’s pre-conscious phase and is manifest and becomes conscious only as these decisions intervene and are impressed on reflection.


      • Gotcha — thanks, it’s an interesting realm. No question that non-human devices can do a lot of things better than humans, which includes much complex decision-making without invoking consciousness or worrying about conflicting intentions getting in the way.


      • “… or worrying about conflicting intentions getting in the way.”

        Not exactly what I was saying… turning my words to other intentions of your own? You missed what I said about the military mind’s use of bad intentional usages in trying to constrain and regulate these non-intentional beings toward ends they would not act on if they had such human intentionality.

        In other words, it’s our own false sense of intentionality that gets in the way of the truth of political, military, and other control and command structures. And, of course, there is a whole neuroethical and robot-ethical literature that is questioning just such assumptions.

        So a glib reaction to what is actually happening in real time, at this moment, in these systems of control we are making seems at best worthless and misappropriate. What we need is to critique the culture that has constructed such machines for its own dire purposes rather than the machines themselves. The point being that we could also bring up the other issue, which is prevalent in the inhuman studies of human/animal relations, such as Haraway and others… and incorporate that into the human/machine relation… how will we approach our relations to these new non-intentional entities that are beyond our control? Will we do as we’ve done for centuries and reduce them to some ill-formed humanistic discourse so that we can talk about them on our terms… or will we approach it from another angle: understand them as unique and ubiquitous and deal with them as wholly new entities with rights and legal needs of their own, a sort of machinic revolution in neurorobotic rights with new constitutional codes built on this new relation?


        I realized after posting the comment that I should have said “conflicting motivations” rather than “conflicting intentions,” presuming that motivation precedes and causes intent. But “twisting your words” seems like an overreaction. If the military wants to achieve its objectives, it’s going to meet with far less resistance if it doesn’t have to contend with agents who might have their own conflicting objectives. That’s the point of DARPA building smart non-intentional machines, is it not? Do you infer from my remarks that I approve of this agenda? I certainly approve of humans using tools to accomplish their objectives; I don’t necessarily approve of their objectives, i.e., of the intentions toward which they deploy these smart devices. You say that these non-intentional devices would not pursue the military’s ends if the humans weren’t commanding and controlling them to do so. That’s self-evident, I think. Or do you presume that the non-intentional devices would come up with decisions on their own?

        “In other words, it’s our own false sense of intentionality that gets in the way of the truth of political, military, and other control and command structures.”

        I fail to understand this remark. Are you saying that “we” misunderstand the military’s intentions? I presume that’s what you mean when you write about “the military mind’s use of bad intentional usages.” Or are you contending here that the whole notion of human intentionality is a misunderstanding — an idea that’s been bandied about here and elsewhere quite a bit lately, and to which I’ve taken exception on specific grounds in a number of comments here and in posts at my place. Sure, there is a lot of literature proposing that intentionality is illusory; there’s also literature contending that intentionality is actual, some of which I’ve summarized here and elsewhere. It’s an open question; further research is needed.

        You said in response to my last comment that intentionality was only half the issue, and that you were writing specifically about machines to which intentionality could not be ascribed, looking toward the decision making processes enacted by such devices. Fair enough. How will the devices know whether or not their decisions are good ones? Based on what criteria, what motivations, are alternatives to be weighed? These are good questions, as I said, whether the evaluations are occurring consciously, unconsciously, or algorithmically by machine.

        “Glib,” “worthless,” “misappropriate”? WTF? That certainly wasn’t my (if you’ll forgive the use of the term) intention. I wanted to re-establish points of agreement, even to placate you, since you seemed to regard my first comment as somehow off-topic. Guess my gesture didn’t work out so well. So if I agree that cultural critique is also valuable, will you regard that also as a glib reaction? You’re probably right though: I am kind of glib at times, not as serious-minded as you are presumably about these blog discussions. I’ll move on.


        “deal with them as wholly new entities with rights and legal needs of their own, a sort of machinic revolution in neurorobotic rights”

        My first reaction to this line of inquiry was admittedly dismissive, but you’re right. Though humans have extraordinary mental capabilities compared to other animals, it’s all on a continuum rather than a transcendent leap outside of evolutionary biology. And even though I think there’s good reason/evidence to support the reality of humans forming intentions, this capacity too is part of the continuum. Let’s say that by “intent” I mean something like formulating and enacting an action scheme predicated on a set of underlying motivations/drives and contingent on the opportunities/obstacles presented by the dynamic environment in which the organism must operate. There’s no reason that a robot cannot act with intent in this sense. In fact, they might even be better at it than humans, just as they’re better at many complex decision-making tasks. Robots could have self-preservation and reproduction and social solidarity as their root motivations driving their intentional actions, just as humans presumably do. So what if these motivations have been programmed into them — humans have been programmed too, by evolution, genetics, and socialization.


        The point Bakker is making in BBT theory is that this is a false conclusion. Intent comes after the fact, once the decisioning has been done… it’s a reflection on something that was already done at another, non-conscious level within the brain. Once the decision has been made, we then begin to reflect on that decisioning process as if we had originally decided it: as in my Hume essay, this is an intervention into the mechanics of the double-reflection, produced not through consciousness but through our sociality; i.e., we begin to perceive ourselves as other, as subject, separating ourselves out from the very processes that have already occurred. We take on a historical view of past actions and assume we originally made the decisions rather than that they made us….

        Have you asked yourself this question: Why do I want to hang onto intentionalism? How does it truly explain the truth of anything at all? If I eliminated intentionality what would be left? If I naturalized the intentional rather than idealizing it what would that entail in my thinking?

        Those are the sort of questions that made me realize that Scott was on to something… what I’m doing personally is uncovering the history of philosophy and science that led to such conclusions in the first place. In that way I may better understand just what it means that consciousness is an illusion, and why we are blind to the very processes of our reflective interventions in historical analysis, as well as the history of our Western conceptions of self and identity against other cultures’ sense of this same set of concepts. Our concepts of self, subjectivity, etc. are quite different even from ancient Judeo-Christian and Greek conceptions. Why? What are the factors that brought about our conceptions of individuality: law, sovereignty, politics, social, economic, etc.?

        To say one believes in consciousness and intentionality begs the question rather than answering it. What we need is to question why we came to believe such things in the first place, and whether such beliefs and concepts still hold water in light of the new brain sciences.


    • Since the tepid critiques of economic influence on European states penned by Godwin and some of the more radical French Revolutionary critiques (including quite a few of the women and landless peasant radicals who argued for truly UNIVERSAL suffrage right before post-Revolutionary France decided political decision making wasn’t for them), anarchists and libertarians1 and some socialists (and the less authoritarian of the communists) have argued that the capitalist-economic masters’ heads needed to roll right along with the feudal-political masters’. The Paris Commune made a major mistake in appointing/listening to François Jourde, the person in charge of the Commune’s Commission of Finance, and it was a criticism of the Commune leveled at them afterwards by both anarchists AND Marx ‘n Engels, and I think rightly so. The Commune’s total expenses were less than 50 million francs (most of which was the pay of the Commune National Guard, since so much of the Commune was operating for free and with volunteers, and food and basic necessities were being provided for free in many places and hence easily obtained for free, as it should be), and the total amount between francs seized and tax receipts (which were actually lawfully obtained in the wake of the pre-Commune elections that elected all the radicals to the City) was less than 30 million. However, the vaults of the Bank of France, which Jourde warned the more radical members of the Commune against storming and seizing, contained 88 million francs in gold coins and 166 million francs in banknotes. Jourde took out a loan from the Rothschild Bank, then paid the bills from the 50 million until he tapped them out. Ridiculous.
Jourde justified it by saying that, without the gold reserves, the value of the currency would collapse, but they wouldn’t even have needed to dip into the gold reserves if they had only seized the banknotes, for one thing, not even to mention the fact that there were already Proudhonists, mutualists and anarcho-communists advocating alternative economic systems that did away with the authority of money value prior to 1871 in Europe, some of whom were right there in Paris straining their voices trying to help people understand the Leviathan they fed with their blind assumption of their foreground as built on a background of tacit capitalism.

      All this just to say: even in the most radical situations (the French and American Revolutions, the Paris Commune, Russia in 1917) authoritarianism, whether political authoritarianism as under Lenin and Stalin or economic authoritarianism as under literally every Western attempt to achieve revolutionary change we know of outside of the successes of the CNT and FAI in Republican Spain2, rears its ugly head and leaves the hopes for liberty squirming under its heel. Capitalism may be in a state of decay, but it is a titan of a zombie, with the rotting stump of a neck so far above the clouds we can’t even see whether it has a head or not from the ground.

      The best we can hope for, I think, is building alternative structures within the decaying ruin of the now, ones that can exist and provide the essentials to people for free. The most difficult of those is housing, since the landlord mentality is even harder to drill out of institutions than out of people, and it’s hard enough to drill it out of people. Even someone who was independently wealthy and a secret anti-capitalist/anti-authoritarian would have to pay property taxes year in and year out to maintain free stretches of living space for people to freely live in and be self-sufficient on. As long as capitalism and capitalist ideas of value (especially where the bottom slice of Maslow’s little pyramid of needs is concerned) maintain prevalence, it’s hard to provide these things for free to large numbers of people in the long term. These are the problems that need the application of slippery genius minds who can plunder from the neglected spaces and create a garden of the new to grow on the rot of the old. Markets can go fuck themselves. The only problem with this paragraph? The article above. Maybe some anarchist paradise emerges, no gods, no masters…but with military tech at the level it is now, I shudder to think what it will be able to wipe out in 30 years. Likely any and all projects for real human agency, liberty, freedom, creativity and compassion are so much targeting data for SyNAPSE/ATLAS/BRAIN tech branch-offs of 2044.

      1Not in the American oxymoronic right-libertarian sense, but in the initial 19th century left-libertarian COINAGE of the term by anarcho-communist Joseph Déjacque that everyone outside of America means when they oppose libertarian to authoritarian, even the authority of unregulated capitalist markets. Some people call anarchism “libertarian socialism” as opposed to the authoritarian socialism which took root under Lenin. And yeah, I say Lenin and not Stalin. With Lenin smashing presses left and right, both figuratively and literally, and this in the first five years? On the left (and I think this is very instructive), not just anarchist presses but Marxist-Leninist Bolshevik presses that tried to hold Lenin accountable to his pre-1917 writings were shut down and in some cases outright destroyed. I think a lot of people are familiar with Emma Goldman’s initial hope and then complete disillusionment with Lenin’s experiment, but Russian revolutionary and anarchist G.P. Maximov goes into a lot of damning detail about imprisonments and authorized killings of those on the left who were fervent supporters of the pre-1917 Lenin and his promises. This is all to say nothing of the Kronstadt fuckup in ’21 (and before Stalin took over, Trotsky did some pretty heinous shit to workers in the East, including rescinding the hard-won limit on work hours in a day to make sure resource extraction and steel work production were at the rates he felt were necessary, with violent reprisals on people taking revolutionary socialist action against his bullshit, so Trotsky doesn’t get to say much about the authoritarianism of Stalin IMHO), so don’t tell me about Stalin ruining Lenin’s perfect worker’s paradise; authoritarianism is authoritarianism. But I digress. A lot. Hence the footnote.

2Republican Spain, which the Western powers watched with navies at the ready as Franco crushed it, Franco whom they let maintain a fascist grip over Spain along with his buddy Salazar in Portugal for decades after declaring “Victory in Europe!” over fascism and then in the WORLD in the 40s… shameful and monstrous, but understandable given the success of the anarchist project in showing another world than the authoritarian economics shared by “liberal democracy” and fascism alike. In fact, the anarchist agriculture in Spain, particularly Catalonia and Aragon’s anarchist farmland, was the most productive in recorded history until the Green Revolution that began in the 40s finally matched it in the late 60s. Which means it took anarchists, without the benefit of the science and technology of the post-war period, a handful of growing seasons to achieve what it took post-war science and tech DECADES. Imagine a world where the left in Spain wasn’t infiltrated by the authoritarian U.S.S.R., creating internal divisions that Franco exploited. Imagine a world where Franco was crushed instead of the anarchists. Imagine a world where the initiatives that began before Franco was on the move and carried on during the war, the initiatives where academics came down from their Ivory Towers and taught, for free, the chemistry and physics of what exactly it was the extractors of raw ore, the metallurgists, and the steel workers were doing, directly to the workers who HAD NO BOSSES, no supervisors or managers or owners outside of themselves, who were getting university educations for free because they asked if that was possible and found that it was… imagine that world flourishing, a truly libertarian socialism spreading across Europe, bolstering the French Resistance and helping topple the Vichy and take the fight back to the Germans in 1940 when De Gaulle had first WANTED to and TRIED… ugh.
I wonder when we’ll get the chance to imagine a world without masks or masters again, and what forces will marshal to crush it or wait silently in the wings, masks of grim shadow over sociopathic faces, eyes glittering with lust for power in the darkness…will those eyes even be human? Then again, was it “human” back in ’39 for the capitalist Allies to float in the seas around Spain while capitalist fascists tightened fists around the throats of freedom?


      • I think a better place to start is “Bretton Woods”. It was here that the modern financial systems that are still with us were forged through a series of political and economic – I want to call them debates, but will instead call them by the name they deserve: economic terror.

        As Benn Steil argues in The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order:

        “The Bretton Woods saga unfurled at a unique crossroads in modern history. An ascendant anticolonial superpower, the United States, used its economic leverage over an insolvent allied imperial power, Great Britain, to set the terms by which the latter would cede its dwindling dominion over the rules and norms of foreign trade and finance. Britain cooperated because the overriding aim of survival seemed to dictate the course. The monetary architecture that Harry White designed, and powered through an international gathering of dollar-starved allies, ultimately fell, its critics agree, of its own contradictions. The IMF, the institution through which it was launched, though, endures—however much its objectives have metamorphosed—and many hope that it can be a catalyst for a new and more enduring “Bretton Woods.” Yet history suggests that a new cooperative monetary architecture will not emerge until the United States and China each comes to the conclusion that the consequences of muddling on, without the prospect of correcting the endemic imbalances between them, are too great. Even more daunting are the requirements for building an enduring system; monetary nationalism was the downfall of the last great effort in 1944.” (364)

        In our own time we see much the same dynamic with a subtle difference: it now plays out between China and the United States, with the U.S. occupying Britain’s position in the earlier scenario. As Steil argues:

        “The creditor-debtor relationship between China and the United States today is very different from that between the United States and Britain in the 1940s and ’50s. China and the United States are not allies, yet they are mutually economically dependent to a degree that political rupture would be dangerously costly to both. Whereas U.S. government holdings of British securities during the Suez crisis amounted to a mere $1 per resident, China’s holdings of U.S. government securities today exceed $1,000 per resident. The United States in the 1940s and ’50s was therefore in a position to provoke a sterling crisis at any moment at little cost to itself; China, in contrast, cannot do the same with the dollar today. China believes that the U.S.-dominated international financial architecture is anachronistic and fails to provide adequate security for its economic interests. Yet it can identify no alternative blueprint that does not imply massive financial losses on its reserves, economic dislocation for its export industries and state-owned firms dependent on subsidized capital, and potential social unrest and political upheaval.

        It is tempting to fall back on eighteenth-century Enlightenment thinking, of Immanuel Kant and David Hume in particular, and to imagine that commercial entanglement gives China and the United States sufficient interest in a stable international order that neither would risk provoking a rupture in order to change fundamentally the balance of geopolitical prerogatives between them. This would include the monetary order, and not just the geopolitics of territorial sovereignty in the South China Sea and control of global strategic resources such as energy.

        Yet it is perhaps equally plausible that such a rupture is inevitable, in the same way that British Foreign Office official Eyre Crowe argued that it was between Britain and Germany back in 1907. Irrespective of Germany’s intentions, or stated intentions, Crowe argued, Germany had an unmitigated interest in creating “as powerful a navy as she can afford,” and the very existence of such a navy was “incompatible with the existence of the British Empire.” Britain could not abide it; the risks were too great. Diplomacy therefore had its limits; war had become virtually a matter of time. Though Britain emerged on the victorious side in the two world wars that followed, the financial strain ultimately brought about the liquidation of its empire.

        In a 2005 Foreign Affairs article, longtime Chinese government policy adviser and Communist Party intellectual Zheng Bijian insisted that China “would not follow the path of Germany leading up to World War I”; it was dedicated instead to a “peaceful rise.” Yet a modern-day Crowe might see the same dynamic at play today between a rising China and Britain’s even more dominant successor, the United States. Whatever Zheng or others in the Chinese leadership might say, or even believe, China is going to expand its naval capacity in the Pacific dramatically in the coming years, and this is going to undermine the bedrock of America’s security posture in the region and further afield. The United States will therefore be obliged to counter China’s rise through new patterns of engagement with Pacific countries that China will inevitably find threatening. Deadly conflict, in this rendering, is unavoidable. Former U.S. Secretary of State Henry Kissinger, for one, believes that such a destructive dynamic is avoidable, but nonetheless deeply worrying.”

        Steil, Benn (2013-02-11). The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order (pp. 362-363). Princeton University Press. Kindle Edition.


    • Yes, I understand Scott Bakker’s view on this, and I have interacted with him here and elsewhere as to why, based on scientific evidence from a variety of disciplines in addition to neuroscience, I think intentionality is not merely an after-the-fact misinterpretation. That is, there are substantive bases for disagreement that aren’t “reducible” to guys like me hanging onto our human distinctiveness.

      “Have you asked yourself this question: Why do I want to hang onto intentionalism? How does it truly explain the truth of anything at all? If I eliminated intentionality what would be left?”

      This is precisely my objection to Neuropath. If intentionality is an after-the-fact illusion that doesn’t explain or affect thought and behavior, then eliminating it surgically would not turn the postsurgical subject into an impulsive psychopath. It would have no effect whatever; the subject would think and act the same way pre- and post-surgery. It would be a much shorter, less exciting novel that way, but it would make a more persuasive case as a thought experiment.

      “If I naturalized the intentional rather than idealizing it what would that entail in my thinking?”

      I regard intent as a natural capacity of humans, and eventually also a mechanistic capacity of robots. Idealizing intent would entail regarding it as somehow able to transcend cause-effect, introducing free will via some sort of skyhook. That ain’t me, babe.

