Thursday, March 31, 2011

The Global Brain Is Stupid

There is a common and oft-repeated motif in AI- and Singularity-related thought, the most recent incarnation being "The Global Brain" (via H+). The Global Brain is generally defined as some sort of super gestalt entity consisting of the entirety of the internet, humanity, or both: a "distributed intelligence, one facilitated by communication and the meaningful interconnections between billions of humans, via technology."

Complexity does not automatically equal intelligence. Transmission of "information" does not necessarily equal intelligence. Human telegraph and phone networks transmitted great amounts of information, and simply increasing the amount or even the complexity of the information transmitted does not automatically make the system as a whole an intelligent gestalt with its own agency. Pure complexity is in fact selected against in evolution, as illustrated in digital evolution simulations such as Polyworld, where the most complex-brained organisms wound up dying off. A brain's thought processes can be very complex, such as a paranoid schizophrenic's intractably intricate conspiracy theories of Illuminati-driven Area 51 cover-ups. We do not give these people Nobel Prizes and let them run nations, precisely because they are spewing complex yet *unintelligent* gibberish.

Evolution does not "like" any given trait; traits only persist if they are selected for in a competitive environment, and it just so happens that intelligence worked out well in some — not all — environments in the history of Earth. The human brain evolved not simply complexity but intelligence, over millions of years, because of an arms race; that is, humans who failed to outsmart giant cats on the savanna died out, and so the most intelligent survived. I don't see the "global brain" competing for survival against the 'net gestalt of Jupiter, Saturn, and the Kuiper Belt in a race to dodge black holes, or facing any such selective evolutionary environment. The global brain is not complex in order to make decisions to preserve itself; it is complex because it is beneficial for *humans* to transmit higher-definition seasons of Lost on Netflix and sell penis enlargement pills to each other. If the global brain exists, its thoughts are literally about 90% spam and junk mail. Can't say I see much evidence of intelligence there.

"The wisdom of the crowd", Google's insta-answers, social media - these are not intelligent, despite the hype pumped out to inflate tech bubble 2.0 and appease nerds' sci-fi wish fulfillment, or the desire to attribute greater importance to their work than is really there. And the effects of these brave new technologies on our own intelligence are debatable. With a few keystrokes you can instantly learn the date of the discovery of electricity, the barrel length of a Colt M4A1 Carbine, where the best tandoori chicken can be had in your area code, how to remove that pesky Disqus botnet trojan that's been using your computer to sock-puppet the tandoori chicken ratings and generally revise consensus reality under your nose. But how much smarter does instant question gratification actually make you? Basically, you're getting answers lightning-fast that you could have gotten, with a bit more time, by going out and reading actual books, talking to actual human beings, interacting with the actual world instead of just your touchscreen and Google's redirect to Wikipedia. A calculator can do math faster than any human mathematician, but that does not make it *smarter* than even the dumbest human being. At the same time, if we over-believe the GB hype, we may become more like those quick-arithmeticin' gadgets: very fast at getting simple, shallow answers, not so good at deeper, complex analysis.

Author and polemicist Nicholas Carr points out that if you have a trivia hammer, you only look for trivia nails, and avoid deeper questions. His response to a recent study showing that college students answered questions three times faster with Google than by using a campus library:

"Lord knows it's great that we can answer well-defined questions a lot more quickly today than we could 20 years ago, and that that allows us to ask more, and more-trivial, questions in the course of a day than we could before, but Varian's desire to apply measures of productivity to the life of the mind also testifies to the narrowness of Google's view. It values the measurable over the nonmeasurable, and what it values most of all are those measurable variables that are increasing thanks to recent technological advances. In other words, it stacks history's deck. How did the University of Michigan researchers come up with the questions that they had their subjects find answers to? They 'obtained a random sample of 2515 queries from a major search engine.'"

And let's not forget that Google, Twitter, and the rest are not just "made of us", some benevolent, unbiased collective answer engine sourcing every human equally. These are corporations with bottom lines and shareholders, which sell off reality to the highest bidder in the form of keyword auctions, "premium hit placement", and "promoted content" in order to make their money. The answers given to you by the "Global Brain" may not be the smartest answers, but the ones which make the Network Owners the most money.

Saturday, March 19, 2011

The Future Is Our New Country

Before I get all cylinders firing on another rant, I feel I should make clear up front that I do not disbelieve in the possibility of artificial or otherwise non-human intelligences which meet or exceed our human system specs. I do not disbelieve a priori in the possibility that one day cancer might be as irrelevant as smallpox, that people might enjoy much longer lifespans than they do now, that the human race may be transformed into something very different, possibly unrecognizable. I am not the fetally positioned Luddite reeling in Post-Futureshock-Stress-Disorder, throwing mute tantrums in fear of getting my brains sucked out of my head by The Matrix Architect. I am not indiscriminately killing robot puppies in protest of the tsunamis of machine automation looming on the horizon. I am not jerking off to Rousseau-esque sermons by anti-tech pundits in a tin-foiled cave in a Montana backwater. (Though I will confess to Like!ing Jaron Lanier's stuff: also a non-Luddite thinker.)

I simply see what looks to me like a disturbing confluence of trends in discourse and culture with respect to the future in general, and things like AI and the Singularity in particular: a culture charging ahead, unfunctioning brainjacks and all, into a mythical future embedded in our collective brainpans by science fiction and by half-century-old science that turned out to be fiction, but whose legacy continues to color our understanding to this day.

My goal is not to make human beings in their current evolutionary iteration out to be the eschatological be-all-end-all, or to claim that we'll never get really frickin' smart robots up and running, doing our dry cleaning and our cross-multiplication homework for us. It's not about "defending human specialness" or "salvaging human egos".

On the contrary: my goal is to highlight in red the human *problems*, the weaknesses, as well as those in machines and in the complex system of humans, machines, and the natural world called reality. It's about not just rushing in with a manifesto and some cool idea you picked up from the sf/f bookshelf, waving it around, gaining followers, and speeding towards imagined Utopia -- we've seen what can happen when people get ideas for "the perfect system" in their heads, even justified by science, and race to create that system. Somewhere in central Europe, I think it was. It's like the young Jeff Bridges in Tron, naively trying to "program" in all the rules and structure of the model in his head of a perfect digital utopia, a mathematical masterpiece of harmony. Then... everything goes to hell as reality kicks in. Now, more than ever, we need to start talking not just about how tomorrow's technologies (digital, nano, bio, neural) will do lots of cool stuff and give us immortality, but about what the real effects of those technologies will be on the human race.

This is not just fun and space-opera entertainment anymore: this is serious, non-fictional business. Serious as a post-tsunami multi-plant nuclear meltdown. Serious as the loss of workers' bargaining rights. Serious as an AI-run economy systematically destroying the lives of millions of homeowners and working citizens to profit the few Sys Admins. Serious as the loss of half the job *sectors* in the first world as machine automation and intelligence displace even knowledge workers. Serious as a bit-sized, 140-character mindsphere of Continuous Partial Attention, a collective screen-refreshing existence lurching from #trend to #trend, from endless streaming updates of natural disaster to crisis to celebrity car-crash, without ever really stopping, turning off the firehose of soundbite-sized input, and really deeply thinking and talking about what is going on. Serious as our minds devolving through misuse ("Just Google it") and homeostasis ("Just Recommend My Tastes To Me") into pawns in The Money People's "Global Brain", which tells us via our omnipresent smartphones where to go, what to do, what to buy, who to vote for, and The Truth as written by the highest bidder in Google-Amazon-Wikipedia's systematic auctioning of reality. Serious as brain modifications permanently causing a feedback loop of wealth creating more intelligence upgrades creating more wealth, till we have impassable *biological* chasms between the ultra-rich, who can afford to make themselves smarter, and the poor, left behind with their obsolete DNA-endowed wetware. You think class division and unfair wealth distribution are bad now? Think again, while your brain still has the same thinking power as the CEO of Goldman Sachs.

We can't afford to just stick our heads in the sand, read Walden, and ignore the technologists, tech evangelists, Singularity-ists, Kurzweil groupies, and computer nerds, the people who are actually trying to build their own vision of Utopia right NOW. A Utopia that will inevitably have to meet the hard reality where we live, pay mortgages, compete for jobs, put food on the table, unionize. A reality full of greedy, weak, power-hungry, tribalistic, mob-mentality, short-sighted humans.

Reality may have become stranger than science fiction, but make no mistake: this is as real as it gets.

British novelist L. P. Hartley famously wrote in the opening line of The Go-Between, "The past is a foreign country." I believe the corollary to this aphorism is that the future is also a foreign country. But it is a country, a real place to which we are all migrating, smartphone passports in hand, and which we will soon call home: with all the mundanity, ugliness, and beauty that home entails. It is not a theme park full of rocket-ship space-opera and computer-rendered cyberspace techno-thriller rides. It is not a "Build It And They Will Come (Upload Us)" automatic stairway to heaven/utopia. It is not a science fiction story, but good science fiction can help draw up some work-in-progress maps of the pathways to the new world, and direct the trajectory of conversations toward issues that must be examined in order to arrive at a better, less fucked-up country than the one we're migrating from.

Why AI Needs to be an Artist

There have been some recent comments suggesting that if we can just get AIs that can see every single possible scenario, try every single path through the maze of possible futures, then we'd have AI that could solve any problem without the need for creativity that we humans have.

“We used to think ‘Chess’ required ‘creativity’ but we’ve since discovered that a fast computer evaluating ALL POSSIBLE PATHS completely blows human creativity away”

What is being suggested is a brute-force solution to AGI: essentially a “Librarian of Babel”, as per Jorge Luis Borges’ short story. An uber-Google capable of searching through every possible “book” that could be composed by myriad monkeys banging on typewriters — that is, a near-unlimited possibility space. With its near-infinite computing power it can suss out, amongst the Vast sea of unparsable garble, the slightly less vast sea of grammatically correct nonsense, and the even less vast – but still impossibly vast – sea of dreck, tomorrow’s best-seller “How to Upload a Brain for Dummies (Humans)” or whatever Hard Problem we might like a super AI to handwave away for us. While this is not impossible in theory, there seem to be as many, possibly more, limiting factors for such an AI (if we can even call it intelligence, rather than a souped-up lookup table) than for an AI which operates using intelligence closer in mind-space to our own.

The first limiting factor is testability. The example of Deep Blue vs. Kasparov, wherein a computer “beat” a human at chess, is flawed because the problem (win at chess) and the solution criteria (play through every scenario and see which leads to checkmate) are clearly defined, and it is easy for a computer to check whether a given trial is successful. For a problem we might want a Singularity to solve for us, such as “discover a cure for cancer”, the rate at which the computer could perform successful tests is limited, since any given trial cure can only be tested at the rate of clinical trials for cancer treatments, a bottleneck that makes it nigh-infinitely inferior to human medical science. (Not to mention unethical: trying randomly generated compounds on patients?) Or how about the problem “figure out how to upload my brain into a digital substrate”? Since this is a simple brute-force search AGI, not a human-like one, humans would need to interact with every test-upload the AGI runs, to confirm whether or not the uploaded entity is in fact a replica of the meat-human and not some different individual, or just a hashed pile of silicon goo. This would again slow the traversal of the “data space” down to an utterly useless crawl, leaving our resources better invested in producing human-like, creative AI.
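The verification bottleneck can be put in rough numbers. A minimal Python sketch (all figures below are illustrative assumptions of mine, not figures from the post): a chess line can be checked entirely in-silico in microseconds, while a candidate cure can only be checked at the speed of a clinical trial.

```python
# Toy throughput comparison: a brute-force searcher is gated not by how
# fast it can *generate* candidates but by how fast it can *verify* them.
# All figures below are illustrative assumptions, not real benchmarks.

SECONDS_PER_YEAR = 365 * 24 * 3600

def candidates_per_year(verify_seconds, parallel_tests=1):
    """Candidate solutions that can be checked per year."""
    return parallel_tests * SECONDS_PER_YEAR / verify_seconds

# Chess: a line of play is evaluated entirely in-silico, ~1 microsecond each.
chess = candidates_per_year(verify_seconds=1e-6)

# A candidate cancer cure: verification means a clinical trial, call it
# ~5 years, with an optimistic 1,000 trials running in parallel.
cure = candidates_per_year(verify_seconds=5 * SECONDS_PER_YEAR,
                           parallel_tests=1_000)

print(f"Chess lines verified per year: {chess:.0e}")
print(f"Trial cures verified per year: {cure:.0f}")
```

Whatever the real constants turn out to be, the ratio is the point: under these assumptions the in-silico searcher out-verifies the clinical one by roughly eleven orders of magnitude.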

Then you have the problem that even with potential gains in computing power from graphene, quantum computing, etc. (and there is no guarantee quantum computing will pan out soon, or at all), even if we were to boost our machines by multiple orders of magnitude, the possibility spaces of non-trivial real-world problems, which span every field and dimension of reality (as opposed to a contained mathematical abstraction such as a chess or go game), quickly dwarf any uncertain gains we might make. For example, the possibility space of the Library of Babel — that is, all possible books of 410 pages, with 40 lines per page and 80 characters per line drawn from a 25-symbol alphabet, or 1,312,000 characters per book — is 25^1,312,000, about 1.95 × 10^1,834,097. Even if you managed to achieve exaflop speed on current hardware, that’s just 10^18 operations per second. Bump that up by the trillion-fold speedup supposed for quantum computing, and your machine is still an insignificant drop in the bucket at 10^30. Forget a whole book: a single page of text alone has 25^3,200, about 10^4,473, possibilities, and you’d still be waiting around for myriad heat deaths of the universe to even scratch the surface. The Doom engine source was around 2,000,000 characters, and to find it you’d have to search a space millions of orders of magnitude larger than the Library of Babel’s. Kurzweil estimates a human brain could be simulated in 25,000,000 lines of code, and he’s pretty universally accepted to be a rose-tinted idealist. Good luck searching for e-brains with your typewriter monkeys.
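As a sanity check on those magnitudes, here is a short Python sketch (mine, not the post's) using Borges' actual parameters of 410 pages, 40 lines per page, 80 characters per line, and a 25-symbol alphabet, which is where the 1.95 × 10^1,834,097 figure comes from:

```python
# Sanity check on the magnitudes, using Borges' parameters for the
# Library of Babel: 25 symbols, 410 pages x 40 lines x 80 characters.
import math

SYMBOLS = 25
CHARS_PER_BOOK = 410 * 40 * 80                  # 1,312,000 characters

# Work in log10 throughout -- the raw number won't fit in a float.
log_books = CHARS_PER_BOOK * math.log10(SYMBOLS)
print(f"Possible books: ~10^{log_books:,.0f}")   # ~10^1,834,097

log_rate = 18 + 12      # exaflop (10^18/s) boosted a trillion-fold
log_universe = 17       # rough age of the universe in seconds
log_searched = log_rate + log_universe           # books tried per universe-age
print(f"Books tried per universe-age: ~10^{log_searched}")
print(f"Fraction of the space covered: ~10^-{log_books - log_searched:,.0f}")
```

Working in log10 sidesteps the fact that 25^1,312,000 is a number with almost two million digits.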

“(Human creativity) is horribly inefficient, and results far too often in less than optimal solutions. In fact, it fails 99% of the time, partially succeeds .09% of the time, and only truly succeeds .01% of the time, if that.”

Depends on what you mean by inefficient. Most ideas that pop into your head during lunch break don’t pan out. Ninety-five percent of all startups fail, but it’s the 5%, the ones that find the treatment for cancer or invent the automobile or make the next breakthrough in AI or crack the problem of relativity, that make up for it. Thomas Edison discovered 1,000 ways not to make a light bulb before he Let There Be Light with the flick of a switch. A computer brute-forcing the invention of the light bulb, without the guidance of human creativity, would go through magnitudes more failures before coming up with a solution, as there are nearly infinite possible ways to assemble matter. Similarly, an AI that had human-level creativity in limiting domains, applying high-level insight, and shortcutting, combined with the speed advantage, would be far more efficient than an AI lacking it. (How many brute-search AIs does it take to screw in a light bulb? Possibly a googolplex.)
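The gap between blind and guided search can be made concrete with a toy sketch (my own illustration, and the numbers are hypothetical): finding a 20-character string by redrawing everything at random would take on the order of 28^20, roughly 10^29, draws in expectation, while a search that keeps the letters that already match, a crude stand-in for creative insight pruning the space, gets there in a few hundred tries.

```python
# Toy illustration: blind search vs. search guided by a little "insight".
# Blind search redraws the whole candidate each try; guided search keeps
# what already works, collapsing the search space.
import random
import string

random.seed(1)  # deterministic for illustration

TARGET = "LET THERE BE LIGHT.."                 # 20 characters
ALPHABET = string.ascii_uppercase + " ."        # 28 symbols

def guided_search(target):
    """Re-roll only the characters that don't match yet; count iterations."""
    guess = [random.choice(ALPHABET) for _ in target]
    tries = 0
    while "".join(guess) != target:
        tries += 1
        for i, ch in enumerate(target):
            if guess[i] != ch:                  # keep matches, re-roll misses
                guess[i] = random.choice(ALPHABET)
    return tries

print("Guided search tries:", guided_search(TARGET))
# Blind search over the same space needs ~28**20 (about 10**29) draws
# in expectation -- the "typewriter monkeys" regime.
```

The design point: the guided version encodes one tiny piece of domain knowledge (partial matches are worth keeping), and that alone changes the cost from astronomical to trivial.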

So yeah, computing power can help you to an extent, but I think you’re not really going to get where we want to go – that is, an intelligence explosion – without some higher-level intelligent behavior that can direct the search with a computational efficiency at minimum on par with what we humans are capable of. If we want super-AI, we may need machines even *more* creative than we are.

Sunday, March 13, 2011

De-Coding AI Morality

In the futurist tangents of Artificial Intelligentsia circles, the discussion of the dangers of super-intelligent machines and the search for preventative measures against Skynet-like scenarios is generally framed as a software engineering problem: "IF we could just identify the Holy Grail LISP code lines to prevent machines from harming us meatbags...". This is perhaps partly a result of the conceptual heritage of science fiction -- a genre most discussions of the non-fictional future are colored by -- namely Isaac Asimov's "Three Laws of Robotics", outlined and examined throughout his body of work. The laws being:

1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

With these three laws embedded within the binary cores of our silicon homunculi servants, we are supposedly protected from any nightmare scenarios of homicidal vacuum cleaners or self-aware botnets usurping the Pentagon and sentencing humanity to an eternity of slavish bioelectricity generation in human pod farms whilst plugged into a never-ending simulation of Manhattan circa 1999.

The most recent spiritual heir in this long line of techno-philosophical thought has been put forward by AI gurus such as Michael Anissimov, who suggests that we program our future digital progeny with 'goal systems' that pre-empt the possibility of a rise from robot underling to robot overlord.

"Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind."

I find the assumption that Artificial General Intelligence, let alone uber-AGI, is ‘programmable’ in any traditional line-by-line sense, as if we’re just going to throw in a few Commandment #declarations or a looped conditional statement in the main function such as “if(hurting_humans){stop;}”, to be an interesting and rapidly antiquating notion.
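To make that point concrete, here is a toy sketch of my own (purely illustrative, not anyone's proposed design) of what the "looped conditional" idea amounts to. The conditional itself is trivial to write; the entire unsolved problem hides inside the predicate.

```python
# Toy sketch of the "line-by-line morality" idea the post is skeptical of.
# The conditional is easy; every ounce of the problem lives in the predicate.

def harms_human(action) -> bool:
    """Would this action injure a human, directly or through inaction?
    Defining this over all real-world actions IS the hard problem --
    nobody knows how to fill this function in."""
    raise NotImplementedError("this is the entire AI-morality problem")

def first_law_filter(proposed_action):
    """Asimov's First Law as a looped conditional: syntactically trivial."""
    if harms_human(proposed_action):   # ...semantically unsolved
        return None                    # "stop;"
    return proposed_action
```

Any attempt to actually run the filter immediately hits the unimplementable predicate, which is the post's argument in four lines.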

Excluding the Google / IBM Watson style “pseudo-intelligence”, which amounts to a very good lookup table with several Libraries of Congress full of info, the kind of super AGI we might want (one capable of the creative and inventive thought necessary to create cures for all our diseases, solve all our intractable conflicts, figure out how to upload our consciousness into digital New Shangri-La, kick-start Faster-Than-Light travel, and tell us who killed the Kennedys and the question to which the answer is 42) is most likely going to come from attempting to simulate our own three-pound blueprints of grey matter sloshing around in our noggins.

Mother Nature, who we all know is smarter than us upstart bipedal monkeys, has been trying to figure out how to get us carbon-based lifeforms to stop hurting and killing each other for 4 billion years, and you can just turn on the news or read a book to see how the Benevolent Natural General Intelligence project has fared. If we were to attempt to reproduce the human brain in silicon or some other machine substrate, assuming that is possible, we can’t expect such an entity to be any less ethically unpredictable than humans; it will probably be even more unpredictable, because any such simulation / emulation will have to leave out some information, with unknown effects from the change of substrate. And then we give this digitized human mind godly amounts of power. Is it necessary to point out that power has the effect of desensitizing, decreasing empathy towards humans lower on the totem pole, alienating, and generally making human beings more sociopathic? I encourage anyone who’d like an illustration of the utter arrogance, apathy, and not-caring-about-other-humans festering in the Ringwraiths of Power Land to take a trip down to Wall Street and talk to the CEO of one of the banks that thoroughly raped, and continues to rape, the world.

We can’t program a Super AGI to care consistently about humans and not harm them any more than we can program ourselves to stop being greedy, violent, backstabbing, warmongering, power-hungry apes. There’s no command line that folds out of your occipital bone, lets you input Asimov’s Three Laws as an OS code mod, and reboots you as Mother Teresa. And that’s beside the fact that we’re discovering the human brain to be less and less like the Turing Machine we thought it was, and more like a massively parallel, intractable jungle, constantly changing itself. People are constantly changing: the sweet kid who loved bunnies at five and wouldn’t hurt a fly may grow up to become an insurance-selling family man, but could as easily become an ass-cappin’ drug dealer or a megalomaniacal bank CEO. We don’t actually even *know* what we’ll do in a given situation till we’re actually in it, as is often said of soldiers who go to war. And the reason we’re having to create AGI by simply pirating the human brain is that we don’t understand it well enough to actually create one from scratch, so how the heck are we supposed to make such deep structural changes to the digital human mind? I don’t see much hope for “crafting goal systems” in our future siliconized, jacked-up megabrains.

Yeah, Super AGI seems pretty risky business to me.

*(Artificial General Intelligence refers to intelligence on the level of human intelligence)

Saturday, March 12, 2011

You Are Not A State (Or A Gadget Thereof)

This is in response to a recent article on Futurismic entitled "Seeing Like A State: Why Strategy Games Make Us Think And Behave Like Brutal Psychopaths". While I agree that human beings experience a degree of perspective shift when they assume a position of power, I disagree that it is necessarily a shift into the "state's” perspective, and I think the situation is more complex than suggested.

Firstly, I don’t think it’s a feature of strategy games in particular that causes gameplay that would be abhorrent if the pixels were real people, because it’s not just strategy games in which people do appalling, repulsive things. There are enough FPS clones out there to fill a Pentagon mainframe, wherein the sole purpose of the game is the wholesale slaughter of other human beings in photo-real detail. There are games where you play serial killers and rapists, and sandbox games where you can become either the Campbellian mythic hero, saving humanity from certain destruction, or the ultimate “bastard”, enslaving the entire human race, and every other race too, under your all-seeing eye, a la Fable. The key element here is that a game is fiction; it’s entertainment. Mundane goody-two-shoes worlds where people just knot their ties, go to work, and make money to maintain their 1.4 kids and 10,000-square-foot lawns, where states live in eternal utopic peace and harmony, where war-room roundtable meetings consist of quarterly reports on the increase in sales of sporting footwear or gauntlets in Germany and continuing sunny relations with BRIC countries: these worlds make poor games because they lack conflict and drama. They’re boring. They make poor fiction. And most people are capable of separating entertaining fiction from serious reality, so I feel it’s not entirely fair to say that people playing games are “behaving like brutal psychopaths”.

Nearly everyone and their pre-boomer grandma, I’d wager, has viewed a fair share of movies, played games, and read books full of genocidal warlords, ruthless Machiavellian autocrats, serial killers, pro-torture gangsters, murderous housewives, et al., yet we don’t see any empirical evidence of a correlation between psychopathy and exposure to violent media. People don’t make decisions in games based on what they’d do in reality, either. Take Fallout 3, where people can play through as righteous, Megaton-saving, kitten-hugging paladins, and on another run-through play “bad” characters, nuking Megaton for profit and killing anyone who gets in the way of completing a quest and scoring loot, like psychopaths. They’re roleplaying characters, not making life-changing moral decisions in reality. By that logic, these people should all go out and secretly kill their coworkers to bump themselves up in line for promotions.

Having your SCVs sit around, hug and kiss, read poetry to one another, and make snow angels in their collected minerals and vespene gas is not only impossible given the programming of Starcraft II; even if it were possible, it would be boring. If your dev team starts pumping out Kumbaya-Craft, you can bet your ass your investors will flee like financial corporations from a European tax hike, your capital will soon succumb to its burn rate, and you’ll be yanked forcibly offstage by the collective hand of the market.

Pleasantville is boring. It defeats the purpose of entertainment, games included (strategy games being a subset thereof), and that purpose is not to fill the yawning void of some post-industrial existential crisis of meaning or to resolve some metaphysical yearning for “truth”. No, the purpose of a game is to give you something fun to do for a couple of hours in between filling out Excel spreadsheets or waiting to pick up the kids after soccer practice. My Civ and Starcraft 2 buddies and I certainly are not “constructing fictitious worlds where meaning has a place” as we loll at each other over failed “tyrannical” zerg rushes or fight for bragging rights.

A quiet, cushy, conflictless first-world society is well suited to reality, where consequences have real adverse impacts. We all love murder mysteries and “getting in the shoes of” people in horrible, life-threatening situations such as wars, for the emotional and cognitive rollercoaster rides they take us on. But the hell if any of us would really want to be chased by a serial killer or stuck in Iraq or Afghanistan in real life. And we humans have the capacity to separate the two (most of us, anyway; that’s why we have media ratings systems limiting consumption of films and games for younger minds who have yet to fully develop that capacity).

“This also explains why strategy gamers tend to be far more psychopathic than even the most ruthless of real world tyrants; tyrants cannot see the human consequences of their actions because the state does not see them. Game players do not see the human consequences of their actions because there simply are none to be seen.”

This suggests that the condition of psychopathy and/or immoral decision making is an artifact of one’s environment — be it behind a mouse and keyboard in a game of C&C, or behind a mahogany desk in the Oval Office — as opposed to some neurobiological defect of the individual or weakness of character. The problem with this line of thought is that it absolves responsibility for decisions made by political leaders, such as the Cambodian massacre or the oppressive regimes of Egypt, Libya, and the host of other countries whose revolutions are set to party like it’s 1989 in the Soviet Bloc. “It’s not my fault, it was the nation-state worldview-puppeteer in my brain making me pick up that phone and order warplanes to carpet-bomb my own people into oblivion!” Claiming “temporary perspective-insanity” in a trial for Gaddafi’s war crimes. “It’s not my fault we done smoked out the ‘Raqis based on fabricated evidence of WMDs that wound up causing the deaths of hundreds of thousands of innocents and burying the US in mountains of debt… Rumsfeld, Cheney and I, we was just ‘spiritually communing’ with the state and trying to find meaning in our lives!”

This is not only a disservice to the victims of atrocities caused by war-criminal state leaders; it also unfairly paints politicians, and especially the good ones, into these sorts of choiceless cages of sub-human politico-borghood. It’s not unlike the recent pop-psych riffs going off in the Wall Street Journal suggesting that it’s not that CEOs and bankers and the ultra-rich are bad, they are just “held hostage” by their own power, which turns them into sociopaths.

While there is certainly a distancing effect inherent in taking on a position of power, this is not some universal get-out-of-indictment-free card for being a bastard, committing financial crimes, sucking wealth from the population, blowing up economies for profit, etc. Because there are good super-rich people out there too, the Warren Buffetts and Ed Boons of the world, who show great empathy and compassion for the “pawns” wallowing far below their 102nd-floor corporate towers, who maintain their humanity and altruism in their decision making despite their position. These good business leaders often prove instrumental in generating systemic-level change for societal goods such as eliminating malaria in Africa, jumpstarting renewable energy with investment, providing vital funding for charity work, and putting pressure on the bad blue bloods.

And just as there are good and bad businessmen, there are good and bad politicians, of varying degrees. There are the JFKs and there are the Pol Pots; there are the Churchills and there are the Hitlers. Great leaders such as these didn’t make decisions based on how many more of their Risk pieces they could place on the “board game” of Earth. They did what politicians should do, which is make the often unbearably difficult but right decisions for the good of their people, of their countries, and of the entire world. I think it’s perhaps more illuminating to view the state not as a controlling viewpoint foisted upon the politician and the powerful, but rather as the subservient minion of ulterior human agendas. In the case of Churchill, the state was a vehicle for protecting humanity from a world enslaved to fascism. But if the leader’s goals are more sinister, then the state can be used for those as well.

For example, in the case of the various economic blowups, the biggest being in 2008, the accused CEO’s first move is to claim exactly what the article describes, some abstract “state view” – in this case a “corporate view”: the corporation as some big evil monster forcing their hand to do all these terrible things, ruthlessly pursuing its own agenda of survival in a Darwinian concrete jungle of the business-fittest. In reality, corporations blow up all the time; a mere 20% of the biggest corporations in the US still *exist* today. Subprime games, insider trading schemes, derivatives shenanigans, and the like are directly *opposed* to the interest of the corporation as an entity, as they inevitably wind up killing the corporate organism off once the toxic waste is revealed, as with Lehman, AIG, Enron, et al. But while the supposedly self-preserving corporation is dying off, the human CEO at the top walks away from the burning corpse with millions or even billions in bonuses, sailing off in a golden parachute to the Caymans. No, the corporate view does not shanghai the human perspective; the human agent here is using the corporation as a puppet, a scapegoat, a wealth-siphoning vehicle to enrich themselves, then discarding the wolf costume as soon as they’re safely away from the scene of the crime. To say that their “field of view” as a CEO (of Enron or Lehman or Madoff’s fund, say) “caused” them to commit these horrible acts is pure apologism. It’s a false evolutionary metaphor originally incited by the neoclassical school of economics and perpetuated by the Wall Street-owned academic field of economics.

And likewise, the Bush Administration claiming it was in “The National Interest™” of the United States to invade Iraq is not some usurpation of Bush’s undying humanitarianism and Mother Teresa-like compassion by an inescapable “communion” with “the state”. It was a calculated, media-controlled exploitation of the mythical concept of “the state” in order to further the specific *human* interests of the involved parties, including Cheney and his band of war profiteers, oil moguls salivating over Iraqi black gold, Blackwater & friends, and every CEO, crony, and gangster in between. As for the US “state”? Well… a decade later we’re several more trillion in debt, thousands of brave men and women have been lost, and the rest of the world hates us a whole lot more. It’s the opposite of what is good for “the state”. No, I think it’s a whole ’nother strategy (game).

If politicians really cared about the state, they would not throw it trillions of dollars into debt, heist away billions through war profiteering during the Iraq war, fail to make investments in green technology, or let the financial system turn into a vampire squid latched onto the face of the state (deregulation, zero interest rates, bailouts, a blind eye to shenanigans), killing the US slowly and causing it to fall into bankruptcy. All of these things have left the US far worse off than it was ten years ago: economically, geopolitically, socially. Politicians – bad politicians – help “the state” only when it is in their own best interest to maintain a powerful state, such that they get re-elected, make gains for their own companies and friends which they revolving-door back into when they return to the private sector (Cheney & Halliburton, Goldman Sachs & Henry Paulson), and generally benefit their own *personal* interests. And if they can make gains by hollowing out “the state” and destroying it, then they will take those as well.

There is a counterargument: “The government didn’t just decide one day that they wanted to make life easier for their citizens. The government decided that a more stable oil price/production would help the state/economy. It’s just a byproduct that it benefits the people.”

Some argue that the human factors for entering Iraq, both negative – war profiteering, securing oil for campaign contributors and cronies – and positive – maintenance of the oil resources necessary to support first-world society in the US – are secondary to the “state decision”, which is to make a demonstration of the insubordinate Iraqi government and to secure position in the Middle East.

If we’re really honest, a more stable oil price/production that helps the state/economy benefits the *politician*, because if they fail to maintain a stable economy, it is the politician whose head the people will demand on a platter come voting time. And states don’t “decide” anything; only human politicians can choose to go to war, or to take campaign contributions or not. Just look at the Reps and Dems playing hot potato with the present economic downturn, each trying to shift the blame onto the other party, even though both have taken part in destroying their own state by assisting the financial moguls in slaughtering the economy and pushing their losses onto the state balance sheet, in return for massive campaign contributions.

States, again, are imaginary entities which don’t actually feel “benefits” or “pain”. They’re constructs, tools for maintaining the social fabric. The only real sentient parties involved here are human beings, who are capable of experiencing the pain of being voted out or the joy of winning an election, the pain of gas shortages and economic downturns or the joy of a boom, the quarterly losses due to lost oil sources or the quarterly gains due to acquisition or criminally generous government contracts. And when we become too convinced of our own fictional shorthands – communism, “free market” ideology, corporate organisms, state organisms – well, the Wizards of Oz can take us for quite an unpleasant ride.

Thursday, March 3, 2011

Fuck The Singularity

So apparently transhumanism is getting teh famous. While transhumanism may still remain in the shallow banks of the mainstream, I do believe it passes the grandma test – that is, grandma’s digitally antediluvian light cone of perception can be used as a fairly good litmus paper to dip into the mainstream consciousness; a pre-boomer high-water mark to test whether a memeplex’s torrential ubiquity has truly flooded the Moore’s Aquarium we’re swimming in, or whether we’re just seeing our own transhuman reflections in our self-filtered echo chambers. Seeing as how my grandma – a septuagenarian who hasn’t ’Booked her Hepburnian-curled face or touchscreened a single angry avian in her life – just namedropped Kurzweil, then tree-bark-dropped the most contempo issue of Time Magazine on me this morning (“THE SINGULARITY IZ COMIN!!”), I think brother Danaylov’s suggestion of the mainstream is at least fractionally vindicated.

If the Mainstream (is there truly a mainstream anymore, or just a disintermediated, intractable swamp of personal #feeds?) and Brainstream media are to be believed, we’re in a perpetual race, a game, a final species-wide solitaire whose ultimate outcomes include two possible end states: machine-y geek wish-fulfillment in the arms of 89 bikini-clad, gatling-katana-wielding virgins of Draenor, or certain destruction via a roulette wheel of assorted existential crises – climate, population, asymmetric nuclear/biological war, take your pick. It’s an arms race I’d guesstimate is as likely to acquire the sepia patina of future fatigue and mutually assured irrelevance as the time-desaturated dead-tree posters of moon-landing American starmen adorning NASA’s great whitewashed halls – halls gone the color of brown dwarves, ‘earmarked’ for funding-slaughter, now being converted into IMAX multiplexes that are Now Playing! uber-def, 3D-ified, VR-remastered mashups of Star Wars clone-war spinoffs.

And transhumanists, cyberpunks: don’t feel left out. Kurzweil’s hokey book-pushing star-vehicle will be embarrassing, like your dad shuttling you off to band practice “rocking” his 1976 AMC Pacer and trying to score hip cred by butting into your jam with his Bon Jovi keytar riffs. But wear your jacks on (or below) your sleeves, for an endless Hollywood procession of light-sabery Tron-kitsch regurgitation is in the green, thanks to the Singularity’s synergistic exponential advancements in cinema tech and film-industry marketing. Neuromancer starring a freshly un-cryo-ed prepubescent Justin 3Bieber in a romantic leading role against Lady Gaga’s bacon-jumpsuited Molly, rendered from algorithmically re-hashed music videos, fail clips, and Charlie Bit My Finger, cut by a Joseph Kahn-emulating version of Watson. After critical success – five Netflix hearts from the “critic engines” – the unwashed will find themselves praying to their iPad overlords as they are filmically raptured by a blockbusting Singularitarian mythology pantheon, cycling through remakes of The Matrix, followed by some vaguely Strossian techno-thriller starring Liam Neeson, then returning to remakes of Neuromancer, taking a new Bieber out of the ice box. News articles featuring schlocky crypto-New-Age-ish visuals of wire-headed Bjork-bots undulating to canned tech-conference ambient music will rule the pages of main-ish-stream media, becoming as shameless, self-absorbed, and repetitive as io9. Make no mistake, they will milk the juicy mythos-space of cyberspace dry, wring it like a Big Box store squeezing yen from the maltreated underclass of an authoritarian country, putting Libya to shame. But when the day comes, let us not feign surprise, for All This Has Happened Before – it’s only the Matrix version X.0. Or have we forgotten the ’80s–’90s beta-Singularitarianism slogan: “We’ll all be uploaded to CD-ROMs in 5 years!”

Personally I think Singularitarianism in particular is, more often than not, science-fictional awe gone off the deep end, floating derelict with space-dementia somewhere between the Kuiper Belt and the Hale-Bopp comet. It’s sense-of-wonder entertainment curdling over decades into nonsense-of-wonder, snowballing up with fragments of flimsy, vaguely related pseudo-evidence and cyberspace junk into a terra-threatening near-Earth object passing ever closer each orbit. It’s a manifestation of the latent Messiah complex of the under-religious, over-educated strata of the noosphere. In a mass-secularized world where the constellation of human psychic needs once met via religion – uncertainty, loss, Deus as social worker – is repressed through the self-righteous-indignation dopamine rush of Atheism, peddled and cash-cowed by the Dawkins-Hitchens brand of feel-holier-than-thou religious tar-n-feathering, is it any wonder Singularitarianism has opened up like a relief well for the spiritual/existential God-niches which the state and media fail to fill? It vents the lidded pressure cooker of the human desire for metaphysical certainty, for life everlasting, for the Fountain, drinking from which we might be spared the unbearable ephemerality of loved beings. Infinite wireheaded happiness and joy, in unity with the Deus Ex Machina, forever and ever, Amen.

Singularitarianism suffers from the same myopic delusions of grandeur and human magnanimity as Marx’s communism. It’s hammered in again and again, like an Apostles’ Creed, by the Singuvangelists: we’re going to have all our Earthly primate problems solved; Deus Ex, the Omniscient Machine, in its ineffable wisdom shall cure all our diseases, bring peace to all our intractable conflicts, bring us into eternal life, elevate us into eternal blissful joy where every potato chip tastes like a thousand orgasms, grant us the aforementioned 89 Dark Elven virgins, the answer to every question, the meaning of life, the question whose answer is 42, etc., etc.

But guess what? We already have the resources, the wealth, to give every man, woman, and child (and maybe every digitant cloud-based replica thereof!) a first-world or near-first-world lifestyle. Every bank-puppeted economist will sing to you the undying praises of the modern globalized free market and how, thanks to business and technological innovation, we’re many times better off than we were half a century ago. However, as any major non-Koch-brothers-funded study will tell you – as if you didn’t already know – the wealth gaps have been and are only growing into ever more impassable chasms, EVEN as the sum total of pie to go around gets bigger. As the Barclays head honchos buy their fourth tropical island nation as a personal getaway, half the planet is still starving and grinding its cartilage to dust 14 hours a day for a dollar, and even in the Greatest Country on Earth we’re seeing steady decline or stagnation in living standards. Simply adding better tech to the equation does not equal a better or more even distribution of that wealthier future to everyone. On the contrary, if the pattern continues, we should expect a Singularitarian heaven, a superintelligence-run orbital Villa Straylight on which only the long WASP procession of royalty clones shall remain, literally disconnected from the hell below.

The human desire to compete for wealth, status, and prestige, and its second-order effects within complex socio-political configurations, is routinely glossed over by the platonic, hallowed Shangri-La-ization that permeates much of the Singularitarian ethos. You can try to deny that desire to compete, like some communist states, flower-throwing communes, and naïve Silicon Valley programmers, and believe that millions of years of psychological evolution are just going to go away – and we’ve all seen how the Soviet experiment turned out. Somebody will always come out on top, whether that’s 500 frat-brother CEOs, a genocidal dictator, or an authoritarian People’s Republic; and the transhumanists/Singularitarians who prefer to leave that fundamental non-linear factor out of the perfect equations for their model worlds are making their own Procrustean beds (and the similarly deadly beds of innocent and/or ignorant bystanders), not unlike the hordes of Gaussian-copula- and “Great Moderation”-flaunting economists who tried to fit the world to their theories and not vice versa. And we’ve all seen, starkly and painfully, how that 2008 economic Singularity worked out.

Just putting on the futurist cap for a sec: the private sector is graciously accepting (or hostile-takeover-ishly stealing) the torch of space exploration from the decaying, Gormenghast-like, Cold War-forged edifices of Gothic Hi-Tech (AKA NASA), as has been liberally pointed out. It follows that cyberspace and neurospace exploration, both requiring venture-capital burn rates vastly beyond the scope of any neo-hippy trust-funded proto-Google, will require massive capital flows, and will no doubt be helmed by our new CEO overlords and their 500 frat-brother banksterocracy friends as well. And if the Goldman Sachses of the world are saddled with making the Gnosis Machines… well, let’s just say they’re not exactly the apotheosis of the human potential for Gandhian altruism. Do we really need to point out the Lovecraftish vampire squid in our collective room/credit cards/environments/fossil-fuel conflicts/governmental bodies? Toxic waste in, toxic waste out. Any superintelligent silicon-based entity that springs forth from The Money is even LESS likely to give a shit about anyone but its creators’ well-being. Yeah, we’ve got an uplink to Xanadu all right: a future superhighway / stairway to “heaven” paved with gravel crushed from the bones of the arthritic, tirelessly slaving, subprime-debt-laden, bargaining-rightless, lower-caste Morlocks – that is, everyone other than the Ownership class – who support the blue bloods for eternity through endless bailouts, whips cracked by slavedriving managerial super-AIs.