Philip Glass’ original composition for Sesame Street.
Explains a bit about my subconscious, I suppose.
Hearing the commotion from the hall, the designer puts down his scalpel. A security officer comes in through the swinging double doors of the lab, out of breath, as the designer stands. “No problem here, zir. One of them writers broke out of containment. It took the full charge on two tasers, but we got it wrangled now. We’re taking it back to the tank, and then we’ll be back to clean up the mess on the floor. Wish they didn’t always void their bowels like that….” And then the officer is gone.
The designer sits back down on his minimalist steel stool, and picks up the blade. It might be part of the realities of doing design-fiction, but an interruption is an interruption. Increasing the magnification on the goggles, the designer brings the scalpel low over the text for another slice. The page shrinks back instinctively, as the sharp edge parts its fibers…
I couldn’t consider myself much of a young writer knowledgeable about the technological zeitgeist if I couldn’t preach to a particular choir about the particular concept developed in the last five years known as “design-fiction”. Like anything else these days, the truth no doubt resists easy categorization, being multi-faceted, and having different characteristics and attributes at different times and in different settings, depending on who is measuring, and from where they are looking. Luckily, abstraction is my chosen art form, and building characters that are easily readable is a skill fundamental to my nature, almost as much as design-sense comes naturally to those who can afford Adobe Creative Suite. Without too much beating about the bush, I’m going to weave a little narrative about design-fiction; just a couple of multi-touch gestures on our collective interface here.
Let me begin by unilaterally defining design-fiction as the theory and practice behind conflating design, “building things that exist”, with fiction, “making up shit that doesn’t exist”. Design-fiction–either through its own limited fictional proposition or on the back of pre-existing works of fiction–links a fictional narrative regarding a proposed object, with some image, shadow, ghost, dream, or otherwise hologrammically-real design of that object. It could be a mock-up of a car from Blade Runner, it could be a functioning hologram like in Star Wars. It could be the proposed features of a cell phone that could exist, if only the technology were available as specified. Or it could be the working prototype of something entirely useful, if certain fictional conditions were true. Most generally, design-fiction takes “the future” as the generic narrative for its activity, and uses only enough fictional glue as is necessary to prop the designed object up upon that plane. No doubt, the makers of design-fiction experience a bit of perceived freedom in this activity. With this tool, they can give context to design ideas that wouldn’t otherwise be taken seriously. Fiction was something that reality merchants used to avoid, but now it is a new territory, just waiting to be settled. The designers and engineers, after decades (or centuries, depending who is doing the counting) of attempting to maintain their privileged control over the domain of reality, have suddenly noticed that there is an entire new world available in the realm of the unreal, and are building new colonies as we speak to tap these fictional deposits.
The resource of fiction has proven invaluable to the design community. It is a fertile land for farming new ideas. It is a forest of raw timber, just waiting to be processed into something profitable. It is a mineral resource: a treasure trove of value just underneath the soil, which the natives refuse to profit by, at least until they are put to work mining and smelting it to store and back the value of the new economy of this land, in which fiction creators are now lucky enough to participate.
We, the fiction makers, used to do simple arts and crafts. Little stories, films, and comic books. Did you know that when we used to be able to freely hunt the elk of imagination, we’d use every part of the animal? We’d use the hide for plot, the bone for characters, and the antlers would be our lifestyle. (We’d even eat the genitals, for the sexual content we believed it imbued in our fiction.) We had a true respect for the environment of fiction, when we lived in harmony with its spirits. But that time has passed, and we’ve been woken up to the new economy. Now we sell to the tourists along the highway, and if we’re lucky, get a job in design-fiction’s factory lines, hopefully with enough time to still practice the fictive arts around the fire, at home in the evening. We show off the goods that we have as the designers come around on buying tours. A positive nod from a designer, a mention in a bibliography or a name-drop in a project… well, that could make a career for one of us. Our fiction could be discovered, and we could be whisked off to the lab, to have our fiction milked for years-worth of homogenized product-fantasies, and our genetic material cloned into sterile keynote after keynote. If we are good and docile, we might even find a privileged pet position as “Director of Visionary Hype” at some publicly-traded corporation. We could be the monkey that gets to go home with the scientist.
Today, the magic no longer exists in our fiction, but in what they can do with our fiction. By the manifest destiny of design, the wonders of the future have been created in real life, with the subjugation of fiction to the anvil of reality. All classes have indeed benefited from this abundance. What wonders we have, on the bleeding edge of this economic extraction! We have “cyberspace”. We have virtual reality. Augmented reality. We have billions of phones that would be no more than simple radios if not touched by the magic hand of design, transmuting them into “cyborg” appendages, and we celebrate them for the virility they imbue within us. The value of everyday things like touch-screen interfaces, environmental sensors, and vehicular transportation increases exponentially when inseminated with “design-fiction”. It is the ultimate gamification, the hand of design-fiction, turning what would be ordinary stuff into exploding, plinging, gold coins, making all of technology and fiction seamlessly function For The Win. What once was merely the artistic present, is now the valuable future.
Cue the Disney-produced GM animation. Or rather, cue the Vimeo cut. Or even better, just play the entirety of Minority Report. Or, let us crowd-source a film version of Neuromancer, so we can slip once more into a sweet visual fantasy dimension, of endless flowing tides of VC and Kickstarter love and dollars.
I stretch the truth a bit, of course. Because I am a writer, and this is what I do. I make stuff up, at least to a certain degree. I invent worlds that don’t exist, for other people’s amusement. I simplify and I abstract to make a point, and to write something hopefully concrete and understandable. I draw the lines that no one else is willing to draw, and then give it away free: my own little bit of folk art. To get these bothersome ideas out of my head, and onto the web. Just doing my part, as a serf of fiction. Carrying my little crowd-sourced bag of fictional dirt up the wall of the pit mine that is the internet.
But I must answer for my quota of cotton; I need to bring you something for re-sale, and not just my little straw men. I can’t just spin fiction off into the wind, and so it must mean something. So I must ask, seriously: when it comes to the reality of design-fiction: what is it that we are doing here? How is it–and why is it–that fiction is actually being taken “seriously” when it is conflated with cool little technological gadgets, with visionary architecture, with high-profile names in the design world? Why is it only now that “fiction” is allowed to become almost “real” when printed on a design pamphlet or wired to an Arduino board, minted into the coinage of design-fiction? Should we who create fiction accept this colonization? What was fiction before design-fiction? Is design-fiction merely the modern extension and the next prototype of fiction: the future of fiction?
It seems that many people thought books and literature were only ever entertaining side-pursuits in our cultural history; that literature only came close to science in the form of library science. But fiction has always been a part of historical reality, long before design-fiction so kindly discovered the power of future-affirmation within it. Fiction has a very human purpose: the singularly important task of assembling what I would call a “mechanism of desires”. Fiction expresses the raw, chaotic power of human life through its material components. Through its own technology of imagery, thematic archetype, language, and other media forms, fiction expresses the depths of our species’ life in the continuum of past, present and future, and indeed, it is the only way we ever have. We talk about ourselves via the form of literature, or fictive writing, and also in music, film, art, and any other expression in which we might be able to conceive or perceive a narrative. Sure, often it is, strictly, “made up”. But this is the creative element–in order to better express those dark human desires underlying our societies, to project the hard-to-define emotions that pulse within our living existence, we must not be constrained to the plane of reality that those in the physical sciences hold themselves within. And in this way, fiction is entirely real–as real as emotion and thought, as real as our egos, as real as the mutable species-entity known as “humanity” that unites all of us with a similar genotype. It utilizes as its energy the chaotic reality of human life, and constructs a branching, cultural pipeline for this energy to flow within. And all this time, you thought you were just reading words!
Apart from this deep, underlying function, fiction is also useful for a great many other things as part of its expressive nature. We’re aware of the general humanistic good of consuming fine literature, of the entertaining feature of films, of the social aspect of music. Fiction can motivate and inspire humans to “real-life” activity in a variety of arenas, and physical design and technological invention is surely one of these. But over and above inspiration, design-fiction’s functionality has what could be considered to be a more insidious mechanism.
What is the purpose of attempting to design a cyberspace deck? What do we gain from building a Minority Report display interface? Why work on a product that only will ever exist within a story, pre-existing as separate narrative, or written specifically for that gadget? When we assume the design-fiction mantle of Future-Vision, what is the motivation? It is four-fold: 1) We believe these devices would be cool or otherwise meaningful in real life. 2) We believe they would perhaps be successfully marketable products, if they could be created. 3) We want to see if it can be done. 4) We buy into the fictional fantasy world of generic future-tense, and we commit to design-fiction as a way to express our mental investment and solidarity with that forward-leaning worldview. These reasons all have a common thread: once a technological gadget can be identified in a fictional way, a part of us wants to port this fiction to reality.
These are the reasons behind the majority of design-fiction, and as such, design-fiction is no more than steampunk. I don’t intend to drag steampunk through the mud by association, either; steampunk is a fine hobby. There is no reason not to port fiction to reality, as a prop. Play-acting is a form of fiction consumption, and always has been. A prop, just like its progenitor the classical theater mask, is simultaneously real and not real. But design-fiction is kidding itself if it believes it can simply make the fictional real, to make it anything more than a prop. To do so is nothing more than gluing gears to vests for sale on Etsy, to sell shit by calling it Shinola.
Play acting is all well and good, but when the props are treated as real, there is a psychotic sort of commodification underway. The psychosis is a disavowal–a forced rejection of the entire fictional mechanism except for that one value point, “to make the future real”. It is a cauterizing excision of a segment of the fiction, cut out and fused into an independent object with only one quantifiable dimension. Ripped out of its context, the purpose of fiction as a whole is conveniently forgotten, and the gadget object is reduced to a commodity, existing only in terms of its market value. The expressive component of play-acting is dead. Design-fiction is a fetish pushed to the point of absolute objectification; it is no longer a node of pleasure, only a dried and homogenized portion of the original fiction, ready to be sold in consumer-ready packages. The future is no longer a vanishing point of progress in a real-unreal network of invention and art, but a quantified MSRP. It is to reduce all speculation to the assumption that what could exist must exist, and would, in existence, be valuable. It is to make this supposed value the end-all of all creativity. You can hook a disembodied dog head up to a blood pump, and watch it try to live. But why would you do that? Design-fiction has such questions to answer.
We don’t celebrate Neuromancer because it contains the idea of cyberspace; we celebrate the idea of cyberspace because it is part of Neuromancer. Neuromancer is less about the actual proposition of a virtual realm called cyberspace accessible through communication technology, and more about the feeling of micro-gravity. It is about the human wish to fly. Cyberspace gets the press, because it is an easily identifiable term, and not a more ethereal thematic concept. The coined phrase is its own commodity value. We recall that the end of the book takes place in earth-orbit, as the cowboy of the virtual space is forced by physical circumstances to take his metaphorical combat into the world. The book is about dimensions that are unreal, and no less real. It is about manufactured space in general, and the new physics that we must learn to live within. It is about the new thermodynamics of information, and such immutable laws that would birth the sublime triple point of black ice. It is about the life that develops in unreal physical environments, life that is both human, and non-human. In the time since the book was written, the Internet has come to life. Cyberspace is now an actual thing, different from the cyberspace in the book. But the human desire, and ultimately, the need to fly through our invented territorial realms is still real, both in reality and the original fiction.
Design-fiction reduces the mechanism of fiction to one more corporate R&D department, convinced that its products are something more than just products. The fictional, thinner-than-thin, design-fiction smart phone is a product of dimensional flattening, reducing the real environment of information technology and communications to the point at which it is just another virtual icon, that we flick across the surface of our real phones despondently: the killer app of the week. Such so-called “fiction” downsizes the network assemblage of human creativity and desire-engineering, replacing it with the boring repetition of the start-up model. How it works and what it does is less important than how quickly it can be pushed to market, or more likely, to the blog. It minimizes the desire that drove creativity to express itself through dynamic fiction into no more than a meter of quantitative investment and click-through interest, to be channelled however one likes for best returns. So you’ve stimulated the nerve endings with desire for a phone that will never be sold. Its creative output is made-you-look. The fiction might as well have never existed, and all that was manufactured was the lie. It’s thinking you don’t have to feed your dog as long as you keep ringing the Pavlovian bell. It’s inventing the Happy Meal toy before shooting the film. At best, it’s bad fiction. At worst, the most you are affecting your audience with is lead poisoning.
Design-fiction would have you avoid the vast mechanism of real fiction, and invest in what is made up as a secondary commodification. It would have you forget about the book, and concentrate on the deck. It would sell you an Ono-Sendai T-shirt, not to bring the book to life, but in order to brand you into the fan club. The book is alive already, and its position as a classic work of fiction is the proof. If there was a cyberspace deck, it would be a piece of memorabilia to put under glass on a shelf. Something to sell online, if you were lucky enough to have an actual box to ship. What would be the purpose of a cyberspace deck today? We already have the interfaces that best conflate our needs to connect to our networks with the technology we have available. Design, without the fiction, is already delivering on the dream. It may be an interesting exercise to consider why we have smart phones rather than cyberspace decks–but this is a theoretical exploration between the work of fiction and reality, and something for writers to bother themselves with, rather than designers.
And then on to the next one. Remake each book into a film, and each film into a phone. What can you quantify the rights to, and convert into a design-fiction option? How about Minority Report? The Minority-Report-Interface (MRI) is now a completely isolated, flat piece of fiction separate from the fiction from which it is derived. Amputated from the work of fiction, in which it is an important image of the thematic import of the work–a larger theme of truth, evidence, and the foreseeable future–the device itself is now only a milestone about technological progress. When will we have the MRI? When, when, when? And how much will it cost? The future will only be here when we can gesture in space and construct a narrative of the future at our whim. But this forgets the point of Minority Report as a work of fiction: the idea of the work is that the future cannot be predicted, and cannot be constructed at our whim. In our manic gesturing towards the gadget-of-the-future, we’ve missed the whole point. The reality of fiction has been replaced by an urge towards false, isolated commodity.
Are objects pulled into the “real” world and isolated from the assemblage which invented them even to be considered real? These simulacra of fiction seem to double down on the fakery. In fact, the entire woven mechanism of fictional meaning from which these objects grew before they were harvested like clones (the question of the worth of technology as an element of human existence, from which we get the fruitful discipline known as Science-Fiction) seems more real. In open speculation and the intricate programming of fiction, I see more reality than in the commodification of potential-product. What is more real: the cyborg in the horror film, or the hardwired, uncanny horror that causes us to invent cyborgs in fiction, to keep us looking even though we wish we could turn away? What is more alarming–uncanny human subjects, on the border point between humanity and object, or uncanny objects, on the border point between creativity and capitalistic exploitation?
But let me call curtain. Enough with my own play-acting here, and philosophical sleight-of-hand. Let me end this fictive fantasy I’m spinning, and return to reality. These post-colonial memories–they aren’t yours. This was a nightmare, from which we all can easily wake up. Fiction and object design are both equally real. They are all real, but only together, united as they always were.
I’ve been giving design-fiction an especially hard time, trying to seed its practitioners with a horrible dream, in which they are the enemies of the future, rather than its saviors and heralds. As the brainwashing super-villain in this narrative, I speak for an a-vocal, imagined constituency against a trumped-up enemy. We designers of fiction (not designers of design-fiction) are, in general, so pleased to finally be taken seriously that we almost forget to take our newly discovered importance as an insult. And so, I’ve lobbed the perceived insult playfully back towards my characterization of the design-fictioners, if only to have them finally look up into the sky for what might one day actually condense in reality with enough weight to hit them in the face.
Behind this little bit of territorial posturing, the relationship between the real and the fictional is the same terrain that we’ve always traversed. Our ideas, both of fiction and of physical invention, grow as nodes in the network–starting independently, connecting, separating, and eventually fading in importance. The lasting effect of anything, technological or artistic, is its ability to network with everything else in a connecting, transmitting relationship, rather than as a cancerous, pooling sink of resources. Both fiction and reality are simultaneous. Isolation and consolidation of nodes will occur, and there is nothing wrong with picking particular pieces of fruit as they grow. But reality only occurs simultaneously amid real-world praxis and the extensive networks of the creative production of fantasy. Keep your hammer in one hand, and your dreams in the other.
And in the end, recognition of this truth is my fantasy of the future. We who create fiction don’t have to view the design world as an expropriating, gentrifying force. We can work as a team with the designers. The designers are no doubt just as interested in our characters and the overall fictive headspace as they are in our marketable gadgets. And the world of engineering can be the same fertile ground for creativity, as fiction can be for design. They can let us into their studios and laboratories just as we let them into our heads. This was the origin of Science-Fiction, of course; and it is the continuing legacy of speculative fiction of all categories. Writers, artists, and creators of all media continue to be informed by the world around them just as we inform it with our work, and in this society of continual connecting networks, we ought to turn up the bandwidth, and upload as much data to the commons as we realistically can.
But in that effort, design-fiction: I urge you to remember who constitutes reality in this relationship. I may write on a computer, and access the cloud through the product of your brilliant, visionary interface. But your imagination, your creativity, your humanity–you read these inscriptions off of the broad back of fiction. This world and its aspirations were built by fiction, and fiction keeps it. Remember, design-fiction, that when you dream, you are in our hands. We are you, while you sleep.
If I tried to combine every thought that came in my head while watching this video into a coherent essay, I would have something book length, so instead, I’m just going to spit it out.
Wow. Mind blown.
First of all, great job, Grand Rapids. Sincerely. The city put together a mammoth effort, even without the help of Kickstarter, and came up with an Internet video that was not only successful, but put others in the category to shame. I tend to think that with art of a more casual sort, if you don’t have a concept that in itself is going to knock it out of the park, you should at least go big on the effort. Done and done.
And in throwing their hat into the meme, white America reminds the internet that it exists. The internet is not just pro-democracy fronts and third-world music blogs, folks! It is possible to have a good old fashioned Main Street parade online. No taco truck reviews, no workers’ rights, no sex, no militant screen printing hacker collectives. Football, American made cars, and, well, apple pie.
I don’t say this simply to be facetious. Main Street America does exist, and it would only be a publication as idiotically outdated as Newsweek (see link for back story on that) that thinks it is somehow more an arbiter of taste, more up with the times and pace of the internet, than a city on a river in Michigan. All those “real Americans” you saw in the video have Internet connections, and you better believe they cancelled their subscriptions to Newsweek, if they even had any.
And isn’t it somewhat refreshing, to see the meme of America rescued from hate-filled invective, pulled out of the politics for one minute, to mug for the camera in a way that makes us seem welcome in “real America” once again; to make Chambers of Commerce look like nice community organizations, rather than the money behind union crushing, the propping up of corporate property rights, and anti-gay legislation? I mean, it is almost enough to make me forget the experiences I’ve had being called “fag” while crossing Main Street, USA, and make me think about living in the Midwest again. Almost.
Not that any of these nice folks in Grand Rapids would do something like that. They all look like nice people, with nice lives. And with the sort of effort necessary to put a project like this together, and the goodwill and support for the community provided by local businesses, lawmakers, and everyday people alike, they might have a different sort of town, one that defies the norm: a place where people band together and form a community (indeed, the only thing that has ever formed community), unlike many so-called defenses of “family, community, and small business”.
And so I wonder if, after raising $40,000 to make this video, the next weekend they all got together to put in bike lanes. Or to build low income housing. I just wonder, I don’t mean to imply that they should have done this instead. They can do whatever they like with their time, and there’s absolutely nothing wrong with making a video, any more than there is anything wrong with putting in a statue of Robocop, as elsewhere in Michigan. But a community is defined by what its people choose to do with it. And a definition is not only what is said, but also what is not said. This might be the cornerstone of self-expression, whether you are a city or a person, or any other entity.
This juxtaposition between the little that is said and the lot that isn’t said is not an accusation in my mind, but when I watch two videos back to back (as the vicissitudes of the Internet decided for me), the question is automatically posed. And what is the question, anyway? I’m not entirely sure. But when there’s a nice singalong going on in the streets of one town, when somewhere across the world there are beatings and worse going on in the streets of another town, there should be a question asked, shouldn’t there? Even if we can’t quite bring it to our lips.
I wonder if, maybe not unlike in the classic song Grand Rapids decided to sing, this video could be the moment that something died. Not in a fiery plane crash, of course. But in the sense that when something is memorialized, its reality is somewhat ceased. You don’t plant a gravestone for something that is still living. Don McLean was reacting to the 60s with nostalgia for something that people wanted to believe still existed, even though that sort of Americana was now a ghost. The ghost of Main Street America, in a world of Tahrir Squares. And yet they can still sing this song, with help from their platinum sponsors. That’s something, right? Isn’t it? To whom?
Lastly, in a fit of SF splendor, I imagine this clip resurfacing after a number of years, and discovered by some disaffected youth, longing for the way the continent “used to be”. In a saga reminiscent of Damnation Alley, they set off across whatever this terrain will look like then, attempting to find the promised land of Grand Rapids. What is it that they will find? Probably not radioactive, mutated cockroaches. But other than that, I can’t say that I know with certainty in any direction.
There is an article going around about Chinese prisoners working as World of Warcraft gold farmers. It has the hallmarks of a hot Twitter link: World of Warcraft, new virtual economies, China, and social outrage. But surprise of surprises, this retweet fever is… well, xenophobic. As it turns out, when viewed from a perspective of profit-taking off the backs of the workers, US prison labor is far more exploitative.
First of all, the claim in the Guardian piece is that the guards make £470-570 a day off the farming. I’m not sure if this is supposedly per prisoner, or for all the prisoners, but either way, it appears to be a disingenuous way of presenting the figure. This article claims that the average monthly wages of a “free” gold farmer are about 145 USD a month, working 12-hour shifts, or 40 cents an hour. This source claims the average Chinese gold farmer makes 0.30 USD an hour, while management makes about $1 an hour gross off that worker’s labor. So, with 300 prisoners (as cited in the Guardian article) working 12-hour shifts, we could imagine the prison bosses are pulling in $3600 a day gross if they are at the top of the management structure, and $1080 per day if they are merely reselling the prisoners’ labor. Either way, we see the £470-570 sum is closer to the combined profitability of all the prisoners (subtracting subscription and computer costs), and not the work done by the individual prisoner.
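That back-of-envelope arithmetic can be checked in a few lines (a rough sketch using the per-hour figures cited in the linked sources; the pound-to-dollar conversion is set aside):

```python
# Figures cited above: 300 prisoners (per the Guardian), 12-hour shifts,
# ~$1.00/hour gross to management, ~$0.30/hour paid to a "free" farmer.
PRISONERS = 300
SHIFT_HOURS = 12
MANAGEMENT_GROSS_PER_HOUR = 1.00
FARMER_WAGE_PER_HOUR = 0.30

# If the prison bosses sit at the top of the management structure:
gross_as_management = PRISONERS * SHIFT_HOURS * MANAGEMENT_GROSS_PER_HOUR

# If they merely resell the prisoners' labor at the going farmer rate:
gross_as_reseller = PRISONERS * SHIFT_HOURS * FARMER_WAGE_PER_HOUR

# One prisoner's labor, even at the full management rate, brings in only:
per_prisoner_daily = SHIFT_HOURS * MANAGEMENT_GROSS_PER_HOUR

print(gross_as_management)  # 3600.0 dollars/day
print(gross_as_reseller)    # 1080.0 dollars/day
print(per_prisoner_daily)   # 12.0 dollars/day
```

Either gross figure dwarfs a single prisoner's $12-a-day output, which is why the £470-570 reads as a combined sum rather than a per-prisoner one.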
But even now that we’ve straightened that out, how much money is that, really? Gold farming only exists because there are economies in the world in which 30 cents an hour is a wage that someone is willing to work for. It is widespread in China, because of the size of the population and what that money will buy. In the United States, even working as an illegal farm laborer for half minimum wage is more than ten times that rate.
But don’t trust me: let’s look at some statistics. Federal minimum wage (the absolute minimum, as some states mandate a higher wage) is $7.25 an hour. The lowest minimum wage in China (China’s minimum wage is set regionally, not nationally) works out to 33 cents an hour, figured with 12-hour days. So gold farming in China is actually almost as lucrative for a worker as a minimum wage job, whereas in the US, it doesn’t even come close. This is why the Chinese bother to do it, whereas in the US, we hope for jobs in food service. Keeping in mind, of course, that “minimum wage” is an abstract figure in itself.
As it turns out, the US has the highest prison population per capita, at 756 prisoners per 100,000 people. We also have a tried and true prisoner labor economy. For example, in Arizona prisoners work as farm laborers, earning $2 an hour, 30% of which goes back to the prison for “room and board”. That is pretty good pay, considering that prisoners are also used as “out-sourced” call center workers, for an average of only 92 cents an hour.
Now, there are two ways to look at this.
One: the Chinese gold farmers are probably (the article is not clear) paid NOTHING for their farming. The prison bosses pocket 100% of the gross after equipment, with zero labor costs. The workers are making 0% on their labor, and 100% of what would be their minimum wage is being stolen from them on account of their incarceration. Whereas US prisoners keep, at worst (figuring 92 cents an hour), about 12.7% of what would be their minimum wage, with the remaining 87.3% of their due as workers taken from them on account of their incarceration. In other words, it is better to earn something rather than nothing, and the American prisoners are doing better than the Chinese.
On the other hand…
Two: The surplus value is what matters. It is not so much the percentage that those workers could have earned at a “real” job farming, gold farming, or whatever. It is the work that their bosses are getting out of them, and in this case, the money they save by using prisoners. It is the comparison between the money the bosses might have spent to pay free workers, versus the money that those bosses save at the expense of their workers’ incarceration. In this case, per working hour, the Chinese prison bosses are earning $1 off each worker per hour, because this is the largest price they can get for the farmed gold, even when paying their workers absolutely nothing. Meanwhile, the American boss who out-sources prison labor is earning a full $5.25 extra per working hour in pure profit by skirting minimum wage requirements. That is on top of the profit that boss would already collect, from phone orders of products, or harvested produce. In avoiding the necessity to pay workers a minimum wage, US bosses pocket 5.25 times the surplus value per prison-work-hour that their Chinese colleagues see from the gold farming scheme. The Chinese prisoner may get the shaft when it comes to being paid. But as far as saving money on labor goes, the US prison boss is doing much better than the Chinese prison boss.
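The two approaches can be put side by side numerically (a sketch using the wage figures quoted above; the $2.00 Arizona farm rate stands in for the wage the US boss actually pays):

```python
US_MIN_WAGE = 7.25       # federal minimum, dollars/hour
US_PRISON_WAGE = 0.92    # out-sourced call-center rate cited above
US_FARM_WAGE = 2.00      # Arizona prison farm-labor rate cited above
CN_BOSS_TAKE = 1.00      # gross per worker-hour to Chinese management

# Approach One: what share of a minimum wage does the worker keep?
us_share_kept = US_PRISON_WAGE / US_MIN_WAGE   # ~0.127, i.e. about 12.7%
cn_share_kept = 0.0                            # unpaid prisoners keep nothing

# Approach Two: what does incarceration save the boss, per worker-hour?
us_boss_saving = US_MIN_WAGE - US_FARM_WAGE    # 5.25 dollars/hour avoided
cn_boss_saving = CN_BOSS_TAKE                  # 1.00 dollar/hour, at most

ratio = us_boss_saving / cn_boss_saving        # US surplus vs Chinese surplus
print(round(us_share_kept, 3), us_boss_saving, ratio)
```

By the second measure, the US boss captures 5.25 times the hourly surplus of the Chinese gold-farming scheme, which is the comparison the rest of this piece leans on.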
While our first instinct might be to compare the two instances as in approach One, it is crucial that we compare them by approach Two. A prisoner is a prisoner, but the value of that prisoner to the economic system of industrialized prison labor shows exactly what stake that system has in keeping that laborer a prisoner. A US worker in prison is worth 5.25 times as much to the economy as a Chinese worker farming gold in prison. The Chinese prison bosses would make a little less if they couldn’t steal free labor from their prisoners. But that is small potatoes, compared to what US corporations make off their prisoners. My instinct is that the Chinese gold farming bosses are working on their own, just trying to extort a little bit of labor from their charges (the prisoners also officially work making products for export, which I expect are far more lucrative). To compare gold farming, a little bit of exploitative pocket-money gathering, to the worldwide system of prison labor, is merely to make an internet-ready article, and not to even begin to comprehend the injustice done to incarcerated workers by surplus-value economies.
The real story, therefore, is not that it is so crazy that in a Chinese prison, prisoners are made to do some meaningless task for their bosses’ benefit. When measuring the profitability of the prison-industrial complex within the working economy, the US is still #1, baby.
Oh, and the story is also that we love to imagine China is the great economic Satan. But the US has been outsourcing exploitation since there was a trade deficit, and extracting surplus value from workers since time immemorial, so don’t think we’ve forgotten how to fuck over the lower classes.
The zine I proposed to make is made.
The title is “Apopheniac Communiques”. Along with seven fantastic contributors, I’ve put together 28 pages of art, poems, short stories, and commentary. It’s full of low-fi awesomeness, pasted together by hand in the “traditional” zine style. Is there a pattern? Is there a theme? That will be for the reader to decide, but suffice it to say, we’ve already put a call in to the proper authorities who deal with such miracles.
In keeping with the classic tradition, I’ll be offering copies in the “mail-art” format: for $2 in either fungible currency or un-cancelled postage, I’ll mail you your very own printed copy, on cream-colored paper, in beautiful 4.25″ x 7″ format.
Mail those monies here:
4835 SE Sherman St.
Portland, OR 97215
AND… because it’s totally crazy, I’ll accept Bitcoins as payment. In fact, I’ll let you name your price if you choose to pay in BTC. Email me to get my public key and to give me your address.
The zine is licensed under Creative Commons (Attrib-Comm-Sharealike). And hey, if you just want to see what it looks like, even though it would never, ever compare to having a real life zine in your hands, here is a link to the full PDF.
A long article has been making the rounds, which at first catches the eye because of the copious (if misdirected) use of a great many technospheric buzz words, popular smart phone app titles, and a smattering of post-modern philosophy, but then when unpacked devolves into all-too-typical post-Baudrillard simulacrap. BUT, just because it is misdirected doesn’t mean that we can’t learn something from it, and take this opportunity to redirect.
The author of the above has a problem with a particular sort of digital photo. It is a sort of digital photo that somehow violates the glorious rules of reality, by mimicking something from a time that it is not. Time has come unstuck, and not in a good way. A bad, fake, inauthentic, faux-vintage way.
It might sound similar to another buzz word: “atemporality”. The author of the above link didn’t use the word atemporality. But the words he used are directly responsible for the sort of miscommunication that obscures what atemporality is, and how it works. His notion of the faux-vintage, meager in depth as it is, is the scum that floats on top of atemporality, and keeps us from seeing the clear waters underneath. I hope to skim the scum off in this essay.
Part of the trouble with a concept like atemporality is that it sounds right. Much like post-modernism, this makes it easy to put out on the table like a bowl of butter pats, without taking the time to think about what it is we’re having for dinner.
It’s not such a big word: “atemporality”. We know what that means, right? Something about time getting all weird on us, and the past, and the future, and maybe the sort of technology through which we imagine both the past and the future. Sounds good… type it up.
But atemporality is something with more nuance than time-getting-all-old-timey by way of a digital picture. To define it myself in short terms: atemporality is the act of refuting the order of temporality, through the means by which temporality is usually applied. We all use an interior sense of time, or temporality. It’s, you know, Time! We keep track of the order in which things happen, and form a baseline t axis by which we keep track of the world. (For a greater exposition of this concept, see Kant, Bergson, Heidegger, Deleuze, and many others.) Temporality: we know the past, and we can only guess at the future; we know something just happened, while other things are mere traces in our memories; we “remember the 80s”, even though what I remember as The 80s no doubt differs from your memories of it, and we can debate when the 80s supposedly began and ended; we may remember last Tuesday, but the details could easily be suggested to us, and our “memories” might be proved false once we see the pictures. All of these things are involved in our sense of temporality: a big, flowing river of time in which we float.
Atemporality is the point at which this temporality begins to break down, though still in a temporal way. We still have a sense of time, but the wide span we call “history” begins to get weird loops, whorls, and whirlpools in it. The usual cycle of fads booming and busting grows eccentric, and spins oddly off-center. The idea of what is “current” begins to break down. We have trouble remembering if something used to be common a long time ago, or if that was today but maybe in Japan, or if maybe someone simply suggested that it would happen soon in the future. The river of time spreads out into a brackish salt marsh delta, and we know time is still flowing, but we don’t remember where it was we were trying to go. Were we trying to go? What does that even mean?
Maybe it’s because of the internet, maybe it’s because we all carry computers in our pockets, or maybe it’s just because there are so damn many of us we can’t see over the heads of our immediate friends to get any good “big picture”, and mainstream media is only as existent as the last meme that we saw. But there are people who aren’t old enough to know that record players went obsolete, out there buying records, as if there was nothing odd about it in the world. Wearing Victorian fashion is now a subculture, not an attempt to mimic something so uncool as “real life history”. And, pursuant to the article I had linked to at the beginning of this essay, cell phones can take pretty pictures with weird, livid color achieved through simple algorithms. No big deal, except that someone thinks those digital pictures are “old”. And what’s more, “fake old”.
Using a word like “nostalgia” is such a desperate sign of being out of touch, out of date, and so awfully-temporal in an atemporal time. “Nostalgia” assumes that there still was a temporal order in which someone could purposefully choose to “rewind”. It implies someone wants to “turn back a clock”, as if all our “wrist watches” weren’t synced to regulated network time via cell phone towers. Hilarious! You are the Encino Man of epistemology. Accusing an iPhone app of being inauthentically faux-vintage is about as cool as reminding your kids that some dead guy originally recorded the song being sung on American Idol way back in the 20th century. Pipe down, old man! The only people worried about what is correctly nostalgic or otherwise faking it are people who, for some reason, need to cling to a sense of permanent history that is not fluid, crowd-sourced, and always on instant remix mode. They probably still buy paper encyclopedias.
But the kids aren’t idiots, just because they won’t buy into your historical temporal-subscription business model. With a single Google search, anyone could tell you more about Kodachrome than you could, even if you used it yourself for over twenty years. As if they didn’t know that an antique is found on eBay, while up-cycled vintage is found on Etsy. They haven’t forgotten history. They’ve Gutenberg’ed history, if you pardon the zeitgeisty historical reference. Rather than re-write out the Old Story again and again in expensive, illuminated manuscripts, they’ve made their own printing presses, and they are distributing their pamphlets in the street. Or, if you prefer, they’ve pulled letterpresses out of the scrapheap, and they are printing comic books/novellas/vintage stationery that re-writes the story of Gutenberg as if he were an out-of-work Ph.D. grad with a blog, or they’ve 3D-fabbed lost typefaces reassembled from scanned Library of Congress volumes, or they’ve… dammit, I’ve lost the metaphor, but that is the point. Atemporality is not your 20th Century post-modern critique. It is no longer enough to wryly point out a bit of irony that no one else caught, and think yourself Zarathustra for doing so. We leverage the networks, man. We access all recorded time periods with equal veracity and reach, until time periods cease being temporal. Anything that we can do with anything is only Now. Any of us, all of us, one of us. The temporality that anchors us to reality is atemporality.
When I say kids, I mean me, you, any of our contemporaries. The cutting edge is level, because the most amount of experience any of us can have with brand-new technology is none. Not all of technology is brand new, but that’s why we network. If someone finds a swell photography blog, or a scanned guide to restoring old typewriters, we pass it along. The best way to learn is to find someone who knows what they are doing, and help them. We’re all kids about some things, and many of us are experts in at least one thing. We come to the networks with certain abilities, certain likes and dislikes, and all the many facets of our personality. When we connect, reality happens. We’re all faking it to a certain degree, and all of our fabrications are realer than we know. There’s not a single person who isn’t surprised when their ____ goes viral, because the only thing one can attempt to understand about viral media is the ridiculousness of the claim that one has identified and understood an epistemological hierarchy of network culture. “Pop culture” didn’t go obsolete, it splintered into more pieces than anyone can count, keep track of, or catalog and interpret. There is no such thing as un-cool. You just haven’t found the other people who think it is awesome yet. The topology of culture is similar to the technology that propagates it, in that culture only works. Technology and culture do not not-work. There is no plateau other than the niche, and if something is surviving, it is because it is crossing somebody’s spark gap. If something is replaced by a better tool, that former tool is either sold online or goes into the free box, where it is quickly grabbed by someone who could totally use it, or take it apart and make it into something else.
And this is how you know that the sort of person who uses the word “simulacra” with disdain doesn’t use tools, and only inhabits the realm of ideas as one inhabits a titanic, steam-driven airship; a fictional craft that never lands, never makes contact with the industrial revolution changing the world down here on the surface. There is no “inauthentic” in the machine shop. There are only tools, better tools, and tools that need to be fixed. What is it that Instagram does as a tool? It makes cool pictures. What do the titles of the filters mean? I don’t have the first idea. I swipe at them with my thumb until it looks sweet, and then I send it to my friends. Then I put down my iPhone, and go back to trying to un-stick the shutter on an old medium format camera. If I can make it work again, it might take cool pictures. And if I left it in that flea market where I found it, some asshole who uses words like “authentic” probably would have pulled it up into his airship and stuck it on the wall of his wine bar. I use all kinds of things. The reel to reel is next to the turntable on which my laptop sits, which is processing scanned 35 mm slides for filtering and reprinting, so I can reproject them with an overhead projector, and trace over it on a piece of tossed-out plywood. Where is the authentic in my living room? I couldn’t give a shit. Where is the “era”, the “epoch”? I couldn’t tell you. All of these technologies function today, and work Now. I can tell you that my 6-year-old laptop is probably more obsolete than the reel to reel player, because the reel to reel works like new, whereas the laptop often struggles with simple tasks.
Anyone offering authenticity has something to sell you, and likely something you do not need. They try to convince you that the way you are doing it is not as “real” as something else. Funny–because reality was just fine before they came along. Before they tried to monetize a particular world-view, to increase the value of a certain temporal commodity by claiming to be the exclusive arbiter of what is authentic and what is forged and fake. And we wouldn’t want to fool ourselves either; this is a capitalistic world, and everything ends up bought and sold. Any particular atemporal trend will end up named, stamped into a commodity, and sold, until stretched into a thin veneer of shiny, zombified goo. But that’s okay, because we already have a friend that we met in a comment thread, that can get us that real shit. The Real Shit, because it is the stuff we want and nothing else, and because we’re getting it from the source that we know and trust. That is the network, and that is atemporality. All real shit. No authenticity.
I have an idea for an entertaining essay about Hitchcock films, so I’m compiling a list. I’ve seen a few of them in my life, but I’m by no means an expert. Perhaps you’d like to help?
I’m particularly looking for films in which part of the suspense is related to whether or not the murder has/will actually occur(red). The existential quality of a murder, in other words. When do suspicions graduate into an actual crime?
Films I have so far are:
North by Northwest
I know there are more. Got a suggestion? Leave it in the comments!
There has been a drama of mis-attributions on the Internet lately, which, if you recall the anti-Wikipedia-style hysteria of years back, would seem forewarned. But the dramatic element is that the loose, crowd-sourced, volunteer aspect of the Network has been exposing and solving mis-attribution errors, not causing them.
There was a Martin Luther King Jr. quote falsely attributed, or perhaps better described as misinterpreted, and then corrected.
The last might be blamed on the haste of internet users to re-post a message without fully reading it and/or verifying it. However, in an unrelated intrigue, Wilko von Hardenberg and Tim Carmody got to the bottom of a falsely-attributed Descartes quote, that not only has been live on the Network for years, but printed in several books, going back to the 1970s.
And then yesterday, in consideration of these events, I decided to repost a quotation mystery that has baffled me for years: that in the University of Minnesota edition of Deleuze & Guattari’s Anti-Oedipus, there is an endnote left absolutely blank, with the quote unattributed. William Ball jumped on it, and solved the mystery by figuring out that the quote was oddly translated, the citation left blank, the punctuation misprinted so as to obscure the context, and perhaps the passage creatively-recalled to begin with. Like that, my mystery fell away, like scales from the eyes.
Perhaps it is apt that we can watch these little sessions unfold over Storify, because it is not really the work of one person that uncovered the puzzles of these mistakes. One person could perhaps fix a mistake, not unlike editing, if he or she had the knowledge to rectify an obviously perceived error. But it was because these were mistakes echoed wide and far throughout the network, or because one person’s doubt could be shared and extended by other interested parties, that there was a conclusion to these, and a narrative of the puzzle could be established. They are dialogs. One person has a doubt, and expresses it outwardly. From the topology of their Network, no doubt established by a previous acknowledgement of similar interests, comes the response: yes, I share your doubt. Then the synthesis: let’s see if we can’t manipulate the Networks to find our solution.
As I mentioned to @exstasis, who pretty much solved my mystery single-handedly as I watched in awe, it was amazing that he solved the issue using only online sources that were readily available. Search engines, scans of books uploaded (perhaps with dubious copyright conformity), and the various versioning that the wide duplication of resources on the Network can provide. While the puzzle could no doubt have been solved with standard academic resources at hand, such as a good academic library, he didn’t need any of this. And considering the fact that we found no mention of the existence of the puzzle at all, it is entirely possible that no one has ever bothered to track down the solution. Therefore, the Network allowed a couple of “amateur” scholars to satisfy their curiosity, without needing to avail themselves of the resources of the standard fact-checking institutions. Those institutions that, through their mistake, created the puzzle to begin with; but we won’t blame them for that, as what all these cases make clear is that the Network is in fact equal to “higher” academia when it comes to creating these intellectual puzzles, as well as solving them. One wonders if, in fact, the Network has further merit in that not only does it allow access to anyone with the basic ability to connect and the will to participate, but the pace of both mistake and correction is incredibly rapid (perhaps related to the scale of participation as compared to academia).
All of this is fairly apparent to anyone who is more than a casual user of Twitter, or some other tight-knit soft-network.
But here’s what I wonder, and what I’d like to suggest. The theory of gamification states (my own generalization here) that assigning points within a reward structure can motivate behavior, because a person can more easily visualize their progress towards a goal. But perhaps rather than gamification, we should be considering puzzlification as a strategy for utilizing these sorts of soft-networks.
The difference is this: a game is designed to structure goals via a definition of quantified points and winning conditions. A puzzle, on the other hand, is itself a structure of a qualitative and logical quandary. A game can be cheated when the points structure is manipulated to achieve the winning conditions without necessarily achieving the goal. But a puzzle can only be solved, or not. However, a proposed solution to a puzzle can be at first accepted as seemingly correct, and then later found to be incorrect. A puzzle can be the goal, structured into part of a game. And in a sense, the strategy for winning (or cheating) a game can be thought of as a puzzle. But the difference is quantitative/qualitative.
To more directly contrapose the two: the game structuring numerous small problems together via a generalized quantitative network, the puzzle an isolated network structure of specific logical quandary. Both are ways of structuring our assessment of reality, and so neither is more “real” than the other. The facts of gamification are not about the ability to cheat, so much as what that ability entails. A poorly-described puzzle is in no way superior to a well-designed game. Nor is a properly-apportioned game necessarily worse than a clever puzzle. They are merely alternate ways of describing a goal, so that the mind can attempt to guess what its move should be to satisfy that goal, so defined.
Furthermore, I would venture to say that in addition to simply quantifying the issue, a game’s rules are more generic and abstract from the actual tasks at hand, bridging beyond one issue to a whole set of issues, wired in series as it were. While a puzzle, in addition to qualifying the situation, is specific and concrete to the issue, considering everything holistically. Once the puzzle is solved, that is it. It may be intricate in its layout, but a puzzle is entirely self-contained.
Game: Quantitative, Bridging, Generic, Abstracting, Network-Extensive, Structural Assessment
Puzzle: Qualitative/Logical, Holistic, Specific, Concrete, Network-Inclusive, Structural Assessment
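To make the quantitative/qualitative split above concrete, here is a toy sketch. It is my own illustration, with invented point values and checks, not anything from the essay: a game assesses actions through a generic running score, while a puzzle is a predicate that a proposed solution either satisfies or does not.

```python
# Toy contrast of the two assessment styles listed above.
# The point values and checks are invented for illustration.

def game_score(actions):
    """Game: generic, quantitative -- every action earns points."""
    points = {"tweet": 1, "retweet": 2, "correction": 5}
    return sum(points.get(a, 0) for a in actions)

def puzzle_solved(solution, checks):
    """Puzzle: specific, qualitative -- pass every check, or fail."""
    return all(check(solution) for check in checks)

# A game can be "cheated": points pile up without the goal being met.
grind = game_score(["tweet"] * 100)  # 100 points, nothing corrected
real = game_score(["correction"])    # 5 points, an error actually fixed

# A puzzle cannot: the misattributed-quote puzzle counts as solved only
# when the proposed source really contains the quote and predates the
# earliest misprinting (a hypothetical 1970s edition).
checks = [
    lambda s: s["contains_quote"],
    lambda s: s["year"] < 1970,
]
solved = puzzle_solved({"contains_quote": True, "year": 1965}, checks)
```

Note that the game's score function knows nothing about the goal, which is exactly what makes it cheatable; the puzzle's checks are the goal, so the only options are solved or not.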
Because these are similar ways of assessing problems, both have merits which are no doubt applicable to different situations. But to a soft-network such as Twitter, I think we should look towards the puzzle. I am calling it a soft-network because Twitter is not meant to organize any particular process or activity. Sure, it is based around 140-character messages, but clearly the point of Twitter is not simply to create 140-character messages. Twitter represents language itself, in a way. Language is for communicating, but that’s not all we do with it. We also grunt, express emotion, think, act, commune, organize, and many other things through language. With Twitter we send messages, but also network (in the verb sense), share, express, link, and a bunch of other things. It is because there is a very basic framework without explicit purpose that we are able to do so much with it, extending it outward from its premise. On the other hand, a hard-network is defined by specific tasks. The HTML structure of a website, for example, is designed to render information via a browser, and provide programmed functionality. There are different ways of doing this, and one can do an incredible number of things with such a structure. But it is a specifically-defined system and, outside of its core task, has no other function.
Hard-Network: Specific, Network-Inclusive, Rigid, Concrete, Defined Structure
Soft-Network: Unspecific, Network-Extensive, Flexible, Abstract, Interpretable Structure
And this difference is ironic, when it comes to interacting with these structures. Problems with hard-networks, those structures that are very specific, are perhaps best solved by quantified assessment. HTML ought to render fast and error-free, and be coded simply and quickly. With a specific structure to act upon, we can take an abstract and generic method of assessing those actions and still assess very effectively. On the other hand, with a soft network like Twitter, it is very difficult to generically assess a “winner”. Rather, with such an open-ended structure, it is better to assess our actions within it logically, only according to a concrete and specific set of qualitative parameters. Do you “win” Twitter by tweeting the most, or fastest, or having the furthest reach? It all depends on the specifics of the particular puzzle you are trying to solve. You can see with the traits I’ve identified above, that a puzzle has certain attributes of a hard-network, while a game has attributes of a soft-network, and yet I suggest they should be oppositionally aligned.
These interaction pairings are antithetical to how we might think of them. Wouldn’t a well-defined hard-network structure benefit from an assessment system specific and concrete to its limited definition? And wouldn’t a more flexible soft-network require a general, far-reaching assessment? The answer is no–because assessment is not about mimicking what is being assessed. It is about control through overlap. Reality and our conceptual schema are always, in a sense, in opposition. We can’t think that our mental conceptions of the world will ever catch up with the detail of the mechanisms of the world. Instead, we need to model and simplify. The best model is one that overlaps the boundaries of what it attempts to model, rather than mimicking the subject. It looks at the difference between the object and field, rather than the undifferentiatedness of the middle of the object or the edge of the field. If a system is limited, a generic model will cover more of the extent of the system. If a system is more fluid, specific samples will gain a better sense of what needs to be observed. The model is part of the system it observes and assesses, and therefore it ought to fit in as component, rather than attempt to draw a map of each grain of sand.
Consider again, these checks of attribution error conducted via soft-networks. Should we award each of these people who succeeded in correcting an error “points”? Why? What would these points mean tomorrow? What do they mean in terms of the errors themselves? The game, as it were, is not “winning the Internet”, as the joke often states. The puzzle was identifying and correcting an error that no one knew existed. These puzzles were each solved in their entirety, and no doubt many more lie out there waiting to be discovered. If we awarded these players a number of points, how would these points help them prepare for the next puzzle? On the other hand, if we congratulate them for solving a puzzle, we can trace the steps that went into the solution. We read back through the dialogic steps of the Storify, and see the moves they made. We don’t attempt to replicate these moves exactly, but we recall the strategy they imply: collect a network of intelligent, like-minded individuals; keep a sense of what search tools are more helpful; locate resources for finding illicit copies of otherwise un-retrievable texts; and when you think something is amiss, why not say so out loud, and see who responds? “TRY TO GET MORE POINTS” is not a helpful tactic here.
The club is the mediator or frame through which the music is communicated. The band literally plugs into the technology of the club in order to magnify the sound, turning a possibility into actuality, making what is heard by the musicians themselves accessible to an audience. People pay to see others believe in themselves.
So how did the attackers gain entrance? Around two weeks ago, Sony was defending itself against constant denial of service attacks, and it seems the entirety of their online team was busy dealing with that threat.
“Detection was difficult because of the sheer sophistication of the intrusion,” Sony wrote in the letter. “Second, detection was difficult because the criminal hackers exploited a system software vulnerability.” A company executive had previously stated that the hacker gained entrance through a “known vulnerability” that the company was unaware of. Sony also claims that because its team was so busy defending against the denial of service attacks, detection of the hack was even more difficult. Sony claimed that this was “perhaps by design.”
Okay. But that is not all. Sony also claims to have found a smoking… well, not a gun, so much as a business card.
Sony also claimed it found a file on its servers named “Anonymous,” with the text “We are Legion.” The document also places the blame for the denial of service attacks directly on Anonymous.
The ludicrousness of this claim is also the basis for its complete possibility of being true. Anonymous is anyone who claims to be Anonymous for any purpose, unless Anonymous claims that someone claiming to be Anonymous was not Anonymous. Both parties of which could be anyone, of course. While incredibly unlikely that a banner most often used for pro-democratic and free speech hacking activities would be waved by data thieves, it is also entirely possible, because the nature of that banner is that it can be held by anyone. Except that it is equally suspicious that such a banner, specifically called “Anonymous” and championed for this unique group-subjectivity under which anyone can feel free to speak as part-leader, would be purposefully chosen as an ideological zombie for a false flag attack, because they might as well have chosen the name “John Doe”, for all the malicious effect this will have for anyone actually named John Doe, or any claim to political purpose such a circumspect name might imply. It is absolutely as likely that someone would actually attack Sony under the guise of Anonymous as attack Sony under the fake guise of Anonymous.
And yet, Sony was under attack by Anonymous. And also not under attack by Anonymous. It was both under attack and not under attack by Anonymous at the same time. And in response to the claim by Sony that Anonymous had a hand in at least helping the data theft take place if not leaving its real/fake business card at the scene of the crime, a different business card also denies that Anonymous had anything to do with it, while also admitting that Anonymous could have both had something to do with it and had nothing to do with it, because of the multi-party membership of the non-organization.
All of this, of course, depends on the assumption that Sony really did find a business card of Anonymous on their servers. Which, of course, is probably about as equally possibly true as possibly not true.
To sum, let’s review the equal possibilities:
1) Anonymous attacked Sony and stole data
2) Anonymous did not attack Sony and steal data
3) Someone claiming to be Anonymous attacked Sony and stole data
4) Someone claiming to be Anonymous did not attack Sony and steal data
5) Anonymous attacked Sony while someone else who was not Anonymous stole data
6) Anonymous attacked Sony while someone else claiming to be Anonymous stole data
7) Anonymous attacked Sony while other Anonymouses stole data
I think that exhausts all possibilities. But, I logically conclude that every single one of these possibilities is true. What we know for sure is that Sony was attacked, and then that data was stolen. Because of the unique nature of the status/banner known as Anonymous, as soon as the name “Anonymous” is mentioned, we must assume that Anonymous was involved, was not involved, was fake-involved so as to be a patsy, and that more than one particular instance of Anonymous was involved/not involved. The invocation of the name of “Anonymous” is akin to “The Game”: the point of which is to win by never mentioning the existence of The Game, since to mention it is to lose The Game. The similar paradox is that by using the name Anonymous as a subject responsible for a verb, Anonymous is suddenly involved in the action, explicitly not-involved, maliciously and falsely implicated as being involved, and split into two or more facets that are involved/non-involved. The unique constitution of this non-organization lays bare the philosophical implication of the word “anonymous” (lower-case), and by giving this philosophical non-subjectivity a face (as it were), radically gives this disorienting effect of real anonymity a place in the world. And also a non-place, if you get my meaning. Anonymous might be the most existentially interesting subjectivity position/non-position since the theory of the unconscious. Before the theory of the unconscious, thoughts in our mind that were not consciously available to our mind were emotions, demons, or alien intrusion. But by popularizing the idea of a non-conscious realm of thought, we can have unconscious thoughts, which are thoughts/non-thoughts to the conscious part of our mind that we recognize as ourselves. Similarly, by invoking Anonymous, we have subjects who are simultaneously non-subjects, fake-subjects, and multi-subjects.
Anonymous is the un-ego to the ego, and simply by speaking its name, it can create these doubles, fissures, inverses, and multiplicities.
But, we should hardly expect the media to be ready to grip this complicated state of affairs. Note the title of the Ars Electronica article from which come my block-quotes: “Sony: Anonymous provided cover for PSN attack.” While the headline is phrased to make it clear enough that this is Sony’s contention and not fact, it does not allow for the host of simultaneously contradictory and yet accurate possibilities that are immediately implied by such a statement. What is sure is that someone attacked Sony, and someone stole data. Sony, and by extension, this media outlet, have found whom they will blame. A person named No One. If this is a sign of things to come, in a time when Anonymous is a new subjectivity position now technologically able to exist (and I believe it is), this is not the first crime that we will find No One at least partially responsible for.
Perhaps it is also not insignificant that Osama bin Laden, as public enemy number one, is now a dead letter (excuse the awful pun). Perhaps the hole left by the disappearance of this negative, will find its new subject in Anonymous. The age of Anonymous-Humanism: a time when we hunt No One, and by extension, Everyone.
This is the second in a series of many reports. Each entry in the report represents a pattern.
Places for Secrets – Just as certain sorts of knowledge and information lend themselves to a desire by their holders to have their facts be kept hidden from some, certain places also lend themselves towards those that would seek to hide. Low light, obscure vantage points not in the typical lines of sight–these are ways to visibly hide. But a game of epistemological hide and seek is constantly occurring. What places have background noise that would cover a whispered conversation? A crowd that would make a meeting between two subjects seem less than intentional? Light that obscures the work of cameras, that would seek to record a person being in a place as time-stamped, cross-referenceable fact? Weather conditions might play a factor; places that are known to often be socked in by fog or made unpleasant by rain so that a potential spy would have no reason to loiter could be valuable. Any sort of sensory or epistemological interference natural to a place, whether affecting the senses, technological recording devices, or the media of recording itself. What could augment a place so that secrets could be hidden there? Dead drops for paper or other recording media. A single tree in the middle of a field could be a landmark, so that a thing could be hidden a set distance from it. Maybe even a library could be a place for secrets. Amongst a plethora of information, secrets could be hidden as if in plain sight.
If/Then – This linguistic and logical construction is known as an antecedent and a consequent; in other words, from one proposition, logically proceeds another by way of their connection. This is also a form of hypothesis. If a condition forms, we posit that then we may expect a conclusion. It can be a description of causality, but–and this is a large caveat–only if the two things being described are coinciding in time. It is impossible for a causality to occur between two things not coincident in time, because time is resolutely causal.
Past/Future – Another pairing, because one denotes the other. Just as causality denotes a temporal coincidence between two things, any sort of temporal singularity, that is to say a moment, automatically implies an extension of similar moments preceding and proceeding from that moment. What is the past’s relationship with the future, outside of metaphysics, and the simple number line of physics’ fourth dimension? Does nostalgia for the past imply hope for the future? Which is more optimistic, and which is more pessimistic? Does positing a time-shift between a “now” and “then” make us less, or more beholden to any standard of truth? And is causality, like history, only written by the victors in the past tense, and like prayer, only proposed for the future by the victims? If we acknowledge trouble in our apprehension of the past and future, what does this mean for our perception of the present? Is there a present?
Live feed – The live feed is closely linked to technology. Telegrams gave way to telegraphs, which gave way to radio. The 24-hour cable news cycle is no different than radio, where the truth occurs as fast as information can be pushed to the announcer on camera/microphone. But the time of absorption has changed. There isn’t additional information to fill up that extra space, there is just a willingness to “clue in” those who are “only just tuning in”. The message repeats, not for mimetic purposes, but to constantly be current. Contrapose this to the live blog, that assembles like a timeline, so that anyone may log in and check the current development, and then re-create this currentness by rewinding as necessary. The consistency of these always-on feeds means that they don’t have to be always on. One can click on and off as they like, filter even. They can binge and purge their information’s currentness. But what is the point? What is the benefit of current? Current information is not always better. But the ability to have it there, is an ability. An epistemological ability to access time with a wide eye. Like a back-up for one’s data–the data that is epistemological awareness. Perhaps it is no coincidence that Apple coyly named their automatic data back-up system the “Time Machine”. Time travel through data is possible, but only to the referential data points of awareness that are of interest. And interest, is currently, taken with currentness. Call it time travel without moving.
Half-tone screen – When printing with a single color of ink, it is possible to create different tones by printing a pattern of dots of varying sizes, rather than a flat expanse of ink. This dot pattern, which blurs to the human eye at a normal distance, is called the screen. Dots of black on white paper make a gray. When two different dot patterns of two different inks are combined, the colors are perceptually blended, e.g. red dots and yellow dots appear to give a space the color of orange. This is called a half-tone screen. Most commercial printing combines four colors–cyan, yellow, magenta, and black–and from these can be created nearly any color of image, including photographic prints that are nearly impossible to distinguish from reality at the typical viewing distance. What is referred to by a customer as “full-color” printing is most often known to the printing technician as “four-color” printing. One last detail to complete the possible metaphor: when ink is printed in a screen pattern, the ink will bleed into the paper a bit, increasing the size of the dot in a condition known as dot-gain, which is pre-calculated by the printer to make sure the dots end up being the correct size for the material being printed upon, so that the colors don’t end up shifting in tone. Now, this could be a metaphor–a pattern for thinking about the combination of ideas, data points, and reference values. For something involving the mix of two alternating concepts. But then, remember that everything that is printed, anything that you will read or look at and recognize a pattern or a symbol or a word, takes advantage of this same trick upon human visual perception. In every idea there is a bit of difference, and in any text there is the difference between white paper, and black text.
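For the technically inclined, the dot-gain pre-calculation described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the crude gain model and the function name are my own assumptions, not any printer’s actual tone-reproduction curve), showing why a printer requests less than 50% ink coverage to get a 50% gray:

```python
def coverage_for_tone(gray, dot_gain=0.15):
    """Return the ink coverage (0..1) to request so that, after the dots
    bleed and grow, the printed tone matches `gray` (0 = white paper,
    1 = solid ink).

    Dot gain is modeled crudely as peaking in the midtones:
        printed = requested + dot_gain * 4 * c * (1 - c)
    Real presses use measured tone-reproduction curves instead.
    """
    # The printed-tone function is monotonic in c on [0, 1], so we can
    # invert it with simple bisection.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        printed = mid + dot_gain * 4 * mid * (1 - mid)
        if printed < gray:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With a 15% midtone gain, asking for a 50% gray yields a requested coverage noticeably below 0.5; the dots then grow on the paper to land at the intended tone.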
National Epic Media – We propose that Fox News is as close to a national epic poem as we can get in this current era of fragmented culture and alternate viewpoints. According to Bakhtin, the past is the epic’s subject, the national tradition is the epic’s source, and what is epic is the distance between the world of that epic and that of reality. The epic, constrained by those things, cannot be changed by current conditions, and what is current can only be interpreted by the epic, and not the other way around. The position of the epic “is the environment of a man speaking about a past that is to him inaccessible, the reverent point of view of a descendant.” Even the law of the land is reinterpreted on a daily basis–but the national epic is viewed as immutable, and wielded as roughly as if it were so. But how does this happen? Does any nation with a significantly strong sense of self purposefully develop an epic media as some sort of literary ur-ground? Or does that past and national tradition solidify only with enough time gone by, enough tradition built up that the patterned strata of it can be referred to obliquely, and yet be nevertheless as foundational as it is inaccessibly vague? What are the motivations for a constant reference to such an epic media? Clearly, money is a primary one. But epics developed before there was such money to be made, and if the form is similar, then oughtn’t the cause be as well?
Modernism – An epoch of art, of architecture, of literature, and less definitional but with no less certain utility, history. What is it about this genre or time period that deserves an “ism” suffix, as if it were less a style than a belief? It isn’t the only genre to win such notation, and yet, it is a noun, and not an adjective. Such philosophies and ethos often have manifestos, but Modernism is applied only from historical perspective, even if we claim to be part of its age.
Modern – This is the adjectival version, describing the former period. But it is also a temporal adjective, meaning a certain sort of currentness. Is everything that is current also modern? Is everything that is modern also current? Post-modernism, an epoch with an even more oblique set of reference points than Modernism, somehow debilitates the adjectival effect of “modern”. After all, how modern can it be, if something is known to come after it? If the subject of modernity is in the past, then what does “current” mean?
Punk/Not-Punk – The inflection point in a spectrum between what is attractively, authentically agonistic, and what is not. Punk is a genre of many things, but it is most often described by rebellion, against a certain “mainstream”, as it were. There may be money in Punk, there may not be. There is ego in it. It often finds its subject in the past. What is Punk against? Ronald Reagan? Disco? Alternative Rock? Victorian History? How defined must something be in its agonism for it to become a full-fledged expression of Punk? How watered down and mainstreamed must Punk be to become Not-Punk? The violation of cultural norms in the search for the authentic. The institution of norms for the violation of cultural norms. A noun, and an adjective.
Sub-Culture & Alt-Culture – If culture was a narrative, this would be the subversion and the alternative-generation presented to that narrative as counter-narrative. The antithesis, rather than the synthesis. It can be defined in a certain hegemonic separation. A neighborhood full of hip individuals, marked in their individuality by all dressing in a recognizably similar way. A trend is only a pattern, until it becomes a noun, rather than just an adjective. A subject, manifesting creativity, by manifesting imitation. Not for mimetic purposes. An authentic sub-culture cannot be altered by the present. It is locked in the past. It can only be corrupted, and de-authenticized. Like the waxing and waning of the moon, sub-cultures pass from authentic in full, to inauthentically dark.
I’ve read several reports of the celebrations that spontaneously occurred after the announcement last night. (One, Two, Three, Four, Five) And while I respect the effort that goes into writing about something that is not easy to write about, I must say I’ve been disappointed by all of them.
It is far too easy in the face of a tough situation, to remark upon the fact that it is a tough situation, and withdraw with that as lackluster synthesis. “There’s a lot going on here.” The five essays I cited above say more than that, but in the end it boils down to this: calling a crowd a crowd.
I’m not writing this with the intention of saying that a crowd is not a crowd, or that the death of a particular person is politically/historically/culturally/emotionally relevant in a way that everyone has missed, and that I will grace you with that revelation. I’m writing to say that from the perspective of the human species, to throw up one’s hands and murmur something about the wisdom of crowds is precisely the problem. This is exactly what has been going on for the last ten years, and what appears to be continuing.
I could call it a post-post-9/11 line of thought, because I have been calling it that, and it sounds a bit clever. It is the emotion at the end of the film The 400 Blows. After all that happens, all that the main character has done and hasn’t done, he runs away from the juvenile work camp. What begins as a somewhat exciting escape attempt, draws out into a single, two-minute shot of him running along a road, having easily eluded his pursuer. Where is he going? We imagine that he just wants to escape, he has no destination. And then the camera changes shots, and we see him running towards the sea. He must have seen the sea from hundreds of yards away. He knows it is there. And yet he keeps running. All the way across the barren length of sand, and into the waves. Once he sets foot in the waves, he completely soaks his shoes. To me it looks uncomfortable; it does not appear to be a warm day, and wherever he walks now, he will have wet feet for hours. As if in the juvenile recognition and regret of this fact, the same down-turned countenance with which he has conducted his poorly-managed misbehaviors throughout the length of the film, he leaves the water’s edge, but doesn’t move to leave the beach, either. The camera zooms in, and freeze-frames his face in the breeze. “Fin,” the title reads.
In 2003, in the depths of the War on Terror, a college acquaintance of mine made some unfortunate comments on a community web site, that were taken to be terrorist threats. He was charged with felonies. Anyone who knew him could tell the comments were not serious, but this didn’t matter. In fact, that he was just a teenager from the Midwest with an odd sense of humor seemed to steel the resolve of the police and college administrators in persecuting him. The question was not whether or not he was a likely terrorist or capable of committing or planning to commit terrorist acts. The issue was that he had the gall to joke with the assumed understanding of such a possibility being ridiculous, and this itself was a crime. The presumption of being innocent of terror was a terrorist act. That there might have been a joke was akin to conspiracy to kill. As the chief of police said, “in a post-9/11 environment, there are no jokes.” We, those who knew better, wrung our hands, cried to the heavens, beat our chests in frustration. Could they say anything more revealing, more tinged with Orwellian anti-humor? Could there be anything more of a joke than to ruin the life of this young man? Except that it wasn’t funny. It was reality.
Last night, the jokes returned. After the immediate tension of the revealing of the truth passed (about five minutes in Internet-time) the jokes began, and roiled back and forth across the surface of the info-sea. The jokes never left, of course. How could they, when they are the only response anyone has been able to muster to cowboy presidents, to color-coded death threats, to security theater eroticism? The jokes are here, like bricks, and from them we have built this reality we’ve come to know.
My fear is that jokes will forever be our only response. Is this it? At the end of ten years all we can do is mill about holding up our electronic eyes, as if with these networked gaze-of-crowds we could somehow evoke the significance that we cannot find. It used to be called irony, back when it was a unique take on a normal situation. Now the uniqueness of the alien crowd is normal. What is normal? Normal is not knowing what is normal anymore. As things get less normal, the petrifying ossification of normalization only becomes more all-encompassing. And not a singular normalcy. Chaotic normalcy, with all the drowning, soaking uniformity of the tossing molecules of the ocean. A thousand points of light/flowers blooming, and then catching alight in a single wind of flame. Each meme is another brick in the wall of making everything seem just as uniquely odd as the next thing. And it only gets weirder/more normal from here.
And we are still surprised that our feet are wet, even though we saw the sea at a thousand yards. Blinking at the crowd may be all-too-human, but a teenage, irritated exhale through the bangs at the sight of shirtless men climbing light poles, and women staring at them expectantly? Can you honestly say you never expected this? Ten years may have seemed like forever in 2001, but in 2011 it’s just another mini-epoch to reflect upon. Covers of Wired Magazine are made on such petty units of time. Would we really keep not finding him forever? And what did you think would happen when we did? Did anyone expect there to be a trial? Peace? Even a second’s serious reflection on the wars (or more than 140 characters’ worth of thought)? What else was there beside a bullet in the head, a DNA test, and a burial at sea? These sorts of narratives are wrapped up in an hour, less commercials, on prime time TV. We can excuse reality for doing it in 24. The flags, the flags, the flags. College students looking for an excuse to be late to Monday morning classes. Breasts dangling. Let a thousand Flickr feeds bloom, and burn out my eyes with the lily-white skin of 20-something America. What did you expect? Nobody expected anything more than this. That’s why the most erudite thing anyone could think to say is U-S-A, U-S-A, U-S-A. And hold out a cell phone, into the night. Obama wrote remarks. Everyone else spelled acronyms repeatedly.
The only worse thing than the sullen, confused teenager is the lecturing, patronizing parent. And yet, I’m no prophet, and no doctor with a prescription, either. I’ve been a teenager though. And while I had my wet-footed moments as I learned how to see through the jokes, I also learned to shout. I think that is what I want from people now. Not a whimper. Not a shake of the head, a self-conscious close of the eyelids to block out what they are doing in the street. Not an ironic, snide comment under the breath. Not a pleading complaint.
I want shouting. Anger in the street. To release these feelings that have been building for ten, long years of idiocy. I don’t want catharsis. I want it to build. I want the sound reverberating from the buildings to make people uncomfortable. I want it to hurt their ears. I want them to stop talking and stare at the guy shouting in the street. They’ll probably hold up their phones to capture a picture of the crazy guy, they might even shout back. But enough is enough. They’ve had their blood now. Now I want mine. I want the sort of blood that will reclaim ten years of lost history. The sort of fluid that runs out of sliced books. The kind of event that closes prisons, that turns wiretaps into hissing static, that makes the people who decided to do this actually see what it is that they’ve done. I want the sort of blood that doesn’t exist, that runs in veins so thin and rare around the surface of the world that it has hardly ever been spilled, except occasionally, only ever in the tiniest, most effervescent of drops, which quickly boil into nothing when seen by the eye. But I’m going to shout for this blood anyway.
Our feet are wet. Ten years passed so quickly, and another ten will pass the same. And we’ve run out of ground to pound our feet against mindlessly. It’s time to pass through that crowd, rather than stand on the periphery. I don’t need to ask if anyone is with me. Because that’s not the sort of question that has a correct response.
So no one knows the words to the “Battle Hymn of the Republic” (see link Five, above). At the May Day March I was at, which also happened yesterday, some union organizers tried to start up a rendition of “Solidarity Forever”. No one knew the words to that one, either. But all of us know how to cry for blood.