No longer will we hang our heads in shame. We will lift our heads high, as we hang ourselves in effigy. Politics, art and culture no longer aspire to self-representation, but to shining insincerity, pixelation, the debris of our denials and our basest caprices. And “greatness” is not merely a twentieth-century joke, a fleeting gag, it is the agreement for our eternal soul, devil’s contract and all.
“Art & Literature,” as we think about it now, is a few hundred years old. “Great men” and “great women,” and “works of great art,” are post-Elizabethan concepts. The “artist as hero” didn’t take hold until the nineteenth century, and wasn’t de facto until the twentieth century. But in less than a hundred years, the artist as hero has become mainstream: music, art, literature, fashion, film, everything. In Western schools, we’re taught that the artist has won a revolution. Mozart, Beethoven, Joyce, so it goes, insisted that the artist had more cultural cachet than the aristocrat, and it changed the world. Maybe-ish, but in our century, the heroism of artistic pursuit is primarily concerned not with creative freedom, but with the sale of stuff, and we attain self-actualization not through art, but through the purchase of identities we’ve dreamt up for ourselves.
The self-centered creator, the ego that self-defines, is not only the rock star, the novelist, the couture chef, it is the consumer. You need to need stuff, need recognition, need definition, to be you. To be successful, you need a fancy watch, to be Hip-Hop you need this music, to be environmentally conscious, you need this hemp thneed. You can’t be what you are, even if it’s a gender, even if it’s a race, without buying something. This is the mindset, the ecosystem, that media fosters in order to sell advertising space.
The problem for media is that this model, this story of the artist, is limited and immature, and losing its audience to an array of alternative stories, and alternative ways to find and experience culture. The internet is vast, and a mode of distribution—an egalitarian one—unto itself. The big media answer is to convince advertisers that their demographic, let’s say the readership of a book section of a major newspaper, is small but targeted; their readers also buy Mercedes. Which leads to another problem: to make sure that the readership buys Mercedes, the content is again compromised (to entice Mercedes buyers), which again shrinks the readership, which again necessitates the insistence that advertisers will reach “the right people,” which again compromises the content. Ad infinitum. Micro-targeting markets is an old idea, but in old media, like print, it’s barely better than self-immolation.
In its favor, the “artist as hero” is an appealing construct. Creative people, all people, like to hear they’re important, and different, and definitive, or at least somehow included among the “elite.” We like to think that the world is small; that there’s a direct line of ascension from this great artist to this great artist to this great contemporary to, hmm, “me.” These illusions, “greatness” and “a small world,” work together to generate an endless stream of propaganda; as we elevate the artist who has been deemed culturally acceptable (deemed so by the categories and philosophy of the distributors far more than the market), we elevate our vision of ourselves as tastemakers, and at the same time shrink the giant world to make the whole rigged process less ludicrous. All of which makes us easier targets for salesmen; we are vain and petty.
Politically, this model of exclusion and hierarchy is also a justification of existing structures, and existing injustice. A Western culture of exclusivity and hierarchy is bolstered by a creative culture that presumes resources are limited and that the world can be and is understood only by a certain class of people. (So if you find yourself talking about “greatness” all the time, you’re a propagandist.)
As long as the distribution is controlled, i.e., one major book distributor, the message of sublime artistic merit can be protected and perpetuated. The content, the artistic output, can be fashioned and conformed to specs, and the bar for inclusion is: creative content that’s good enough to pass as meritorious. And, like the content, the casting and presentation of the artist can be tightly maintained. The stories are predefined; as are the cultural archetypes of the artists who tell them. In fact, a story that’s presumptive of a Western model is better, more easily approved and accepted, if it’s told by the dispossessed. No matter how bittersweet, the coming home story, the story that delivers the outsider to the cultural mainstream, is the stuff of big awards and, capitalize, Artistic Merit.
But the centralized economies of the arts are collapsing. The art world is ever broadening; no longer is it just Soho or New York or London. The big auction houses, overselling their wares and losing market share to smaller auction houses, are dipping in stock value (see it on MarketWatch); people can cherry-pick their interests across the spectrum, and sidestep hegemonic bullying. The book world is competing with Amazon, which is far more inclusive than traditional publishing; and even more daunting to the big five publishers is this seemingly unstoppable proliferation of self-distribution, i.e., the Internet. William Shakespeare, by the most generous tallies, had a few hundred contemporaries writing in London; the total population of the city was 200,000, 70% of it illiterate, and literacy itself was not an education, and the only outlet for imaginative writing was poetry and the stage, so it’s not terribly surprising that the creative pool was so limited. In fact, the only “profession” associated with writers was “scrivener.” To elevate Shakespeare from the rabble was not a task ponderous with subjectivity. But today, with, what, a few million writers, picking the best one can only be an act of oligarchy, or exclusion, or folly. Last year, Poets & Writers stopped ranking MFA programs. Why? Because there are well over 600 Masters programs in the United States alone, and that number is growing exponentially, and to tally the best 10 or best 100 Masters programs is to willfully stand by a process that is arrogant, corrupt and stupid (The New Yorker called it back in 2011).
The current solution for traditional distribution? Hunker down, take firm control of the message, put out less, and back your bets with astronomical figures: make this particular painting worth way more; pay a huge sum for this particular first book; make just a handful of movies, each of which has the budget of a small nation. Again, a problem; with fewer offerings, you open the way for competitive models. And not just on the side of populism. Not long back, former New York Times editor Jill Abramson announced a startup that would pay $100,000 per short story, and publish and promote only one such story a month (via The Guardian). A model like that could out-propaganda the propagandists, at a much lower overhead. Additionally, a hierarchical model self-proliferates; while the hierarchy must be continually bolstered by awards, etc.; awards lead to more awards, and as much as people are inclined to bow to what is “great,” they are prone to bridle at obvious bias, and to dilute the hierarchy with their own hierarchies. Museums (case in point, via Holland Cotter in the New York Times) don’t know what to do with themselves, and with auction prices what they are, can’t afford to do what they’ve done before. And what does that mean? More, smaller museums, or, uh, museum-like entities, like Dia or Union Docs or the Tenement Museum, which cater to the audiences museums have lost. The transition will not be painless—a shift to smaller/local venues will value proximity over quality—but hierarchy is inherently inclined to diffuse.
And the biggest problem, of course, with this artist as hero, troubled soul in a drafty garret, all-alone vision of creativity, is that it is incorrect. People don’t work in vacuums, and the greatest of the great artists who exemplify this model, let’s say William Shakespeare, worked in a time without copyright, and with massive collaboration, and with royal sponsorship and endorsement. If Shakespeare were to work today the way he worked in his own day, no major theater or publisher would ever have anything to do with his patently and ineradicably plagiarized works.
We’re drawn to collaborative arts—whether it’s a wiki or fan fiction or satirical treatments of pop culture or big-budget television or whatever—because collaboration is intrinsic to creativity. And the way people now work via the Internet—crowdsourcing, information and techniques readily available—is but an indication of what we’ll see in the next forty years. And not just in the arts. Whole identities, whole professions will end. There will be no scientists; if you want to be a biologist, you’ll upload the expertise and then participate in a groupthink with other interested people. If you want to be an architect, or a geologist or an historian, same thing. Already, innumerable fields have been replaced by software. In video-editing, there used to be people who made computer graphics flames; that was all they did, FX fire. And they were well-paid, in-demand people. And as of three years ago, with your discount code, that’s a 20-dollar plugin. Advertising? You can target advertise from your Facebook page, or your Twitter account, or your Amazon author profile. In other words, marketing has become an add-on.
Piotr Uklanski, The Nazis, 1998.
Jeff Koons, Titi, 2004-2009.
If the twentieth century, as Walter Benjamin characterized it, was the Age of Mechanical Reproduction, the twenty-first century will be the Age of Simulation. Increasingly, there are no fields of expertise, because so much of what is “expert” can be downloaded, and even if it has to be learned, the information is so accessible—even micro decisions, like, do I want an H-pipe or an X-pipe on my 1967 Camaro—that to be anything, any kind of professional anything, has become, and will progressively become, little more than a commitment to pretend to a given status. And that, of course, can only last for so long, before people realize they can’t really adopt permanent professional identities. We will each be, in our own way, simulations of however many identities we have the time or patience to pursue.
Josh Kline, Cost of Living (Aleyda), 2014. 3D‑printed sculptures in plaster, inkjet ink and cyanoacrylate, with janitor cart and LED lights.
And the simulations have already begun; as of 2016, our celebrities of popular culture are as likely to be simulations, for example, of musicians or actors, as they are to be musicians or actors. Our stars of reality television, our pre-packaged youth bands, and even our politicians, whether Sarah Palin or Barack Obama or Donald Trump, arrive, oddly, as if by the force of central casting. History is no longer made, it is arbitrated.
Donald Trump, and Alec Baldwin as Donald Trump.
This entertainment-oriented history is something that we can customize to our preferences. We see this feed, we follow this person, that person. We can, and do, favor the people close to us, making ourselves and our circles appear, in the grand scheme of things, more important than they are. While the cult of I will persist, technology will allow users to further shrink the world, to make their hero’s journey, their community, which is to say, their epic, appear that much more central to the “now” of the human narrative. Along with professions, fame will become the purview of simulation. We will cease to have perspective enough to know who is famous—and we will revel in the misconceptions we engineer for ourselves. “I am famous.” “My best friend is famous, and, oh no!, has been embroiled in a scandal!” Quite willingly, some of us will live within our own simulations, while others among us, driven by aversion, will “opt out” of heroic creativity (which is to say, creativity that can be marketed and profitable). There have always been artists, incredibly talented artists, who have drifted into town, gotten their notices, and either flamed out, or moved on to a nice teaching position somewhere. In Rowling Dord’s take, via Artenol, when artists want to be plumbers more than they want to be famous, art is dead. Yet Dord’s portent dire is DOA itself, arriving more than thirty years after Arthur Danto’s terminal diagnosis via his 1984 essay, “The End of Art.” But isn’t Dord kinda right? When we want fame more than we want culture, all culture ceases to exist; culture becomes the simulation, while fame, the drive for fame, the experience of fame, is what’s real.
The arts present a microcosm of how the dynamic plays out in the macro (as Douglas Coupland, author of Generation X, recently pondered in Artsy). The scales are tipped: the desire to be famous is a more critical prerequisite to fame than is the talent to be famous. And at the same time, the expression of talent, the expertise to realize talent, has become elementary. Once, painters who sculpted, sculptors who painted, for example, were looked at askance, but now, artists may fabricate in any medium without being too onerously penalized by critics, as William Deresiewicz recently discussed in The Atlantic in his essay, “The Death of the Artist—and the Birth of the Creative Entrepreneur.”
If we date the epoch of fabrication with Jeff Koons’ Puppy of 1992, Deresiewicz, only 30 years too late, is a decade more current than Dord. That said, Dord and Deresiewicz may be comfortable with the designation “simulation of critic.” I’m fairly comfortable with it myself. The Believer, as an entity, may be equally unworried that it is a “simulation of literary journal.” Our apparent ease in our own illusions, and the smallness of creative worlds—the “artstar” mentality of artists and writers who almost nobody has heard of outside of a peer group of a few thousand—typifies the accentuation of our own “hero” status in an epic tale that is self-manufactured. It will—alas, hooray, regardless—happen to all of us. And already, we seem to revel in the distinction of having become ersatz; the Republican defense of George Bush Jr., our first simulated president, is that he was not intelligent, and perhaps not much of a leader, but he was a good “manager.” And—this is more important than we’ll ever admit—he was a marvelous prank. The presidency of George Bush Jr. was the best dark comedy in years; and he did, to top it all off, in delightful simulative flourish, look just like Will Ferrell. Perhaps the election of Donald Trump, brilliantly mimicked by Alec Baldwin, speaks to a new presidential mandate: without a comedic doppelganger, thou shalt not be president.
Left: Cindy Sherman, UNTITLED #355, 2000. Right: Cindy Sherman, UNTITLED #360, 2000.
The recent proliferation of fake news and journo-bots may be dismaying, and perhaps it may be mitigated, but it can’t be reversed. The inclination of news organizations to abandon what is “objective” in favor of what is “balanced” has allowed for reportage and news discussion that is not factually based; the argument of “balanced” reporting has allowed for two-sided arguments that pit fact against, um, the other side, even if the other side is a fantasy, misconception, or lie. It should come as no surprise to us that such reportage has made palatable the non-news news; and when such news appeals to our politics, it may be, of the “balanced” alternatives, chosen.
This process of choosing fantasy over reality carries over to our personal lives, whether via social media or just flat-out self-deception, as well as to all aspects of culture. Art in the Age of Simulation seeks not to represent reality, or to reproduce it; rather, Art in the Age of Simulation fashions preferable realities. Realism will cease, has ceased, to be the baseline of creative representation. In film today, special effects make living fabulous and skin perfect. Never mind that the sky doesn’t look like that, that skin doesn’t look like that, and that the teal that so perfectly complements vibrant skin isn’t what every background looks like. The world, if you take a moment to look at it, isn’t all green and orange, hmm, like it is in this still from Transformers:
Glenn Ligon, I Sell The Shadow to Sustain The Substance, 2011.
We don’t want perfect representations, not of our experiences, or ourselves. A rule of thumb in humanoid representation, whether virtual or robotic, is that the approximate human form shouldn’t be too perfect: by way of the effect known as the “uncanny valley,” a near-identical resemblance to the biological human arouses revulsion. A body that is too exactly a body causes us discomfort. Whether in an issue of a mid-century Playboy, or in the figure of a modern-day sex doll, it’s in part the burlesque of human proportions that grants the permission to fantasize. A human body that is too real arouses sympathy, morality, etc.; whereas the fantastical representation can be purely carnal. And our impulse to create better conduits for fantasy is not reserved for robots and virtual reality; we have, as well, chosen to move the reality of our own bodies further toward exaggeration, and fantasy.
Lisa Yuskavage, Big Blonde Jerking Off, 1995.
We want, in our flesh, what is unreal, bodies that appear air-brushed or imagined, via anime, let’s say (note, in the below example, the South Korean trend of facial surgery). We go so far as to reshape ourselves, with surgical sculpture, to conform to our fantasies.
Our sexual fantasies, in our pornography—silicone and pixels—preview our future. The profit incentive, too, keeps pornography at the forefront of technology; throughout the twentieth century, our cultural proclivities, our deepest desires, have been catered to in porn before we can even articulate them. The earliest porn films, stag films, captured events as they happened; viewers could watch something that had once been, i.e., representation. Later porn films took on the distribution model of cinema; porn went out to theaters, where audiences, for the most part, watched reproductions of just a few originals. With video, the distribution of porn was decentralized, so there were more choices, albeit at a lower production value (proximity over quality). And now, on any given porn website, there are an infinite number of categories and tags, and almost every option is garbage. As porn moves into visual 3D, via technologies like Oculus, and physical representations of sexual beings, via cybertronics, we will invariably be able to “live” our fantasies. As Artificial Intelligences, or simulations of Artificial Intelligences, become more convincing, the vestiges of the age of reproduction will fall away. The categories, the tags, will become irrelevant, as whatever porn manifestation you’re looking at takes on the ability to adapt to your wants. In the end, just a few, or even one, porn interface will flawlessly service all of us.
However long it takes for porn to become experiential, or rather, simulative of experience, art will be not too far behind it. In the future, the medium of art will be “alternate existences,” not paint or clay or any physical materials. Artists will manipulate, in its entirety, experience itself.
Paul McCarthy, Spaghetti Man, 1993.
Cesar Voinc, Soubrobotte, 2014.
Pipilotti Rist, Pixel Forest, Installation at The New Museum.
In the sciences, an odd notion has entered the mainstream. Perhaps we and our known universe are the holographic preservation of information at the event horizon of a black hole. At the edge of a black hole, everything is synchronously destroyed and infinitely remembered, and we might very well go on as holograms without knowing it. Scientists around the world have worked out equations suggesting that this, however bizarre, is mathematically possible.
The hypothesis, if disquieting, is metaphorically apropos. We may well be in the midst of designing our own Armageddon, which we witness, or, hmm, watch, in a state of torpor—it’s entertainment, but not very good entertainment. In seeing what lies in store for us—if we can arouse ourselves from our acedia for long enough to be bothered with prognostications on our destiny—we have to draw upon everything we know, all of our experience of the past. But we also have to know that everything we know may be irrelevant, illusory. We might fashion a future that is no more than a manifestation of our own denial; a totally false simulation, however wholeheartedly we enter it. And however enthusiastic we are, the simulation that we enter may not remember us. We will merge with our technologies—in cybernetics, and cognitive augmentation and countless other ways—and we will become our own simulations. And it seems rather doubtful the simulations will become us.
Oskar Fischinger’s Raumlichtkunst (Space Light Art), a recreation of his 1926 multiple-screen 35mm film events, The Whitney Museum.
Hito Steyerl, Factory of the Sun, 2015. Immersive installation.
Michele Basta, Sphynx, 2010.
Can’t we envision it now? The crossing over? The instant when Facebook has more dead profiles than live profiles? We will simulate loved ones, first in AI, then in robotics, then in both. We will simulate ourselves, in clones and cyborgs and incarnations yet unimaginable. And then we will simulate humanity. Whatever of it we deem necessary.
Yadegar Asisi, Great Barrier Reef, 2015.
Do we proceed, or drop all of this tech nonsense and get back to Gaia? The one argument is that we’ve reached the end of heavenly virtues, that all Satan lacked was a proper platform to advertise, and that now, in the digital age, he has it. The other argument is that we’ll take to the stars, that through technology we will become free and enlightened. Whatever is coming, it will probably take a bit longer than we think, and in the slowness, will be less of a revelation than our overly dramatic hypothalamus has been portending. In the case of first-generation human cybernetics, it will take at least one human generation, a birth-to-death data recording of experience, conducted and processed for a sampling of test subjects that would preferably number in the hundreds of thousands.
Human life, in the beginning, will greatly benefit from a protocooperative relationship with technology. We’ll be able to feed ourselves, enjoy our time, have longer lives, repair the environment, etc. (In Art In America, Carol Becker, discussing John Gerrard’s 2014 installation, Solar Reserve, considers the upside of a “simulated reality, one we have imagined into being and are continuously recalibrating”). But, inevitably, somewhere down the road, we’ll have the technology to transcend the tribulations of our biologies, and we’ll decide that the perks of being monkeys are insufficient to hold us in our flesh. We’ll want to do things that our physicality can’t do—and we’ll already be, perhaps mostly be, by way of our upgrades and technological integration, simulations of humanity, and not humanity. We’ll be human until we decide not to be human, until we decide we’ve already crossed over. The bots will supersede us, but they’ll be our children. Quietly, we will pass into the afterlife, holding the hands and looking into the eyes of our descendants. And we will wonder if, perhaps, with the end of human biology, such niggling intransigences as social and financial inequities, murderous cruelty and greed, mass environmental destruction, will finally meet the ultimate solution: no more us.
Teamlab, Floating Flower Garden, 2014.
Mariko Mori, Birth of a Star, 1995.
Dina Chang, Flesh Diamonds, 2013.
Perhaps that’s no comfort. And perhaps, if you’re an artist, it’s no comfort that, in the near future, the distribution model of the arts that we grew up with will be relegated to the add-on, the app, the widget; perhaps it’s no comfort that the world is not small, that there are seven billion of us, and that more than a few of us have stories, and that more than a few of us have talent; and perhaps it’s no comfort that we are not as special as we supposed, and that the revolution we have long touted will be championed by a collective army, groupthink, and not a great gladiator, and not any one of us personally. Not me, not you.
But if we can take comfort somehow, in this age of plugins, it’s that there is not yet a plugin for writer, there is not yet a plugin for artist. Computers beat humans in Jeopardy, but we still need a human to say who won. At this moment, we can’t teach a machine to know, 100%, if the answer is right, and we can’t teach it to be a novelist or a film director or a painter. Computers beat us at chess, but not via creativity; they beat us with the sheer force of their computation skills. And, yes, there have been attempts to make computers into creative beings, and there is already evidence, if somewhat pathetic, contrary to what I’m saying. A computer is “self-aware” (sort of, says Gizmodo). A computer “passes” the Turing test (but not really, if you care to read about it on Vice). A computer paints “emotionally aware” portraits (also in Vice); a computer writes science copy (the BBC); a computer composes classical music (Slate). But if the computer can’t yet make convincing creative decisions, it’s because we’ve barely taught it to do so—because we’re still trying to understand our own creativity.
Ed Atkins, Performance Capture, 2015–16.
So, we have, maybe, a lifetime. Maybe a few lifetimes. Isn’t that comfort enough? And what a glorious, sparkling moment it is, this moment until then; while our every field of expertise splashes into a sea of microprocessors, while our last labors are cast to the capable attentions of robots, the artist, the writer, will persist. Writer and artist, these final professionals. In an era when the hierarchies that have impeded creativity since, hmm, the onset of history, will fail as fetters. In this time, brief but radiant, creativity will hover on the last razor’s edge. And long after the bankers and prostitutes are chiseling, groaning digital facsimiles of themselves, artists will still be working, still be useful. The sculptor will work in the burgeoning aesthetic of the porn robot; and the writer will bot-script his or her “skank mode” persona, as discussed by Vice. And what then? When the creatives are gone? Well, when that happens, we’ll be gone too.