Jaron Lanier in Conversation with Tim Maughan

Nehal El-Hadi

I attended a graduate urban planning program, during which I researched the interplay between virtual and material spaces. When I started, Google Maps was just becoming a thing, and my early research looked at how planners could design streets for hyperspace. Very quickly, I shifted to investigating the relationships between online and offline spaces and how they inform each other, and I researched the ways online user-generated content could affect the material world. Jaron Lanier’s writings were—and remain—crucial for anyone trying to understand how technology affects us, and his work has very much influenced my own. I met British writer Tim Maughan when we were both screenwriters for the Screening Surveillance short-film series, produced by his wife, sava saheli singh. Through different approaches, both men investigate the complex entanglements of humanity and information technology. Lanier writes philosophically about the ethics of virtual reality, the internet, the slowly eroding notion of personal privacy. He’s also interested in the proliferation of open-source content and decentralized software, and, in a 2006 essay of the same name, he coined the term “digital Maoism” to codify the phenomenon. Maughan’s work considers the complex relationships between people, their environments, and the technology they use (or which uses them), often with a focus on cities, class, and the future. His short film A Model Employee, for example, explores the implications of monitoring, incentivizing, and disciplining employees for the purposes of increasing efficiency and maximizing profit. The film focuses on Neeta, an aspiring DJ who works at a restaurant where the owner uses tracking software to assess his employees’ performance when they’re on—and off—the job. The film equates access to data with power, while pointing out that the software becomes the arbiter of trust. 
Maughan and Lanier offer divergent, though complementary, ways of thinking about what it means to be human, amid the increasing importance of machine intelligence, Silicon Valley’s influence, and the continued exploitation of people’s labor, intellectual property, and data.

Lanier is best known as the man behind virtual reality. He is also an esoteric musician, a collector of instruments, a composer, and a thinker. With titles like Ten Arguments for Deleting Your Social Media Accounts Right Now (2018), Who Owns the Future? (2014), and You Are Not a Gadget: A Manifesto (2010), Lanier’s books argue that we must insist on our humanity in the face of ongoing technological change.

Maughan is the author of the post-internet science fiction novel Infinite Detail (which was longlisted for a 2019 Believer Book Award). His first book, the short-story collection Paintwork (2011), is a triptych of post-cyberpunk tales. Its subjects include an augmented-reality street artist, a virtual-world paparazzo, and a high-stakes video game tournament in Havana. He has reported on technology for Vice’s Motherboard and BBC Future, among other places, documenting the advent of the civilian drone industry in Dubai; a toxic lake in Baotou, China, filled with the by-products of gadget manufacturing; and advanced, almost hidden global supply chains that involve Danish shipping companies and Korean mega-ports. Maughan recently published Ghost Hardware, an online-only short-story collection that reckons with moments of violent conflict between old and new technologies, flashpoints of the struggle between the past and the future. Titled after the Burial song—in a nod to Maughan’s musical proclivities—Ghost Hardware looks at what happens in a world after the internet.

Throughout the interview—which was conducted over the phone in early May 2020, while Maughan was in Ontario and Lanier was in California—we spoke about the weird times we found ourselves in. Though we didn’t mention it often, the coronavirus pandemic was central to our conversation. It provided the impetus for fast-forwarding our relationships with online technologies on a fundamental, societal level.

—Nehal El-Hadi 


THE BELIEVER: Hey, Jaron. How are you? Tim is also on the line. I don’t know if you guys have met before. 

TIM MAUGHAN: No, we haven’t. How are you doing, Jaron?

JARON LANIER: I am doing well.

BLVR: “Well” is a really good answer these days.

TM: It is a good answer!

JL: To be blunt about it, it’s unfair that I’m doing well. There’s a lot of people out there who are on the front lines of this thing, and I’m privileged to be able to live in a place that’s reasonable for staying home. I’m very grateful for that. I wish it wasn’t so difficult for people overall. What’s being asked of those of us who aren’t losing our livelihoods and who have a nice place to live is not really that much: stay in your nice house in order not to get sick. It’s not a sacrifice, and I recognize the differential between the very small adjustment we have to make compared with a vast number of other people. So, yeah, I’m doing well. I think that’s the correct answer.

TM: Exactly. I work from home anyway, so this is not that different in some ways, weird in other ways. At the moment I seem to be getting more work than I normally do, so I’m also incredibly grateful about that, but I feel weirdly guilty about it in some ways.

JL: I know what you mean. It’s this country cousin of survivor guilt or something like that. Being basically not as put-upon.

BLVR: Jaron, you often write and speak about the relationship between technology and humanity. With all that’s going on, how do you feel that this relationship is being affected?

JL: There are a few things to say about that. I’ll start with the relatively good stuff, and then I’ll go to the other stuff. If we want to find a silver lining in all of this, it’s that people are using the internet better during the pandemic than they typically did before. For instance, there are more people who have to be aware of one another enough to set up appointments on video. There’s a bit more self-reflectiveness involved in that than in following a feed. There’s also a sense of connection. I think the pattern that concerns me is people accepting whatever diversion of their attention is calculated by algorithms to maximize the interests of people who are paying for those algorithms. The more that people are self-directed, the better the internet becomes, and it feels more self-directed during this period. Part of that, I think, is that it’s what people need; part of it is that there are fewer advertisers because of the economic trauma. Between those two things, the internet is a little bit more like it was supposed to be, for those of us who were around at the start.

And so, very weirdly, I feel like there’s vindication for people on the internet that the whole thing makes more sense when the commercial data incentives are lessened, which has kind of happened. And so that’s a good thing. A bad thing is the simple observation that the level of unemployment we’re seeing is similar to what it’s projected to be in ten to fifteen years, by the most optimistic economists trying to understand how robots and AI will displace people’s jobs. The ones who are more pessimistic are looking at even more unemployment. So the economy we’re looking at now is a preview of the better scenarios that will happen in not too long. If this is a rehearsal for how we will deal with rising information technologies in a decade and a half or so, we’re not doing very well, and perhaps this period will help us prepare better for them. It’s been a little dispiriting how poorly set up our society is to care for itself in difficult times, and I think we all kind of knew that was the case, but to actually live it is a whole other level. So that’s my good news, bad news.

BLVR: That speaks to what you’ve said previously about dedicating yourself to the craft of learning how to use technology. Tim, can I pose that question to you as well, about the way the pandemic has affected our relationship to technology?

TM: I might be a little less optimistic than Jaron about how people are using the internet. But that’s me and that’s my brand, to be cynical about this stuff to a certain extent. We see things through our own lenses and bubbles, as we all know. For every story I hear about Zoom reinventing working from home, I hear a story from a high school about its children shouting the n-word at one another on the platform. It’s horrifying. I certainly echo Jaron’s thoughts about labor and automation; it’s very concerning. And it’s not just the number of people potentially being displaced; it’s the class and the demographics of those people. So we’re seeing essential workers and many gig workers actually being pushed to the front, being more in demand than ever, but their lives are put at risk. We’re not, at this point, seeing an impact on the sorts of people like us, who can work from home, although arguably those jobs are threatened by automation as well. It’s a really interesting thought exercise, what Jaron’s raising, about how this might be a rehearsal for the future. I just don’t know that we know how to deal with it quite yet. In a way, I feel like the pandemic is exposing the limits of technology and intimacy via technology. It’s like being in a permanent long-distance relationship over the internet, which I’ve done, and I know a lot of people also have that experience. But now, that’s how you interact with every single person you know, every friend.

BLVR: Tim, as a journalist and author, you are literally in the business of prescience. Do you have any particular predictions that you’re horrified by?

TM: I would never say I am in the business of prescience or prediction; that’s not what I do at all. It’s the by-product of what I do, but it’s not something I set out to do. As a science fiction or speculative-fiction writer, I’m very much in the camp that science fiction is about the present; it’s not about the future. It mirrors our concerns and worries about the present and amplifies or exaggerates them in ways that we can analyze and talk about. I mean, I’ve had a few bull’s-eye targets as a result of doing that, but it’s not what I set out to do. And anytime I have done it, it has come from following trends and arcs. Most of my stuff comes from analyzing the course of capitalism and its history and its trajectory and looking at its fundamental views. And many of these outcomes—whether it’s Trump or the gig economy or automation or elements of capitalism failing to be resilient enough to deal with a pandemic—are there if you look for them. They’re nodes on the horizon; you can see them coming. It’s one of the reasons you have people like [William] Gibson and other cyberpunk writers who nailed so much stuff in the ’80s: because they could look at how capitalism was developing and at media saturation and all that stuff, and it’s not hard to then make predictions about networks.

There are a couple of older stories of mine that seem really pressing at the moment, like “Zero Hours,” which I wrote in 2013 about gig economy workers, before we’d even started using the term gig economy, and which seems scarily prescient now. And a story called “Transmission,” which I wrote in 2015 for a charity in the UK that commissioned me to write about the failure of antibiotics to work in the future. It’s about people working from home and living in quarantine, but it’s also about the class divide between those that are able to get testing and medical equipment, and those that work from home, and those that have to go out and work. The story even mentions people working and dying in Amazon distribution centers. So that’s been scarily weird to see. “Transmission” is a very dystopian story, and I exaggerated stuff in some ways, but the core issues seem to be quite accurate now. Both of those were commissioned pieces, and the prompts for both were: “Can you talk about these topics?” I shouldn’t say this out loud, because I get paid by clients and private industries to try to make predictions, but even when I’m doing that, I say to them, “That’s not what I’m here to do.” I’ll talk about these issues, and I might give you some guidelines for how you might look at the future, but I’ve not got a crystal ball.


BLVR: Jaron, in a 2017 Guardian article, you are quoted as saying that we need to “double down on being human,” which I find a fascinating turn of phrase, because not all of us are considered human in the same way. In the world-building that you do, what does it mean to you to be human?

JL: Well, that’s, you know, a huge question. In all of history and all cultures and all families and relationships and just everywhere, there’s apparently a tendency for humans to put one another down in order to feel whole. It’s a nasty business. One imagines it has something to do with evolutionary origins, or with some cruel side effect of some kind of system, or maybe it’s just part of the challenge that makes our lives vital that we have something that we need to overcome. I don’t know quite what the best way is to think about it, but at any rate, it’s there.

And so a long time ago I came up with a formulation for this tendency called “the circle of empathy”: the idea that you draw a circle around yourself, and the things inside the circle are the things that you consider to be worthy of respect and the things that you give faith to, and then on the outside of the circle are those things that you don’t, like maybe rocks or bricks or something; I don’t know. Stuff where if you had to do an ethical calculation, you would say, “I’m willing to see three bricks smashed to save a human” or something like that, so the circle delineates that distinction. The problem is that people don’t always have the same circle, and the circle is always blurry and sometimes quite difficult to resolve. A lot of issues have been rooted in the question of whether certain classes of people belong inside the circle or not or are maybe somewhere on the edge. Classically, in the US, of course, it was slaves and the descendants of slaves who were put outside the circle, and also Native Americans. And that’s a particularly dramatic example that, unfortunately, we have and continue to have, but there are things like that all through history and all over the world.

The twist lately in tech culture is this idea that computers can belong inside the circle. That the computer itself is becoming a living thing. And the problem with that, I think, is that it’s a stealthy way of actually continuing the exclusion of other people. Because if you’re saying that the computer deserves to be inside the circle, what you’re saying is essentially that it’s the people who own the computer that deserve to be more inside the circle than the people who don’t own the computer. I think it’s a stealthy form of supremacy politics. The better way to do things is too complicated to describe in this conversation. But I’ve come to a position that I don’t think a lot of people agree with but I think more will: it’s a process of elimination that leads you to the idea that you have to treat the very existence of information, the existence of data, as a form of human labor, and you have to say that human preferences and human examples and so forth that drive artificial intelligence are the new kind of labor that people need to be compensated for. In a strange way, it’s a radical expansion of capitalism, but the reason for that is enfranchisement. 

Let’s continue with the example of the descendants of African slaves who were brought here. After the Civil Rights Act, the legacy of slavery continued through disenfranchisement: for instance, through the redlining of neighborhoods. And so the thing is, even if on paper people have rights and are protected and there are laws and regulations, whatever—ultimately, it’s those who really participate in the ownership of the society that benefit from the society. And if you have a data-driven society, people who don’t have rights to their own data are disenfranchised. It’s really as simple as that. Well, it’s not; it actually gets quite detailed and complex. If we wish to have a form of computer science that doesn’t disenfranchise people, we must have a moral, inseverable right to our own data that has value in whatever form is active in society. And if the society is a market society, with money, then people need to get money from it. And if it’s some other thing we evolve into, they need to fully participate as first-class citizens in whatever that thing is, some weird blockchain post-money future, whatever it might be. I think the name of the game is enfranchisement, and you have to be futuristic about enfranchisement; you can’t be antiquarian.

TM: If nodding made a sound, I’d be deafening you guys right now. I agree with that 100 percent, everything you’ve said. I’ve got a very good friend of mine, a very excellent academic and sociologist, Karen Gregory at the University of Edinburgh, who coined the phrase “Big Data, like Soylent Green, is made of people.” And that’s really the essence: the data that we’re using to build AIs, run advertising networks, influence corporate and governmental policies—and it’s blurred between those last two—is all born from our labor, whether we’re aware of it or not. I have concerns. It’s important to value that, like Jaron says, and like you said. I worry about the relationship between our data and our labor, and free market situations suddenly sound really scary to me. It could get a lot worse. We could actually make this a lot worse if data was suddenly a commodity that you were able to sell. I worry that that would make people less aware of the impact of selling data. But giving data a value and understanding what that value is, understanding the role it plays, is vital to protecting our humanity to some extent, I think.

JL: Just to clarify: this idea doesn’t work if you can just sell your data. You have to retain moral rights to your data and license it for use only on a temporary and renegotiable basis. That’s absolutely crucial. Otherwise, the idea fails very badly.

BLVR: My research looks at what I’ve called “the production of presence,” specifically the production of marginalized communities and the ethics of accessing their user-generated content. Morality around data still hasn’t been figured out, especially when it comes to women of color: there’s this way of exploiting our intellectual labor. I would love to see how they could be compensated for that labor, and I think that if you start with the premise that user-generated content is labor and people deserve to be compensated for it, you start looking at whose data is worth more and what makes data valuable.

TM: Often we don’t understand, when we’re generating data, how we’re generating it, who’s taking it, and where it’s going. And the people taking the data: 90 percent of the time they don’t even know what they’re going to do with it; they’re collecting it just because they’ve been told to. The problem you’re describing, Nehal, is that when certain demographics and communities aren’t represented, it doesn’t mean that decisions aren’t being made about them or that decisions are being made without using their data. They’re being made with somebody else’s data, so we’re back to the inadequacy of democracy, but a hundred times worse in some ways. If you have a Ring video doorbell that contributes data to a racist local police department, you’re part of the problem, whether you’re being targeted by the police or not. 

What you’re saying about content as well is doubly true. The content stuff is quite fascinating. I’m watching TikTok at the moment, thinking, These people aren’t getting compensated for their content. And I remember when Vine went down and everybody went, “I can’t believe they took away Vine!” Well, you didn’t own the content on that, anyway. That’s the tragedy of so many amazing creators—frequently people of color on most platforms—making fantastic content: very few people are able to build a career out of that, and then that platform just goes away. We are constantly giving up our soul and our content to platforms that we don’t even own. We’re building our culture on networks that we don’t have control over. I always think of that Pharcyde quote: there’s a line in one of their songs [“Devil Music”]: “Every time I step to the microphone, I put my soul on two-inch reels that I don’t even own.” And that’s from the early ’90s, and it’s so incredibly spot-on for now.

We’re all guilty; we’re all responsible. I mean, I’m selling books on Amazon, and I’m seeing Amazon getting much more money out of it than I am. I post content through any number of platforms: Twitter, Medium, whatever. And that publishing model has been there for a while, but now it seems more efficient and more brutal, and we’ve de-professionalized the creation of content to such an extent that it should be exciting, but what it means is that it’s a free-for-all where nobody’s really able to build a career out of being an artist unless you’re incredibly lucky or incredibly commercial. And it’s a real problem; it’s something we need to address.


BLVR: I’m curious about how the devaluation of personal data affects our perceptions and access to privacy.

JL: Privacy is a strange word. I think people use it without quite knowing what it means. We have the sense of violation from differential privacy, meaning that Google can derive information about you that you don’t even know yourself by comparing a lot of detailed data streams. Google might be able to diagnose a medical condition you have, for instance. And there’s something that feels like a violation in that. Mostly, people are offended and concerned by differential privacy—I think rightly so—and worried about what that means, because it’s ultimately a form of the loss of power or the loss of self-influence. There’s another sense in which the loss of privacy means working for free, which is what we were talking about before, which is if your data is taken and your data is labor, then you’ve worked for free and you’ve also been stolen from. I think that’s a little less articulated but it’s equally true. There’s an even deeper sense of the loss of privacy, which is the right to self-definition. I started by talking about the way people seem to have this frequent need to put one another down in order to feel OK about themselves, and one of the ways that’s done is by refusing to let people name themselves. And I think one of the minor but still important first signs of respect is to allow a person to name themselves and then use that name rather than impose one upon that person. When there are algorithms calculating who you are and what you want and what you’ll see, you’re essentially letting someone else name you.

You’ve mentioned Twitter a little bit, and I think Twitter deserves some special attention if we’re talking about Black culture. Because Black American culture is overrepresented on Twitter to an extraordinary degree, or at least it was a couple of years ago. I haven’t seen the more recent data; maybe that’s shifted. The thing about Twitter is that its architecture is based on whatever perturbs people the most, and that gets amplified the most. This is the “engage/persuade” model. Since it’s been criticized, that language has been somewhat abandoned, but that terminology was used in courses—at Stanford, for instance—that taught people how to enact that model. And the thing about it is that it tweaks people’s fight-or-flight responses; it makes everybody just a little more paranoid and irritable, which I think empowers them, but also it’s an invitation to third parties. For instance, Black Twitter’s been repeatedly defrauded. Russian fraudsters created a load of fake accounts—and continue to do so—specifically targeted at Black users—with the apparent desire to excite those fight-or-flight responses and to create divisions. Ultimately, it’s disempowering. It’s a way of getting people riled up in an irrational manner, which actually takes their eye off the real ball and makes them less powerful, even if there might be a degree of truth in the content that they’re getting riled up about. Their overall cognition is somewhat derailed by the process—not entirely; only to a degree. I don’t know what to do about that, but it’s something that concerns me a great deal. In a way, it’s not my business. Who am I to comment on Black Twitter? That should be left to the people on it. But I have to say that from my perspective, it appears to be something that is disempowering overall, even though it might feel empowering in the moment.

This is something I’ve written about a lot. There’s this initial phase I call “the magic carpet ride.” For instance, when Black Lives Matter was amplified on social media like Twitter, it felt like this incredible amplification, this rocket ship or magic carpet. But the algorithm doesn’t actually care about who anybody is or what their causes are. All the algorithm wants to do is to maximize the engagement. So just a little after that initial “magic carpet ride” phase, as the content that was uploaded by Black Lives Matter was sloshed around experimentally, the people who were the most riled up by it were the people who got amplified the most and introduced to one another. So all of a sudden, all of these racists were introduced to one another and then reinforced and reinforced and reinforced, and you had this revised Ku Klux Klan and Nazi movement, which we saw in Charlottesville, and that was what the algorithm made out of Black Lives Matter. So the thing is, it feels like it’s helping you, but it’s actually hurting you, and that’s universally true. I don’t know what to do about it at this point, because I almost feel like when I point it out, I’m pulling that magic carpet out from under the people who believe in it and are comforted by it, and yet I think it needs to be said anyway, and it’s difficult. 

BLVR: If you look at a tweet as a digital object, you can assign it agency. It then interpellates people into different groupings based on how they respond to it. Some tweets, like those using the hashtag #BlackLivesMatter, bring activists together, and also bring white supremacists together.

JL: There’s a ratio question: Does it do more for the bad people who come along or for the good people who started it? And the problem is, it does more for the bad people. If it was even-handed, I think we could live with it more, but I don’t think it is, because the bad people are even more excited by paranoia and irritability than normal people, and so it brings out the worst.

BLVR: Isn’t that a factor of the concentration of power?

JL: That’s an interesting question, and I think it varies in different parts of the world. For me, the more immediate issue comes from cognition. Let’s say the message within Twitter gets amplified to the degree that it exercises the fight-or-flight responses of people, and the fight-or-flight responses in the diffuse version become paranoia and xenophobia and irritability. If we think that those qualities can be constructive, that’s great, but on average, they’re not. Overall, they create distance between communities, they create dysfunction in society, they create unhappiness, and they create poor policy and poor priorities. Because that’s the nature: those people who are more vulnerable to being excited in that way tend to not be the most constructive individuals, and so you tend to have the rise of this particular personality type, the irritable, xenophobic, paranoid personality type. We’ve seen the rise of that personality type in politics all over the world at once since these platforms became powerful: Bolsonaro, Duterte, Trump, many others. It’s a little different from the personality type of the strongmen of the past, although you can find it in some of the examples from the past. The strongmen of the past tended to seem invulnerable. And the new ones are like weird little whiny children. People identify with them because that is the personality that comes out of the use of things like Twitter. And that particular personality type is that of the reactionary communities, like the new Ku Klux Klan and Nazi Party in the US. And that’s brought out through this particular cognitive strategy that is available to them.

There’s so much to say about this. Interactive digital networks were originally implemented to enact this. The very first person to put someone in front of a screen that was digitally connected to other computers was B. F. Skinner [American psychologist]. Very weirdly, this is in the DNA of networks, but it doesn’t have to be. I think it definitely interacts with preexisting inequities, as you suggest. For sure that’s important, but there’s this other cognitive inequity that I think is even more important and not entirely new in the world. None of this is entirely new in the world, but in this kind of concentration and magnitude, it is.


BLVR: Jaron, you’ve done so much with virtual reality, which plays with these boundaries and borders in intimate and sometimes invasive visceral ways. Could you comment on how the more everyday boundaries between online and offline, virtual and material, are now becoming super blurry?

JL: This is a matter of some concern. We were talking right at the start of this conversation about the role of optimism and pessimism, and I’m looking for optimism where I can find it. And I’m gratified that there aren’t reports of more people who have died from injecting Lysol or something in response to Trump’s exhortation to do so. I think it indicates that there’s a little bit of connection to reality and common sense that persists out there. You might say that’s fighting too hard—maybe I protest too much in my quest—for it to be optimistic, but I still think that’s something. I’m going to count the small victories where I can find them. I want to use that as a starting point to say that as riled up as people have been by the malicious use of the information world—I’ll use that as the term so as not to confuse it with virtual reality proper—I don’t think we’ve totally lost the distinction yet, otherwise there’d be more people who would have died from injecting Lysol. I will say that of the people who seem to have lost a little bit of common sense about the distinction between the world of artifice and something that is more real and more important, I’m concerned about journalists as a class. There are so many journalists who have been made vain and silly by Twitter. I don’t know what to do about that, and it’s hard to quantify, but it’s just something I’ve observed quite a bit. Of course, there are many ordinary people who have not; however, journalists as a class, a very large proportion have, and I worry about them or us. I worry about intellectuals and cultural life being particularly vulnerable to the negative effects. And that’s what I’ll say for now.

BLVR: Tim? You wrote a whole book about this, in a way.

TM: Kind of, yeah. I wrote a whole book in which the internet disappears, and at the beginning of this crisis a lot of people were saying to me, “Holy shit. I just finished reading your book and it feels very realistic.” We had all that panic about the supply chains collapsing, and we might not have seen the worst of that yet. It feels like instead of the internet disappearing and causing the collapse, the collapse has come and we’re trapped in the room with the internet. For this current moment, it’s fascinating, that absolute, complete dependency that we have on the internet right now. It’s not like we’re online just to pass the time. We’re doing it to survive.

So our usage has changed in both quantitative and qualitative ways. And the boundaries are blurring in interesting ways. A lot of my work is about complexity and systems and how they lead us to believe in things that aren’t true essentially and to vote for leaders like Trump or Boris Johnson or Bolsonaro or whomever. The complexity has given the illusion of us losing control, so we vote or we turn to terrible ideologies that seem to promise to give us control back. I think we’ve seen that in the anti-lockdown protests to a certain extent—they sum up one part of the population’s concern that they’ve been locked out of things by being locked in. Whereas I think another part of the population is trying to wrestle with the stuff we’re talking about now: Well, I’m online but what does that mean, and what does it mean about what I’m seeing or hearing and what I’m producing?

As someone who works as a futures writer, I constantly have to be online in order to know what’s going on. It’s the horrible fucking Faustian deal I’ve made, where this thing is set out to make me upset, like Jaron was explaining earlier. And often with good reason. That’s the constant struggle that’s in a lot of my work.

BLVR: I do fear the culling too. What spaces of resistance are you finding right now?

TM: That’s a really good question, and it’s one I have literally just been thinking and talking about. I wonder what the impact would be if those who create content online also went on a general strike for a week. That would be fascinating to see. But the problem is that organizing something like that would be hard, because we have this lack of class consciousness in mainstream society. Fingers crossed: let’s hope the impact of this is going to raise class consciousness. There’s some interesting stuff, like images of people protesting while standing two meters apart in a grid format: it’s actually really visually impressive and it really makes an impact, but it’s tricky. But there are other things we can do. Even a consumers’ protest for a week where we don’t buy stuff. If we could somehow convince everybody not to use Amazon for a week, it would definitely send a message to Bezos.

BLVR: I’m wondering, because you’re two cis white men who talk about this a lot, and I’m a Black woman who talks about this a lot: Do your answers to me change in any way? Do you consider things differently based on whom you’re talking to?

TM: I must. If I was going to give a smartarse answer to it, it would be something along the lines of, I much prefer talking to women of color than talking to white men about this stuff. Because invariably when I try and have a conversation with white men, it’s a battle for me a lot of the time, trying to get through the basics about privilege and entitlement and how capitalism works. White men have internalized negative feelings associated with that information, and they see it as an insult to them. They see it as an affront to them straightaway: “Well, you don’t know anything about me!” But I fucking do.
