An Interview with Jessica Bruder and David Blei

Meehan Crist

There’s a black door at street level. Open the door, go down the stairs, and take a left down the hallway. At the end of the hallway, you’ll pass through heavy red drapes and emerge into Caveat, an event space. A library runs along the wall across from the bar, and the room between is filled with round tables facing a stage set with three leather armchairs, a rug, and a table with an improbable houseplant. This is where I host Convergence: A Show about the Future. The live show is new, as is the space. It’s all an experiment.

Convergence brings together two people from vastly different fields to explore how an emerging science or technology will affect culture, policy, and politics in the near future: not five hundred or one thousand years from now, but five, twenty, fifty years from now. Our lifetimes and those of the next generation. Each show is centered around a question, and often the guests know very little about each other’s fields, which tends to make them nervous, which I sort of love. One evening last fall, we brought together labor journalist Jessica Bruder and machine learning researcher David Blei to ask: how is machine learning (really) going to affect labor?

Jessica has written about labor and economic justice for publications such as The New York Times, Harper’s, Wired, and The Nation, and she has just published a book about the new American labor force of transient workers and the dark underbelly of the American economy called Nomadland: Surviving America in the Twenty-First Century. When we first started emailing she was driving a camper across the country, busy getting the windshield fixed and finding new tires while also on deadline at Wired. Her messages crackled with the same wit and energy she has in person. David is a professor of statistics and computer science at Columbia University, and a member of Columbia’s Data Science Institute, whose mission is to train the next generation of data scientists and develop tech that serves society. He has a wonderfully gentle, self-deprecating affect that runs counter to every cultural stereotype of computer programmers and algorithmic thinkers you could possibly imagine.

I wanted to do this show because recently there has been a lot of panic and no small amount of hype about how the robots are coming for our jobs. There’s a 2013 study by a team at Oxford on “how susceptible jobs are to computerization” that gets cited a lot, and it estimates that “about 47 percent of total US employment is at risk.” So that’s a little nerve-wracking.

But then you hear what seems to be the exact opposite, that unemployment numbers are at an all-time low—in some states you have rates of 2.3 or 2.4 percent: everybody has a job, everything is fine.

Meanwhile, workers in the US and around the world are clearly struggling in a labor economy that’s increasingly marked by precariousness and unlivable wages. And on top of everything, you have this huge boom in machine learning and artificial intelligence. The rate of discovery is accelerating: research that could lead to more automation is moving forward at an unprecedented pace, which is super confusing. Where are we now, how did we get here, and what is the future of work?

         —Meehan Crist

I. THE KIVA FIELD

THE BELIEVER: You recently went undercover in an Amazon factory?

JESSICA BRUDER: I did. Or a fulfillment center, as they call it. But before I was interested in that, I was looking at collisions between labor and automation in some other ways, like UPS putting sensors all over its delivery trucks to track everything a worker’s doing, from when you sit down to when you leave, and just how basically tethered all the workers are to these handheld devices that are telling them how many seconds they have to make the next delivery, how to drive, how to breathe… I mean, I’m exaggerating, but you get the idea. That kind of surveillance is also used in Amazon’s warehouses, where one of the hardest jobs is being a “picker.” Those are the marathon jobs you hear about, where people walk for miles on concrete floors holding barcode scanners that function as their digital overlords, telling each worker: Go this way, go that way, you have X [number] of seconds to the next pick—and each pick represents an order coming in that needs to be shipped out as quickly as possible.

BLVR: Can you give us a picture of what the warehouse looks like?

JB: The warehouse I went to was actually a pretty modern one, which was interesting. Previously, I’d only interviewed workers from what they call the “legacy warehouses,” and “legacy” is corporate slang for “old.” So I was in one of the new warehouses, which use robots from Kiva Systems, a company Amazon purchased. They look like giant 350-pound Roomba vacuum cleaners moving the shelves around. So this is a warehouse that is the size of many football fields. When you’re in there you’re restricted to designated walk paths. People actually were comparing it to being in jail, just because behavior is so regimented. You can’t bring anything in with you that’s sold in the warehouse. There’s a TSA-style security station on the way out. In every restroom stall there’s a color chart. You are instructed to compare the color of your urine with the color swatches on the chart to determine whether your hydration levels are appropriate, which kind of blew my mind. There were murals on the wall with an orange blob called “Pecy: Peculiar Guy.” Pecy imparts bits of wisdom like “Problems are treasures” and “Variation is the enemy.”

BLVR: How did it go with the machines and the people working together?

JB: So this was really interesting to me, because before I had primarily interviewed people who were in the old-school warehouses, where you’re rolling what they call a baker’s cart. There are no baked goods involved. You are just rolling a cart of merchandise among rows of shelves, and either stowing incoming merchandise or picking items for outgoing orders. But in this newer warehouse the robots bring you the shelves. So workers stood at fairly isolated stations around what they would call the Kiva field, and the Kiva field is oddly quite dim because robots don’t need light to see, and it’s fenced in. It kind of had a cagelike feel. I was feeling like we shouldn’t feed the Kivas, which is accurate. And when orders came in, instead of workers walking to designated shelves to get the merchandise, the robots would bring the shelves to the workers. I was in quality assurance, so rather than helping me fill orders, the robots were bringing me things to count and check, creating a layer of human redundancy on the automated system. Things often seemed to go haywire. One night I got instructed to count the same twenty boxes of patchouli-scented incense over and over. The robots just kept bringing them back. The smell stuck to my hands all night. I felt like I was in college again and I couldn’t escape the smell of nag champa. One of my coworkers teased me about it after our shift.

We heard these amazing stories—there was this one time a robot dropped a can of bear mace and another one ran it over and then they had to evacuate the whole warehouse, so we were instructed to keep an eye out for bear mace—that’s contraband—and then [one] day somebody was saying “I got bear mace!” It had come to his station on one of the Kivas and nobody really knew what to do about it. So there was a lot of kind of… Charlie Chaplin chaos happening. You heard about the Kivas bumping into each other; you heard about them somehow accidentally making a wrong turn and leaving the enclosure.

BLVR: But they are doing jobs that would have been jobs that people did before, right? Someone would have driven a forklift somewhere…?

JB: Well, we would have been walking among aisles of shelves, pushing baker’s carts. Now the Kivas have displaced that labor, demanding more from other parts of the body. Instead of walking billions of miles, I’m doing a lot more standing and squatting. Picture each task: before, every time I had to count something, I had to walk there and count it. If the robots bring it to me, I can do a lot more counting, but I’m also doing a lot more of what I’d do every time I reached the right shelf: standing up, reaching in, pulling it all out. And a lot of the workers I’d been studying were at or nearing traditional retirement age. They’d been hired through an Amazon recruitment program called CamperForce, which hires nomads, folks who live full-time in RVs, campers, travel trailers, vans, a few cars.

BLVR: And by “nomads” you mean mostly people who are retired and living on the road? Who maybe lost their savings in the 2008 crash?

JB: Yes. They don’t have houses, but please don’t call them homeless. They call themselves “houseless.” All sorts of stories. I mean, I was talking to one guy who was in his seventies. He had spent most of his career working as a mechanic in copper mines. His knees were shot, so for him, having the physical labor displaced from lots of walking to lots of kneeling was probably not the best thing.

BLVR: So in terms of “the robots are coming for our jobs,” there are some new robots doing some of our jobs, but we’ve heard this before, right? The idea that the robots are coming for our jobs is like a late ‘60s, early ‘70s thing. So what did the alarm sound like then?

JB: From a modern perspective, the way they reacted to the robots back then is either very funny, or very sad. I’ve got a collection of archival news stories from midcentury that seem so beautifully naive now. So many people thought the robots were going to take our jobs, but the prevailing attitude was utopian rather than dystopian. The idea was “the robots are coming for our jobs, which is great because we can eat bonbons and the robots will do everything. So what’s the next big problem?” There were serious scholars who were deeply worried about a crisis of ennui. I’m not kidding. There was a journal called Leisure Studies that came out around that time, trying to fill this gap: what will be the purpose of the human when we make ourselves obsolete?

BLVR: But that didn’t happen. What happened?

JB: A funny thing happened on the way to now. It rhymes with income inequality. Back then, they didn’t imagine that the spoils from the gains in productivity would be distributed so unevenly. And so they missed our dilemma: folks need to have enough income to buy the bonbons before they can hang out and eat the bonbons while the robots are doing all the work. That’s to say: ennui really hasn’t been that much of a problem. So it’s both depressing and kind of hilarious to go back and read all of these headlines from midcentury. In one article from Leisure Studies, researchers were funded to show people nature scenes while monitoring the dilation of their pupils—apparently, dilation can indicate pleasure. They wanted to see how people reacted, whether such scenes might satisfy the human spirit.

BLVR: To keep people busy? Joyful?

JB: To keep people joyfully busy. To help them find purpose in the post-work era.

BLVR: That doesn’t sound terrible.

JB: It’s wonderful and sad. We didn’t get there.

II. HAM

BLVR: David, I feel like there are a lot of terms getting thrown around now—AI, machine learning, data science—and they seem to be used kind of interchangeably, and maybe some overlap with the others? Can you tell us: what is the difference between data science, AI, and machine learning?

DAVID BLEI: AI is a goofy idea that we can build computers that seem intelligent, or that are intelligent (depending on how goofy you want to get), and actually this apparent difference is a big debate for AI researchers. Do they seem intelligent or are they intelligent? But my perspective is: I don’t think either will happen in one hundred years or five hundred years or one thousand years, or maybe ever.

So AI is fun for science fiction and for philosophical digressions and it’s a good bait and switch to get kids to study machine learning. Anyway, that’s what it was for me. I was nineteen years old and I read my dad’s copy of Gödel, Escher, Bach, which is Douglas Hofstadter’s inspiring book about logic and music and AI and… recursion! It’s totally inspiring. I had dreams about it! And I thought, Oh my god, I’m gonna do AI. I’m gonna build intelligent, conscious computers and they’re gonna date me and stuff.

But when I really got into it, I got deep into this beautiful field about the mathematics behind analyzing data, and that’s machine learning. So machine learning, if we put AI aside, is a set of methods that ingest data, find patterns in the data, and then use those patterns to do something.

BLVR: Can you give us one example of what that looks like?

DB: A great success story for machine learning is the spam filter. And it’s also a safe subject, because the spam filter did not displace human email readers who read our email and decided if it was spam. It was just a good machine learning application that ingests email, finds patterns in the email related to “spam” and “not spam,” and then filters out the offers of prescription drugs for us and puts them in the spam folder. And it doesn’t always get it right. This is the blessing and curse of machine learning.

BLVR: So how does it actually learn? As far as I understand it, there are three kinds of machine learning: supervised, unsupervised, and semi-supervised. What are those, and how does your work fit in?

DB: Right. So let’s take the spam filter, to begin with. That’s supervised learning, and what happened is places like Google or Hotmail or Yahoo! or now-forgotten email companies got big collections of real email and they had people mark those emails as “spam” or “not spam.” (They called “not spam” “ham,” which makes sense and is funny.) Then the machine learning algorithm takes in all that data—that is, emails and whether they are spam or ham—and learns the patterns of spam. From the data, it discovers that spam talks about mortgages and prescription drugs and contains all-caps and so on.

BLVR: But you need someone to have marked it first?

DB: Yes. You need someone to mark it. With the marked emails, the ML algorithm discovers the pattern attached to spam and the pattern attached to ham. Now what happens? So the machine learning algorithm learns those two patterns and then we deploy the email system and all of a sudden you start getting emails, and the algorithm looks at your emails—notice these are unlabeled emails now—and it decides that this one looks more like spam, this one looks like ham, and then it filters in that way. Spam filtering is classical supervised learning.
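To make that concrete, here is a minimal sketch of the kind of supervised spam filter Blei describes, in Python with the scikit-learn library. The emails and labels are invented toy data; a real filter trains on millions of labeled messages, but the shape is the same: learn patterns from marked examples, then classify unlabeled mail.

```python
# A toy supervised spam filter: learn "spam" vs. "ham" patterns from
# hand-labeled emails, then classify new, unlabeled ones.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hand-labeled training data (invented for illustration).
emails = [
    "LOWEST prices on prescription drugs, act NOW",
    "refinance your mortgage today, zero money down",
    "lunch tomorrow? the usual place at noon",
    "draft of the report is attached, comments welcome",
]
labels = ["spam", "spam", "ham", "ham"]

# Represent each email as word counts; keeping case lets the model
# notice that all-caps words tend to show up in spam.
vectorizer = CountVectorizer(lowercase=False)
X = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(X, labels)

# Deployment: a new, unlabeled email is filed by the learned patterns.
new_email = ["cheap mortgage rates, act NOW"]
print(classifier.predict(vectorizer.transform(new_email)))  # ['spam']
```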

BLVR: So what is unsupervised learning?

DB: Unsupervised learning is where we don’t have people labeling things. That is my field of research. We have a big mess of data and it has no structure and we want to find patterns in that data and use those patterns to do something. A lot of my work is about ingesting big collections of text documents and finding patterns in those text documents—these patterns look like themes or topics—and using them to do things like improve searches or build navigators. And the kind of work we do, this work on ingesting text, is used by people like sociologists and historians who want to navigate big archives of text collections, but don’t necessarily have them labeled in advance.

For example, I work with the historian Matt Connelly, whose job is to gather primary sources, read them, and then form theories and stories about what happened. But Matt’s primary source is two million declassified cables from US embassies in the 1970s, and he wants to find the interesting cables. He can’t read two million cables. He’d be retired by then. So he uses these methods to do that.
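Blei’s best-known contribution here is the topic model, in particular latent Dirichlet allocation (LDA). Below is a minimal Python sketch using scikit-learn’s implementation on four invented toy documents; an archive like Connelly’s would involve thousands of documents and many more topics, but the mechanics are the same: no labels go in, and themes come out.

```python
# Unsupervised topic modeling: no labels, just raw documents.
# LDA finds recurring themes ("topics") as groups of co-occurring words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for an unlabeled archive (invented for illustration).
docs = [
    "embassy cable reports trade negotiations with ministry officials",
    "ministry officials discuss trade tariffs and negotiations",
    "harvest failed after the drought, grain prices rising sharply",
    "drought and grain shortages push food prices higher",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Ask for two topics; LDA infers which words belong together and
# how much of each topic each document contains.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {top}")  # roughly a "diplomacy" theme and a "drought" theme
```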

Another example of unsupervised learning is recommendation systems. When we log onto Netflix and it shows us what movies it thinks we’re going to like, or when we see an algorithmically generated feed on Facebook, this is unsupervised learning. The algorithms are looking at patterns of people’s watching or reading history and then using those patterns to infer our preferences and then to suggest to us other movies or Facebook posts that reflect those preferences. Unsupervised learning, in particular, has become important in recent years because we have so much unstructured data.

III. THE WORLD THAT WE JUST CREATED

BLVR: The things you are both describing—looking through giant data sets and training a robot to do certain tasks—[present] very, very different kinds of problems. Training a robot to be able to recognize something, to know where its arm is in space, to grasp something and take it to the right place, is a very different scientific [task from] making categorical sense of massive data sets. But I think they are connected because they are tasks that can be automated by using algorithms in order to make work go away for humans, right? I’m curious what you both think about where we are now with the kinds of work that are being automated, or how they’re being automated, or who’s deciding what to automate.

JB: I’m fascinated that, for Amazon, one of the sticking points where human meat is still required—and they’re trying to narrow that gap as quickly as they can—is in what they call “picking.” That’s taking products off shelves to get them ready for shipment. The way they organize merchandise looks chaotic to the outside eye. I mean, I think the initial idea was if you have a bin that has three dildos, two fishing hooks, five books, and some cosmetics, and you’ve got all sorts of different products in different bins scattered through the warehouse, it’s more likely that a “picker” can get to any one bin with a desired object more quickly. It’s not like you’ve got merchandise shelved by category. So one of the challenges Amazon is dealing with now, even with these amazing schlepping Roomba things that shuttle around—they’re actually kind of cute—is that the company still hasn’t found a way to replace the human picker. They’ve had contests to try to get something to replace the human picker. Something that just reaches into a bin, takes out the objects, and can eventually supplant the human labor that now accomplishes such mundane tasks. I remember counting out, by hand, 125 thirty-five-dollar AMC gift cards. This is really a job that is better suited to automation, right? This is probably not the best use of human time. So those people will be replaced. In general, the number of tasks necessitating the movement of human meat will continue to shrink.

DB: Right. Twenty years ago, when I graduated from college, it was unimaginable that computer science algorithms—particularly ones based on data, which is part of what makes this whole thing complicated, because we are creating the data that the algorithms learn from and then mimic—would be deployed all across the world. So it’s true that computer scientists aren’t used to thinking about countrywide or global effects of deploying their algorithms, and now we are starting to. There are conferences on fairness and transparency in computer science. Let’s take robots that have fine-grained motor control. It’s true that those robots, if we had them, could replace many, many, many workers, and that might have a terrible economic impact and an impact on people’s lives that we as a society don’t want. So that’s a norm we have. But at the same time, those robots could be built out of some kind of fireproof material. (I’m just making this up; I don’t know.) And these fireproof robots could save people in places where firefighters can’t otherwise go. We computer scientists are very optimistic. When we build robots, we think about the robot going into a burning building to save a kid. I remember watching early robotics research videos, which were about how the robot’s going to deliver a bagel to Leslie—

BLVR: Maybe Leslie really likes bagels. It could be a great outcome for Leslie.

DB: To be clear, this is Leslie Kaelbling—she was my undergraduate thesis adviser and she is a robotics and ML expert (and an amazing scholar). And of course the CIA wants the robot to deliver the poison to whoever, but the computer scientist doesn’t think of it that way—they think of delivering bagels to Leslie. In other words, these are tools. They can be used in many ways. And whoever is deploying the algorithms to millions and millions of people is accountable for what those algorithms do. Now, I’m not sure that the person who builds the algorithm, but who doesn’t necessarily decide to deploy it, should be accountable, but this is now out of the realm of my expertise.

JB: There’s a tangible equivalent too. Have you heard about the exoskeletons for workers in Japan? As we know, Japan has a rapidly aging population, and there are many tasks the body does not particularly favor as it ages, including lifting, reaching, and stretching, that still need to be done. So there are companies building exoskeletons that older workers can wear to lift more stuff at the airport and at some of these hauling companies. And it’s really funny, because I see the technology and I think, They really wanna wring every last drop of productivity out of aging bodies. To me that looks pretty dystopian, but similar technology is also being used to help wounded war veterans walk again, and lift and move again. So I don’t think technology is neutral. It’s built by humans; the algorithms are written by humans. The data that we feed them, you know, it can be garbage in, garbage out. You can have different plans for the same kind of machinery, and it’s the robot taking that benevolent bagel to Leslie or a poison bagel to a CIA target. Those decisions are all about people.

BLVR: What has been automated thus far, and what has happened to American labor because of that? I mean, it seems like the logical conclusion to this would be that all the jobs that are really, really easy to do should be slowly going to robots, and jobs that have higher cognitive loads would be what’s left for humans to do, right?

JB: Unless, sadly, people remain cheaper than robots. Our federal minimum wage is still $7.25 an hour, which is pretty ludicrously low.

BLVR: And you have people crossing borders undocumented and working for even less than that. It seems like what has actually happened is the middle class is getting hollowed out. There’s more and more labor on the low-skilled and low-wage end, and there is, in fact, more labor on the higher cognitive load end, but there’s less of it. And the work in the middle is shrinking. If you used to have a factory job where you got paid seventy thousand dollars a year or even forty thousand dollars a year, and you had a retirement [plan] and benefits, and you worked at the factory your whole life, you could then retire and your elderly years were pretty safe, you know? You’ve done your work. Those workers are now the people that you seem to be encountering with CamperForce, who are doing these more-menial, sometimes backbreaking jobs because they didn’t have that kind of stable work in the middle class available to them. When we automated all of those jobs in the late ‘70s and early ‘80s, the new jobs that showed up were service jobs, right? A kind of work that it seemed couldn’t be automated. As some kinds of work got automated, there have been these waves of different kinds of new work appearing. Once you automate away the factory jobs, well, then suddenly there’s all these service jobs and you can do those jobs, then those wages get depressed, and now even those crappy service jobs that at least came with health care and benefits aren’t available, and you have this new precarious labor market where you don’t have benefits. Uber, for example.

JB: If you look at CEO pay versus typical worker pay, I think it’s something like 271 to 1 now. And it was 20 to 1 in 1965. So it’s mind-blowing when you look at the changes. I mean, I know a lot of this stuff is kind of buzz-phrasey, like, Ooh, inequality, but there’s really a substantial difference that’s happened in the culture in recent decades and it’s not sustainable. The center cannot hold when it comes to this, and that’s what we need to figure out. In my mind, that’s a more pressing question than “Are the robots going to eat us?” I think the robots can help us; we just need to point them in the right direction.

BLVR: So where are we now in terms of the kinds of jobs that are getting automated and the kinds of work that are going to disappear in the next five to ten years?

DB: One thing to remember about robotics: progress is much slower than the press might make it seem. I’m not a roboticist, but I go to many talks by roboticists. What you see is very impressive if you understand what it took to get there, but [it’s really only] a video of a robot with pincers holding a sheet, and then they obviously waited thirty minutes and it has now folded the sheet in half. And then time passes and now it’s in quarters and then everybody applauds. And then there’s sixteen slides of the math about how it worked. And, to be clear, it’s beautiful math. So if Amazon could make all of their products consistent, put them in boxes whose barcodes the robots could read, and have the robots go in and grasp them, then they might be able to support a robot-only warehouse. The picker’s job should be paid fairly and come with benefits, but I don’t think that job is going to be automated anytime soon.

Another set of jobs we often imagine being automated is driving: Uber drivers, taxi drivers, truck drivers, delivery people. And that, too, I think, is a long way away.

BLVR: You know, one can say that this shift has been going on for decades, right? Since the late ‘70s or early ‘80s there has been a decimation of jobs of the kind you have been talking about, Jess: manual labor. One could argue that there wasn’t a hue and cry in the press because no one was coming for the higher-level cognitive jobs yet, and all of a sudden, with advances in machine learning, the lawyers and the bankers and the journalists are feeling the squeeze from automation, and now suddenly there’s this new robot apocalypse. One could say that maybe the next level is something like the actual underpinnings of the economy, that we’re automating more and different kinds of jobs to the point of automating the thing that’s running all of this, that we could be running, but that is kind of running us.

DB: If we’ve made our whole world electronic and systematic and then we suddenly have algorithms that are operating and affecting hundreds of millions of people, what are the broader effects of that? Of that world that we just created? You know, here we are: computer scientists making algorithms and writing math papers about them. Then suddenly Netflix deploys matrix factorization—that’s an algorithm for recommending movies—and shows you that you want to watch Buffy the Vampire Slayer. They deploy matrix factorization to 100 million people, and that means 100 million people are deciding what to watch based on matrix factorization and whatever its properties are. Netflix is one thing, but the news is something else, right? Computer scientists have started to think about this. Just to make it even scarier: we’re talking about watching movies or clicking on news articles, but what if we used predictive algorithms to decide who gets a mortgage? This shines a light on the problem, because if the past data that we’re using to decide who gets a mortgage is based on racist mortgage brokers, or—I can’t remember who decides; maybe the bank loan officers—then we’re gonna build a predictive algorithm that has those properties. So what computer scientists have started to do is to think computationally about what it means to be fair and what it means to be biased. For, like, fifty years, theoretical computer scientists have taken algorithms and figured out: how do I decide how expensive that algorithm is, how long it’s going to take to run on my computer? They really focused on that for many years. Now they’re focusing on, OK, we know how to measure that, so how do I measure how biased it is? How do I measure how fair it is? And these are fuzzy concepts. You need sociologists and political scientists and policymakers to help decide.
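Matrix factorization itself can be sketched in a few lines. The idea is to fill in a mostly empty user-by-movie ratings matrix by factoring it into low-dimensional “taste” vectors for users and movies; predicted ratings for unwatched movies then drive the recommendations. Here is a minimal Python version with invented ratings and plain gradient descent. Netflix’s production system is far more elaborate, but this is the core mechanism Blei is pointing at.

```python
# Matrix factorization for recommendation: factor a sparse user-by-movie
# ratings matrix into low-dimensional "taste" vectors, then predict the
# missing ratings from their dot products.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([           # toy ratings, 0 = unwatched (invented data)
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
mask = R > 0             # only observed ratings contribute to the loss

k = 2                                  # latent "taste" dimensions
U = 0.1 * rng.standard_normal((4, k))  # user factors
V = 0.1 * rng.standard_normal((4, k))  # movie factors

for _ in range(2000):                  # gradient descent on squared error
    err = mask * (R - U @ V.T)
    U += 0.01 * (err @ V - 0.02 * U)   # small L2 penalty keeps factors tame
    V += 0.01 * (err.T @ U - 0.02 * V)

# Predicted scores for the unwatched (zero) entries become suggestions.
print(np.round(U @ V.T, 1))
```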

IV. THE “WE” PART

BLVR: So, David, let’s start with you. What’s your worst case?

DB: Right. I think one worst case is there’s too much information on the internet, and we look at not-good recommendation systems to get information and then we become siloed and then we become divided.

BLVR: I meant in the future. [Laughs.]

DB: That’s right. [Laughs.] I guess the present is here. The other worst case is we don’t take the security of private data seriously enough and then there’s a big data breach and then all our data gets exploited.

JB: [Fake-coughs.] Equifax.

DB: Moving on to the future, another worst case is that these tools are used by evil governments and other organizations to violate our civil rights.

JB: [Fake-coughs.] NSA.

DB: Oh yeah. Uh-oh.

BLVR: Jess, do you have a darker worst case for us?

JB: I worry very much that we will not solve the wage-gap problem: that the people who are running the robots, even as they talk about UBI [universal basic income], will still be so enamored of what they’ve created that wages will remain flat. That we’ll keep hearing the same tired argument: companies won’t hire as many folks if they have to pay them more. We need a living wage. I mean, it’s as simple as that. So I worry that people will lack the work they need to support themselves while, at the same time, business leaders who live in a techno-utopia will realize too late that there’s no one left to buy their products.

BLVR: I’ll throw in an extra dystopian twist.

JB: Please. Bring it.

BLVR: If this all happens faster than we are imagining it would happen, if we all get driverless cars in the next five years, our economy will collapse because drivers are, in some places, between 7 and 9 percent of the workforce. It’s mostly men; it’s mostly people of color. And all of a sudden all those jobs disappear and our economy stutters massively and everything else kind of falls apart.

JB: In the book I wrote, there’s this guy who really was relying on the value of his taxi medallion to retire. He’s in San Francisco, and Uber completely wiped them out—you can’t sell a medallion for love or money—and now he’s living in a van doing CamperForce and working at the sugar beet harvest. So again, I think dystopia’s already here. It’s just a little quieter than we expected. Mic drop.

BLVR: OK! Shake it off, shake it off. Best-case scenario. How could we, in fact, given where we are, make this into a world we would like to live in?

DB: So a naive idea of the best-case scenario: machine learning helps us become a more efficient and healthier society. Our workday becomes one to two hours shorter. We get enough sleep. And it’ll give us the information we need, when we need it, so monotasking becomes possible. Because we know that if there really is an emergency, we’re going to see it in our nice one-dimensional monotask program driven by machine learning. Machine learning suggests personalized actions that help us achieve our goals around health and life. Our own personal machine learning algorithms monitor us—again, this is the best case, so we’re not going to the worst case—and we benefit from their insights.

BLVR: Which actually speaks to what you’ve said about privacy. If we all own our own data and, let’s say, our health is being monitored, but the government and our employers and insurance companies don’t have access to it, and somehow we know it won’t get hacked, in a utopian best-case scenario you could imagine a world in which having that data and being able to control it yourself could be beneficial.

DB: Also in my best-case scenario: machine learning helps us make important decisions around loans and jobs in a fair way. In other words, it helps us de-bias society. (Now, this involves solving all kinds of problems, and we can go to the worst case from there, too, but this is the best case.) And everything that I just described happens with interpretable and transparent machine learning, so the actions that machine learning suggests have such clear justifications that when we see them, we don’t even hesitate; we just know it’s the right thing to do.

JB: Kumbaya!

BLVR: All right. Jess, best case?

JB: I don’t know how to top that. It does strike me that even the era when we had manufacturing jobs was over-romanticized. I don’t think people want to work in front of a blast furnace for a million hours a day. I think there are so many cool things machines can do for us. I’m having a really hard time wrapping my head around the best-case scenario. I’m straining here. I heard you use the word “we” a lot, and I think that’s where it all is: who’s “we”? Do you know what I mean? You look at things right now, like concierge medicine and who has access to fast broadband, and again, if we can get the “we” part right, I think a lot of your dream may follow, and I will owe you ten beers.

BLVR: All right. Two more questions. First: what can people do to stay informed? Who do you follow? What do you read? If people want to think more about this, do you have recommendations?

JB: I am completely fascinated with Amazon, because I think they’re at the tip of the spear for a lot of this. They do cloud services. They’re really creating the rails on which much of the future economy will run. And one of the smartest minds on that is Stacy Mitchell from the Institute for Local Self-Reliance. I am, like, her biggest fangirl ever. So I recommend following her, for sure.

DB: I have two recommendations for learning about machine learning and algorithms. One is a book by Brian Christian and Tom Griffiths called Algorithms to Live By: The Computer Science of Human Decisions. It’s an amazing book. It’s very clear and it’s technically correct but it’s not technical. Then the second is Talking Machines, which is an excellent podcast about machine learning and data science. It is sometimes more technical, sometimes less technical, but it’s a great podcast and a great way to kind of learn the lay of the land in this area.

BLVR: Cool. And if people want to do something, if they want to get involved somehow, if they want to take some kind of action based on whatever they’ve heard here today, what would you recommend?

JB: We talked about this beforehand and I think I just started jumping up and down and going: antitrust now!

BLVR: We can’t do antitrust. [Laughs.] What does that mean? Break it down for us.

JB: I think we’re in a weird world right now where everybody’s expected to be a purist. “Do you ever buy from Amazon?” “Are you a full vegan?” And I’m just kind of over that. Everybody’s under a lot of pressure. What I urge people to do is take your choices one by one. When you can buy local, buy local. Don’t let the fact that sometimes you might buy your ten-pound bag of dog food on Amazon stop you from sometimes doing what you can in your local community and thinking about where your dollars go. I don’t think we need to be completely one way or another, and I think the culture suggests that we have to be. Just be open-hearted about this stuff, slow down, monotask a little, and think about where you’re spending your money.
