2017, video, 6'12"
Shadow Glass is a voidLab collaboration between Jen Agosta, Sanglim Han, and Xin Xin based on an interview with Safiya Umoja Noble, the author of Algorithms of Oppression: How Search Engines Reinforce Racism (2018, NYU Press). In the book Noble coins the term “technological redlining” to describe how historical redlining practices are carried over into the creation of algorithms. Noble challenges us to think about how the design of algorithms and databases intersects with issues of race, gender, and class, and urges designers and policy-makers to confront and eliminate biases in the development of decision-making technologies.
Interview with Safiya Umoja Noble: Xin Xin
Visual Artist: Sanglim Han
Sound Artist / Music Producer: Jen Agosta
Shadow Glass and Shapeshifting AI were commissioned as a series by Feminist Climate Change: Beyond the Binary, Ars Electronica 2017.
Dr. Safiya Umoja Noble (SUN) is an author, researcher, and faculty at the University of Southern California. Xin Xin (XX) is an artist, journalist, and co-founder of voidLab and The School of Otherness.
XX: You have coined the term “technological redlining” in your upcoming book Algorithms of Oppression: How Search Engines Reinforce Racism. Could you explain the definition of technological redlining and how it is significant in the age of information?
SUN: One of the things I try to do is talk about technological redlining as a digital form of previous dimensions of redlining. So redlining has historically been a practice of excluding people of color, racialized minorities, and women in the United States through public policy practices. So for example, not giving home loans or mortgages to people who live in particular zip codes, and using the predatory nature of racial segregation to bolster marginalization. In technological redlining, what we have are a number of similar kinds of decision-making processes that affect people's lives when it comes to things like housing, credit, and the ways schools are ranked and valued as excellent or not excellent, and those decisions are increasingly made by algorithms or by digital decision-making systems. This type of technological redlining has become so normalized in our society that it is actually thought of in many ways as a convenience, as a series of decision-making tools or applications that make our lives better. Previously, redlining happened between human beings: for example, you sat across from a banker, and the banker or the loan officer said no to you when it came to getting a mortgage if you lived on the South Side of Chicago or in South LA. Now those decisions are just made by a computer or by software systems. And they are wholly not intervenable; we cannot impact or shift the way that those decisions are made. We used to point to people as being discriminatory, so the intervention, for example, would be loan products or other kinds of policies where people of color or women would be involved, because the idea was that they would be less discriminatory toward people who are already systemically marginalized. But now, when we have computers making those decisions, the narrative is not that computers can be discriminatory, but in fact that they are wholly neutral.
And that is one of the reasons that we have to talk about technological redlining and the ways that it is insidious and very difficult to intervene upon.
XX: You mentioned in your introductory chapter that digital media platforms are often characterized as “neutral technologies”. Could you share your insights on why that may be the case?
SUN: People often think of digital technologies, computerized projects, software, and hardware as simply being tools, and that the real power lies in the people who use the tools rather than in the tools themselves. One of the things that I try to debunk in my work is this idea that software or hardware or computerized decision-making projects are neutral. The truth is that these projects are designed by human beings, and many of them are highly reliant upon things like computer code. Computer code is a language. We certainly would never say that language is neutral, that it isn't subjective. We know in fact that language is highly interpretable, that there are many, many ways in which even the slightest inflections can change the meaning of something. And so code is also a language, one that is written by human beings. The people who have the power to write that language are doing so informed by their own values. Many times they are writing code without sufficient knowledge about the impact, or the disparate impact, that it might have on society. And so one of the things that I really try to do is debunk this idea that the technologies we're engaging with are simply tools. In fact, they are not. There is a whole host of politics and power relations embedded in the technologies that we use. And it's really important that we spend some time discerning what that is and what the long-term and short-term implications of that can be.
XX: So when you said that code is basically just another kind of language, I think that might be an idea that a lot of people, especially consumers of technology, don't always realize. There seems to be an issue coming from the gap between how tech companies narrate their software, promising what their software and their code are doing, versus the actual infrastructure, information, and code that is embedded in the software. For people who are not familiar with code, the black box remains mysterious and unattainable. What do you think are the possible ways for us to start revealing the information that's hidden in the black box and to make people understand that it's a kind of language?
SUN: There are people who think that the only way, or the best way, to intervene upon software or computer programming code is to become a coder oneself. We see a lot of attention and energy right now around Black Girls Code, or young people coding, or STEM education (science, technology, engineering, and math) for young people as a way in to these complex algorithmic environments, if you will. That is certainly one intervention, but one could argue, as many media scholars do, and I certainly argue this, that one doesn't have to know the specifics of filmmaking, for example, in order to critique racist or sexist films that are made in Hollywood, right? One doesn't have to know how to build a television in order to critique the kinds of programming that air over the airwaves. So these are the kinds of things that I think we have to think about: what are the multiple kinds of ways in which we can talk about the impact of these technologies in our society? I don't think that people have to know how to build, for example, an alternative search engine or some other type of software, some other type of code, in order to critique the output of that code. And that's one of the things that I really try to stress: how do we critically think about the output and the impact of these types of technologies in society? One of the things that we know is that typical engineering curricula do not focus on liberal arts education or the kinds of critical thinking about society that one might need if one were going to develop software programs or hardware. So I think we need much more integration of the humanities and the social sciences alongside our math- and engineering-focused programs and educational initiatives, rather than just thinking that if more people of color and more women are coders, that will somehow translate into better technologies.
XX: I wonder, though: for more traditional media such as painting, magazine advertising, or film, the critique is so much focused on the representation of identities or bodies or culture. When we look at imagery produced by Hollywood, we can take the image and deconstruct its meaning and culture, whereas when we're looking at search engines, it's a very different process. It seems to me that the approach goes beyond visual analysis toward algorithmic literacy and systems thinking.
SUN: We certainly need more systems thinking in society. So on one hand, in my work I've talked quite a bit, for example, about how search engines misrepresent women and girls of color. When you search for black women and girls, Latinas, Asian women and girls, you often get highly sexualized and pornographic representations. That's one way, for me, to point to the politics and the value systems that are embedded in these technologies. So Google search is just one way in, and studying representation is just one way in to thinking about: well, what else is the system doing? What other kinds of biases are happening? In some ways we see kind of the male gaze, a patriarchal bias, happening when women are hyper-sexualized and pornified in search engine results. But there are a whole host of other ways that commercial biases are happening in these systems. So we can't think of these kinds of projects as just public information resources, because they are not. In many ways you can see how a company like Google, for example, will prioritize its own properties. It will prioritize YouTube videos before Vimeo or its other competitors. It will prioritize, when you're looking for directions, not only its own map applications but also its advertisers, who are paying for more prominence within its services. So these are the kinds of things: when we think about a system, the digital enclosure, as we digital media scholars like to talk about it, is now a highly commercialized, orchestrated system of movement. And these are the kinds of questions: Well, what isn't on the map when you look at Google Maps? Maybe the vigil for the teenager who was killed on that corner isn't on that map. The Starbucks is on that map. Maybe the obscured history of who used to live in that neighborhood before it was gentrified is not presented on that map, right?
So those things are deeply political and they help develop our worldview, a worldview that we take for granted and that we normalize. That’s the kind of thing that I’m trying to help us think about in my work.
XX: In the introductory chapter you also described the internet as “the most unregulated social experiment of our times.” What do you mean by that?
SUN: We’ve got about thirty years of great research now about the ways in which people are impacted by the internet, whether it’s cyberbullying and people who are telling us about the effects of online harassment, or the ways in which bias is happening and obscured in these online environments. The internet holds the possibilities for everything from extreme violence, such as seeing videos of people who are murdered, with or without our consent, because they just appear in our social media feeds. Those projects and processes are unregulated. There’s no public policy protecting society from the negative psychological, emotional, and other types of impacts of what happens when we are exposed to the internet. So what I often say is that yes, the internet is a highly charged environment that’s not regulated, and that it really is a social experiment. We have yet to see the full extent of the incredibly harmful effects of the internet on society, but we see some seepages that are happening. We see what happens when, for example, misinformation is captured in a search engine and misrepresents a person, and they can never get that off, they can never erase that from the internet. So now we are having a response to that. In Europe we have the “right to be forgotten” legislation, which has been a really important step in helping people correct the ways in which they are misrepresented or mischaracterized or damaged personally. We don’t have those kinds of protections in the United States, but I certainly think that we have amassed a lot of evidence that could help us move toward a better public policy environment to protect society, and especially its most vulnerable members, from the negative impacts of this environment.
XX: Are there any lawmakers in the States right now who are excited about working in that direction?
SUN: I am thrilled to see that there are lawmakers at the state level who are working on this, particularly in the narrower area of revenge porn. Revenge porn is someone posting compromising photos of you; this mostly happens to women, where pictures of them in compromising or sexualized situations are posted to the internet without their consent. There are certainly a number of states that are passing legislation to criminalize revenge porn, but it’s not deeply penalized yet. So the stakes, or the consequences, of posting photos of women in these ways are not quite where we want to see them yet, not what I would argue would be a significant enough deterrent. But I think that’s an opening where we see lawmakers starting to care about this. Again, one of the things that is challenging is that women are more vulnerable in our society and have less power, so there is less care and regard for a teenage girl whose life is destroyed, who feels completely psychologically, emotionally, and socially damaged by something like revenge porn. And of course we know that there are young people who are terrorized online who commit suicide based on experiences they have on the internet. I think we have to look more closely at that and think about public policy and legislative interventions that can help protect children and the people in our society who are most vulnerable. Certainly, that needs to happen.
XX: There's a huge discussion among digital scholars about how cyberbullying gets translated into real-world violence and this kind of blurring of cyberspace and IRL space.
SUN: Algorithmic oppression is no different than other forms of oppression. Part of the challenge is that people think of cyberspace as being somehow not real, or not a part of reality. People talk about an online world or a virtual world versus a material world, but it’s all the same world. Algorithms, computers, a computer sitting on our desk: we certainly wouldn’t say that it’s not part of the real world that we’re in, and the same goes for the software that you load onto it. It’s all part of the real world. Algorithms are part of the real world. They are not just part of a virtual world, and a virtual world isn’t separate from the everyday life that we’re living. This is an important distinction, because many people still like to think of the internet and technology as ephemeral, as not having a real material reality, or not occupying material space. But of course it’s not just the computer or the laptop sitting on your desk or the phone in your hand, which no one would argue is not part of our everyday, real world. The technology, the systems that are running in those material objects, are also real. So I think it’s important that we first of all clear that up and address this misconception people have of digital technologies as being ephemeral or virtual. Let me give you an example of the real-world effects, the material effects, of algorithms. We have, for example, in the United States, the mortgage crisis of the last decade. So from 2006 to 2008 we enter a recession, we have a federal bailout, and we have the worst loss of wealth, particularly in marginalized communities, in the history of the United States. Now, the mortgage crisis was driven by data and algorithms. The gamification of the market, which is ultimately what led to the crash of the market, was about the kinds of algorithms and the gaming of the system that was happening through the manipulation of data. And in fact, a bidding against Americans, quite frankly.
So you have a situation here now where, would we argue that the algorithms that drove the financial crisis were not real? That they were ephemeral? That they were not material? Well, we know that’s not true. In fact, that mortgage crisis in the United States led to the greatest wipeout of Black wealth in the history of the United States. All of the gains that were made in the civil rights movement and shortly thereafter in terms of wealth building, home ownership, and the struggles to even have access to mortgages and banking were wiped out in one fell swoop. So these are the kinds of ways in which we need to think about algorithmic accountability, material accountability, for these digital technologies. What they do certainly has real-world effects.
XX: Could you tell us more about the Searching for Black Girls chapter in the book? What are the different ways black people are commodified through algorithms?
SUN: Sure. There’s a long history in the United States of commodifying black people, African Americans. In fact, we entered the United States, in its early formation, as commodities, to work as laborers, people who were sold and traded on a market. That’s actually the genesis of African American-ness, if you will, in what we think of as North America or the United States, but also certainly in the Caribbean and South America. So this commodification of black bodies is not a new practice. Black people being sold and traded in a variety of ways has a long history in this world, and certainly in the US and in North America. To see many of the stereotypes and the narratives and the images evolve over time, you can look to wonderful resources like the Jim Crow Museum of Racist Memorabilia. Here we have a perfect example of a museum that has captured these many narratives and misrepresentations of black bodies, of black women, and of black girls used to help justify racialized capitalism in the US, to help justify why people should have a lower rank or lower status, or not be afforded human rights, not be afforded civil rights. So this history predates Google, certainly, by many centuries. But one of the things that I’ve done is look at the ways in which Google search has represented black women and girls. Black girls in particular are codified in these hypersexualized ways. Now, if we trace back the history and the lineage of hypersexualizing black women and girls, that has often been used as a stereotype and a narrative in society to justify using black women as a reproductive workforce: quite frankly, as reproducing the next generation of slave labor, and blaming it, in fact, on black women because they are supposedly so sexual, so hypersexual, that they can’t help but have children. And of course those children were born into slavery in the United States.
So again, this far predates something like a search engine, but we see many of those stereotypical narratives that are used to disempower black women and girls, and black people more broadly, recreated; they creep into these digital technologies as if there were no history connected to those representations. Marlon Riggs is a wonderful filmmaker who produced an excellent documentary called Ethnic Notions. If you’re interested in this, I highly recommend watching it, because he gives the whole history of racist and sexist advertising in the United States and the way that those racist narratives about black people and about black women have been used to control, in fact, this community, our community, my community. I think this is a really important lesson, if you will, in how to connect history to the present and unpack and make sense of the kinds of things that show up in our everyday technologies today.
XX: You have mentioned on multiple occasions that “Google Search is an advertising company, not a reliable information company.” Can you explain what you mean by that? From your observations, what kinds of filter bubbles are formed by Google Search? How are the categories different from Facebook’s?
SUN: I often talk about Google as an advertising company versus an information-retrieval company because it’s an important distinction that needs to be made in the public. Many people think of Google as being a kind of great, almost public library on the web, but it’s not. In fact, it is an advertising company. If you think about the mechanics of how search works, it works in tandem with Google’s main product, which is AdWords. That’s about buying keywords to help advertise products. These auctions happen 24/7, where people, companies, anyone is able to bid on paying a particular price to help move their products or their ideas up in Google search, and much of that is contingent on how much they are willing to pay. This is a really important dimension in understanding the kind of information that rises to the top. Of course, what’s so important to know is that the majority of people don’t go past the first page of real estate, so to speak, on a Google search. So this is one of the reasons why we have to think about what Google search is: an advertising company, versus a different kind of information environment like a library or some other noncommercial information space. Another dimension of this is that people are often in somewhat of a filter bubble, in that Google, of course, is trying to personalize more and more the kinds of information that we get. So the things that we’ve looked for in the past will, to some degree, influence the things that we find in the present and in the future. It’s also working in tandem with these other projects like AdWords, where people are trying to get particular ideas in front of us too. So it’s not entirely that we’re trapped in a space of our previous searches; there’s a confluence of multiple factors happening.
Now, this is slightly different from something like Facebook, where Facebook’s algorithm is definitely trying to tailor the things that you get based on other people who are in your social network. Because that information is so highly influenced by the people you are friends with and the things that you like about the things that they post, there’s a much, much more specific kind of tailoring. Again, in Facebook you have Facebook’s affinity-marketing programs, where people and companies are able to purchase certain kinds of keywords and pay to have information appear in your feed based on things that you’ve liked in the past or a profile of who you are. Now, this raises the question of our ability to be exposed to a broad swath of information and different kinds of ideas, and of course Facebook has come under severe criticism for this in a way that I think Google search hasn’t; it hasn’t come under the same level of scrutiny Facebook has. Part of that has to do with the recent presidential election, in which Donald Trump won the presidency off of what some people might argue was a lot of misinformation and disinformation that was easily purchased and placed into people’s newsfeeds, both for Donald Trump and against Hillary Clinton. In those ways I think the platforms are operating differently, and they have different agendas in their different projects. What I would say ultimately governs both of these projects, and many others, is the idea that what people click on is highly profitable to these companies, and so the things people are served up to click on come particularly from Facebook’s and Google’s advertisers. Again, remember that we users are not the clients of Facebook and Google. We are the product of Facebook and Google.
Which means it’s our attention, our eyeballs, our gaze, the time we spend in these spaces that’s being sold to advertisers, and the more an advertiser is willing to pay, the more likely their content is to show up in searches or in our social media feeds. These are the kinds of relationships that we are starting to elevate and escalate in society. It’s very important that people understand these platforms rather than relying upon them as some kind of objective arbiter of fair and unbiased information. One of the other things that I’ll mention, and I often try to teach this to my students and talk about it in my own work, is that some people look to a space like Google search for an answer. I think of this as being most specifically and problematically characterized in the case of Dylann Roof. Now, Dylann Roof is a white nationalist, racist, white supremacist in the United States who opened fire on unsuspecting African American worshippers at Emanuel AME Church in Charleston, South Carolina, going on a couple of years ago now. One of the things that Dylann Roof said in his own manifesto online is that he was conducting Google searches on the phrase “black on white crime.” Now, when he did searches on black on white crime, what he was led to was a whole host of white supremacist organizations on the internet, because those are the kinds of organizations that use a phrase like black on white crime. The kinds of organizations that don’t use a phrase like that might be something like the Federal Bureau of Investigation, the FBI, which would characterize crime in the United States as a more specifically intraracial phenomenon. So what you’ll see when you look at FBI statistics is that the majority of white people who are killed by homicide are killed by other white people, in the same way that black people are more likely to be killed by black people, and Latinx people are more likely to be killed within their community as well, and so forth.
What Google search doesn’t do is provide a counterpoint. When you search for certain types of content, and in the case of Dylann Roof he looked up a phrase like black on white crime, it confirms already patently false beliefs about the ways in which crime is enacted in racialized ways in the United States. It didn’t dispel or problematize his query. It didn’t lead him, for example, to scholars or others who are talking and writing about crime being not so much an interracial phenomenon as an intraracial one. So these are the things that are, again, so important for us to understand: going to a search engine thinking that it will answer complex questions is one of the immediate problems that we have to address. Of course, we have more and more teachers, more and more parents telling young people, and telling themselves, to just google it, to find the answer, and I’ve already highlighted the ways in which information comes to the front page of a Google search. So these are the things that illustrate some of the challenges. What we know is that there are information and knowledge and ideas that have been highly contested for hundreds if not thousands of years. There are many ways of thinking, of knowing, and of believing that cannot be sussed out in 0.03 seconds. Nor can they be sussed out based on a ranked order that confers some kind of credibility. These are the things that I talk about in my work to problematize what we’re doing when we rely upon these platforms to help shape knowledge and information and understanding in society.
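[Editor's note] The keyword-auction mechanic Noble describes, in which willingness to pay moves content up the results, can be sketched as a toy model. Everything here is invented for illustration: the advertiser names, the bid numbers, and the simple ranking formula (bid times a quality score) are assumptions, not Google's actual AdWords algorithm.

```python
# Toy model of a keyword ad auction. NOT Google's real AdWords system:
# the rank formula, names, and numbers are invented for illustration.

def rank_ads(bids):
    """Order advertisers for one keyword auction.

    `bids` maps advertiser -> (max_bid_dollars, quality_score).
    Higher willingness to pay generally buys higher placement.
    """
    return sorted(
        bids,
        key=lambda name: bids[name][0] * bids[name][1],
        reverse=True,
    )

auction = {
    "advertiser_a": (2.50, 0.6),  # high bid, mediocre ad quality
    "advertiser_b": (1.00, 0.9),  # lower bid, better quality
    "advertiser_c": (0.40, 0.9),  # low bid
}

# The highest effective bid wins the top of the page, illustrating the
# point that paid prominence, not neutrality, shapes what users see first.
print(rank_ads(auction))  # ['advertiser_a', 'advertiser_b', 'advertiser_c']
```

Even in this simplified form, the model shows the dynamic at issue: the ordering users encounter is an economic outcome of the auction, not an editorial judgment of quality or truth.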
XX: So it sounds like, based on what you’re saying, that we have quite a way to go before we get to the state where public policy is really integrated into the space of the internet. What is your advice for what we should do now, as users of the internet? How should we interact with Google Search? Should we not use it at all? Should we use it with a critical mind? Should we have a flexible mentality where we feed into certain kinds of content but hold back from other kinds of content? What are your thoughts on that?
SUN: We’re living in a very complex digital media environment. Certainly, using something like Google search for finding out when Starbucks closes or where the nearest dry cleaner is, this kind of banal information, makes reasonable sense. In many ways those kinds of queries have simply replaced the phone book, for those of us old enough to remember the phone book, the Yellow Pages. So in that way, I think that many people experience Google search and many other kinds of technologies and platforms as helpful in making life easier. On another level, though, we must engage with these technologies with a critical eye, with an eye toward when these platforms and these spaces are not appropriate for us to use. Maybe we have to think about the long-term consequences of what it would mean to turn over all of our discovery about ourselves to machines, quite frankly, rather than to have those discoveries happen in other ways. We previously relied upon art, we’ve relied upon literature, we’ve relied upon the university and educational spaces as ways to think through the many complex phenomena that exist in this world. It seems to me that we would be missing out and really shortchanging ourselves to simply turn to machines to ask those questions, assuming that there is a finite, fixed, perfect answer, because we know, in fact, that not to be the case. The more tragic part of this, I think, when we think about the longer-term interventions that could happen, is that many of these interventions are only coming about because of serious tragedies, whether it’s revenge porn or the witnessing of murders streamed live through a project like Facebook Live, Snapchat, or some other streaming platform. We have to ask ourselves why it has to take, and why it will take, egregious, horrible situations to bring about interventions.
Maybe we might want to be more thoughtful before we get to those types of possibilities for regulation or some other kind of policy. I think that these technologies are here, until they’re not. You know, there’s nothing sacrosanct about the internet or the technologies that we’re engaging with. Some of us were alive before the internet and remember that there was a particular way of living and doing plenty of things without it. I wonder what it will mean for young people today, whose whole lives have been documented on the internet, who have lived out their childhood, their teenage years, and their young adult years in full view of the spectacle of the internet. What will the consequences be when they want to run for Congress, or they want a job that is really important to them, and some activity that’s been documented precludes their ability to participate in those ways? These are some of the things that we have yet to see. We are just beginning to see the negative consequences of what it means to be living online in plain view, highly surveilled, highly documented, with very little privacy from these digital technologies. I think there might be a moment when we decide that this isn’t the best way, this isn’t the best quality of life, and it doesn’t create the most possibilities; maybe this isn’t the liberatory possibility that people imagined. Maybe it’s rather large multinational companies using our lives to make a lot of money and reap incredible profits. Then I guess the question is: profitability at what cost? And I think we will be the people who pay that price.
XX: The theme for this year’s Ars Electronica exhibition is artificial intelligence; the full title is Artificial Intelligence: The Other I. I wonder whether you can give us some insight into what kinds of artificial intelligence you feel it is urgent for us to look at with a critical eye?
SUN: I’m certain that artificial intelligence will become a human rights issue in the 21st century. I write about that furiously, and I’m speaking about that furiously now. There are deep machine-learning projects underway, both in government and in industry, that will have a radical, transformative impact on society. I think that these decision-making tools and systems are being developed in private, closed environments and will be deployed on communities that have no say whatsoever and likely very little ability to push back. So many decisions are already being made that grant or deny our access to things like housing and educational opportunities, and algorithms are playing such a meaningful role in sorting us into preferred and less preferred people and categories of human beings. That is going to intensify. So if one is born into poverty, into a social network of other people in poverty who have also been systemically and structurally marginalized for generations, one may not be able to get out. Those things are only going to be further entrenched, because those environments will now become predictors of one’s success, of one’s abilities, of one’s possibilities in life. We see this increasingly happening in society. This is a fundamental human rights issue: whether people have the possibility to exceed their current conditions. But more importantly, these technologies are doing nothing to dismantle or shift the structural inequalities in our society. They’re just making better and more preferred classes of people within our society. That’s what big data’s promise is, quite frankly: to help companies find the best people to engage with and leave the rest behind. So these are things we have to pay attention to, and I think they will really start to come to the fore. We’re going to see this in more egregious ways. We already see the seepages of it.
Artificial intelligence should be on our radar as a human rights issue. We should be talking about it and engaging with it on those terms.
June 15, 2017. Los Angeles.