Yuwei Pan



I am an organizer, creative technologist and communications strategist. 

I make the unimaginable and invisible tangible and believable, and work to invigorate radical re-imagination of our future.


“An artist's duty is to reflect the times.” — Nina Simone

“Artists are the gatekeepers of truth. We are civilization’s radical voice.” — Paul Robeson


Discriminating Systems: How Algorithms Amplify Vulnerability and Popular Feminism


Writing
2020
1. Algorithms, AI, and… Feminism?

What is the relationship between gender and technology? Feminist theorist Rosalind Gill posed that question in 1995, just before the dot-com boom, a period of explosive growth of the Internet in the United States. Back then, many believed that a digital revolution would be a great equalizer and would eventually solve issues of gender, race and social class. So much technological advancement has been made in the last 25 years that it is necessary to reconsider her question in a contemporary setting. We hear buzzwords like “artificial intelligence (AI)” and “algorithms” all the time, but many of us are unaware of the role they play in our current attention economy or of how they contribute to feminist discourse. As I will explore in this paper, many of Gill's concerns from 1995 ring true today. As a society, we need to be vigilant about how new technologies are building discriminating systems that amplify existing misogyny, racism and xenophobia and exploit the most vulnerable parts of our society, and about how these problems are concealed by the slickness, efficiency, profitability and convenience that modernity promises.

Our modern life is full of algorithms, whether it is the content shown on our social media feeds, the personalized ads that hunt us wherever we go, or the predictive sentencing software that decides how long a person will spend in prison. Hundreds of computational models rank, categorize, and in some cases even produce the content we see, based on the identities and preferences we reveal on the internet. For example, if we “like” a post on Instagram about Planned Parenthood, we are tagged with certain attributes and targeted with related content in the future. The mechanism behind these algorithms, however, is quite opaque, and it often traps people in information bubbles without any warning. In this essay, I will be using both the terms algorithm and artificial intelligence (AI). Note the key difference between them: algorithms are automated instructions that can be simple or complex, whereas AI is a set of algorithms that can learn, adapt to unforeseen circumstances, and function with little human input. I argue that this very quality of requiring little human moderation is the source of many of AI's dangers.
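To make the tagging-and-targeting mechanism described above concrete, here is a minimal sketch in Python. It is purely illustrative: the tag names, posts, and scoring are invented for this example, and real platforms rely on far more signals and on opaque machine-learned models rather than simple overlap counts.

```python
# Purely illustrative sketch of interest tagging and feed ranking.
# All tag names, post IDs, and weights here are invented; real
# platforms use vastly more signals and machine-learned models.

from collections import Counter

# Each "like" attaches inferred interest tags to the user's profile.
POST_TAGS = {
    "planned_parenthood_post": ["reproductive_rights", "politics_left"],
    "girlboss_post": ["entrepreneurship", "popular_feminism"],
}

def record_like(profile: Counter, post_id: str) -> None:
    """Update a user's inferred-interest profile when they like a post."""
    profile.update(POST_TAGS.get(post_id, []))

def rank_feed(profile: Counter, candidates: dict[str, list[str]]) -> list[str]:
    """Rank candidate posts by overlap with the user's inferred interests."""
    def score(tags: list[str]) -> int:
        return sum(profile[t] for t in tags)
    return sorted(candidates, key=lambda pid: score(candidates[pid]), reverse=True)

profile: Counter = Counter()
record_like(profile, "planned_parenthood_post")

candidates = {
    "fundraiser_ad": ["reproductive_rights"],
    "gadget_ad": ["consumer_tech"],
}
print(rank_feed(profile, candidates))  # ['fundraiser_ad', 'gadget_ad']
```

Even in this toy version, one “like” is enough to reorder what the user sees next, which is how the information bubble begins to form.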

Current discussions of AI have focused on election meddling and the radicalizing effects of recommendation algorithms on platforms like YouTube. So what does any of this have to do with popular feminism? In this paper, I will lay out the disarticulation of feminism in “hashtag activism,” the urgency of building technology that does not perpetuate gender discrimination, and the reasons our current algorithms promote borderline content and exploit vulnerable communities, particularly women and especially women of color. Finally, I will ask who is making AI and why simply hiring more women will not fix the sexist structures of the tech industry. I am aware that these are big questions that cannot be resolved in a short paper, and that this is merely a survey of the tip of the iceberg of the issues relating to artificial intelligence, gender and media.

2. #Hashtag Activism: Visibility for Visibility’s Sake

Hashtags (“#”) are used in diverse ways: they signify participation, assert individual identity, promote group identity, and support or challenge an ideological frame. On social media, they serve as channeling mechanisms and as a tool for platforms to shape trajectories of information flow. The political potential of hashtag activism went largely unrecognized until the 2011 Arab Spring, when social media served as an organizing tool for activists and a catalyst for massive anti-government uprisings in the Middle East and North Africa. Afterwards, hashtags like #BlackLivesMatter started movements for historically disenfranchised populations, some of which carried over from online spaces into protest and civic engagement in real life.

On the surface, these cases suggest that social media is fulfilling its promise to democratize media and give voice to those who otherwise have none. However, there are limits to digital activism's impact on structural change, as its priority is to raise visibility and not much more than that. Marketing companies quickly caught on to this trend, and hashtags have been co-opted to serve neoliberal purposes by selling products and signaling “wokeness.” This strategy has also been picked up by popular feminism to spread ideas about how individuals should practice #selfcare #girlboss #girlpower #empowerment and #leanin. Hashtag activism widens the divide between two feminist visions, individual empowerment and collective liberation, and the economy of visibility tends to favor the former because it is often more “aesthetic,” more palatable to a wide audience, and more profitable for advertising.

An example of this can be seen in the Me Too movement. It was founded in 2006 by activist Tarana Burke, who wanted to help other women, in particular poor women of color, stand up against sexual violence and assault. The movement was known to relatively few until 2017, when #MeToo exploded as a viral trend popularized by Hollywood actresses like Alyssa Milano and the movement turned toward “empowerment through empathy,” pushing people of color and broader social issues out of the conversation. The hashtag #MeToo made the movement far more visible but also erased much of its intersectional origins, because social media algorithms are far more likely to pick up trends that appeal to the largest share of influencers, who are mostly white and middle class. Despite the high visibility of popular feminism's aspirational messages of empowerment, further evidence of its ineffectiveness is the lack of real change in governmental institutions: Banet-Weiser argues that the 2016 US election and the subsequent passage of sexist laws rendered these messages meaningless.

This phenomenon did not go unnoticed by the technology companies that host these online conversations, and it is important to note that these conversations are not happening in a public sphere but in a private space owned by those companies. This means that large corporations control what kind of information they promote, even as they refuse to be treated and regulated like publishers in order to evade responsibility for the content they host. Hashtag activism is therefore not much like using a megaphone in a town square. It is more like speaking through a filtered microphone that claims to protect freedom of speech but actually acts as a gatekeeper to information.

By algorithmically promoting posts with certain hashtags, even when the consequences may be harmful, tech companies drive engagement up, keep people on their platforms longer, and make huge profits. These algorithms are designed to make social media as addictive and attention-grabbing as possible, often by promoting inflammatory or emotionally charged content. They are changing how we engage with information, and the attention economy is commodifying our time, emotions and energy, turning us into data to be bought and sold.

Political theorist Jodi Dean coined the term “communicative capitalism” to argue that these networked communications technologies are deeply depoliticizing. Political messages circulate, but they are rarely engaged with beyond likes and heart emojis (or the recent addition of the “care” emoji amid COVID-19). Rather than actually addressing the cultural, political and economic structures that create sexism and other forms of discrimination, these hashtags displace responsibility onto the individual, who can discharge it simply by liking the post or buying the product associated with it. Instead of focusing on a collective feminist politics, the hashtag culture promoted by social media algorithms focuses on the individual and disarticulates earlier forms of feminism. More often than not, that individual is a media-savvy, English-speaking, middle-class white woman: increasing her confidence or body positivity is unlikely to affect the lives of women with marginalized identities.

3. Why Positive Feedback Loops Amplify Vulnerability

Popular feminism recognizes the vulnerability of women, but instead of challenging established structures of paternalistic power, it challenges vulnerability itself, urging women to be more “confident” and “powerful” in both economic and political spheres. This is reflected in social media content and the subsequent backlash, in which women, especially those with marginalized identities, face increased vulnerability. In this section, I explore how the positive feedback loops in algorithms, especially on social media, play a role in misogynistic attacks on women and amplify vulnerability.

In a 2018 note, Mark Zuckerberg himself admitted that there is a “basic incentive problem”: “when left unchecked, people will engage disproportionately with more sensationalist and provocative content. Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.” The graph below illustrates the pattern Zuckerberg was referring to.

[Graph: engagement rises sharply as content approaches the policy line for prohibited content]
Positive feedback loops are what make certain types of content on social media go “viral,” and the closer content gets to being controversial, the more it is magnified, exponentially so. This is both a result of and a contributor to polarization in countries like the United States and the United Kingdom. For example, when a content creator posts something slightly sensationalist, they get more engagement, which encourages them to post something a little more sensationalist each time, until they reach the policy line and cross into prohibited content. Journalism is also greatly affected by these algorithms and is in danger of falling into the same patterns, as many of us now receive news through our social media feeds. If the content we see is based purely on artificial intelligence predicting what “triggers” us, with no editor involved, then we will see far more content that treats its audience as consumers rather than citizens. This was already a trend in the 90s and early 2000s and is increasingly true today in both mainstream and alternative media. As digital journalism scholar Bob Franklin predicted: “Entertainment has superseded the provision of information; human interest has supplanted the public interest; measured judgment has succumbed to sensationalism.”
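The escalation dynamic described above can be sketched as a toy simulation. Everything here is invented for illustration: the engagement curve, the step size, and the threshold are stand-ins, not measurements of any real platform.

```python
# Toy simulation of the feedback loop described above: content that gets
# closer to the policy line earns more engagement, and that engagement
# nudges the creator to post something more sensationalist next time.
# The curve and constants are invented for illustration only.

POLICY_LINE = 1.0  # content at or above this threshold is prohibited

def engagement(sensationalism: float) -> float:
    """Engagement grows sharply as content approaches the policy line."""
    if sensationalism >= POLICY_LINE:
        return 0.0  # crossed the line: removed by moderation
    return 1.0 / (POLICY_LINE - sensationalism)

sensationalism = 0.1  # the creator starts with mild content
for step in range(8):
    reward = engagement(sensationalism)
    print(f"step {step}: sensationalism={sensationalism:.2f}, engagement={reward:.1f}")
    # More engagement encourages a slightly more sensationalist next post.
    sensationalism += 0.05 * reward
```

Run long enough, the loop drives content ever closer to the line, mirroring Zuckerberg's observation that engagement peaks just below the prohibited threshold.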

Because of this reinforcement of borderline content, the dynamic between popular feminism and popular misogyny has intensified, often becoming more hostile and violent on the misogynistic side. This can be seen in instances like #GamerGate, the subreddit r/TheRedPill, and subcultures like incels, all of which received wide attention and fed the online male-supremacist ecosystem. To take one example, “incel” is short for “involuntary celibate.” The community began as an online forum started by a queer woman who had trouble finding a romantic partner, but it was later taken over by mostly heterosexual men who hate women, and it grew as the algorithms rendered it ever more visible. The incel community encouraged violence against women and incited at least four mass murders, resulting in a total of 45 deaths.

The fake news industry is likewise very good at creating content that stirs up radicalization and hate, and women often become its victims, whether as the target or as a side effect. Fake news, or disinformation, is often amplified by algorithms before it is taken down, because it is typically packaged as clickbait and viral content. One investigation found that the fake news industry weaponizes women by stealing their identities and using bots to boost misleading information. This is not surprising given the deep sexism of those who usually spread fake news to sway political opinion, many of whom belong to the alt-right or, at times, to foreign interference campaigns seeking to further divide the country.

4. Toxic Technocultures: Who Makes AI?

As algorithms become more sophisticated, they enter the realm of artificial intelligence, which is more adept at complex processes involving changing variables and unforeseen scenarios. A notorious example is Cambridge Analytica, which used AI to build powerful digital campaigning tools that could target citizens on over 5,000 data points, including race, gender, and consumer and lifestyle attributes. The question, then, is who is generating these new systems of discrimination in the form of AI? Unfortunately, just like the broader tech sector, AI has a significant diversity crisis across gender and race. A 2019 report by the AI Now Institute shows that women comprise only 15% of AI research staff at Facebook and 10% at Google. As for race, 2.5% of Google's workforce is Black, while Facebook and Microsoft are each at 4%. There is no data at all on gender minorities.

Scholars like Gill, Banet-Weiser and Rottenberg have discussed the issues of representation in technology in depth. This conversation is urgent and relevant as digital technology determines more and more facets of our lives, including our very social identities, and we cannot build systems of discrimination for future generations the way former generations did for us. Much of the research in the AI Now report shows that bias in AI systems both reflects historical patterns of discrimination and creates new types of discrimination.

There is no clear consensus on how to solve this diversity problem, or on whether solving it would change the future of AI at all. There are, however, identifiable problems with the current “Women in Tech” trend. One is that it privileges white, upwardly mobile women. Meanwhile, the use of AI to classify, detect, and predict race and gender is in urgent need of re-evaluation: there are now AI tools that claim to identify sexuality from a headshot photo. If we want our experiences with AI to reflect the intersections of race, gender, and other identities, then we need to include more voices in its making.

The second problem is the focus on fixing the “pipeline” instead of addressing the deeper-rooted toxic culture. As the AI Now report pointedly notes, “Workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization” are causing people to leave or avoid working in the AI sector altogether. Banet-Weiser describes this culture as “toxic geek masculinity,” which involves a sense of entitlement in all realms of culture, economy, and social life. The infamous memo by James Damore (who was later fired from Google under public pressure) advances the essentialist view that women are biologically unsuited for tech; it is another telling example of the deeply rooted misogyny in tech, which cannot be solved simply by implementing pipeline programs like Girls Who Code. A further issue with the pipeline of “sending more women into tech” is that, by promising a life of stability and wealth, it implicitly and explicitly devalues other, more precarious career paths that also lack diverse representation.

The third problem is that an engineer's individual identity may not be as decisive a factor in the final AI product as one might expect. Progress will be slow and arduous if the dominant culture and the economic system that support AI development remain unchanged. The female engineers who succeed in technology tend to be those who have already taken on the values and ideologies of the organizations they belong to, having been socialized to fit in and to think and engineer in ways that inevitably reflect those of their professors, mentors, managers, and the dominant culture they work within. They often conform to neoliberal work principles and “lean in” to their careers, with little space to deviate from the corporate mission. Merely hiring more women into tech, and especially into AI research, is therefore far from enough.

5. Conclusion

Algorithms and AI are only a small part of the discriminating systems (some already built, some in the making) that reflect and amplify vulnerability and biased stereotypes and resurface biological essentialism in automated form. Throughout history, technology has always influenced culture and vice versa. As we enter a digital age in which everyone becomes a creator, it is therefore important to be aware of what kind of content and technology we are producing and what values they represent. As we build more automated systems, it is important for people from fields traditionally viewed as non-tech to participate in the production process: social scientists, media theorists, psychologists, policymakers, activists, and so on. We need to broaden our frame of reference so that we can explore new pathways to resolve the imbalances and harms addressed in this paper and beyond.

See works cited here.