
A Conversation with Dr. Joy Buolamwini

During Westbound Equity Partners’ Annual Summit, Sixth Street Co-Founder and Co-President David Stiepleman sat down with award-winning AI researcher and best-selling author Dr. Joy Buolamwini for a conversation about, in the words of her Algorithmic Justice League, getting the world to “remember that who codes matters, how we code matters, and that we can code a better future.”

On this episode of It’s Not Magic, you’ll hear how in her office at MIT, Dr. Buolamwini stumbled on the realization that nascent AI systems weren’t neutral and could prefer, and exclude, people based on how we look. We discuss Dr. Buolamwini’s journey from academia, to discovering ways to combine hard research and art, to becoming a Sundance documentary star, to walking the halls of power, to leading the movement for equitable and accountable AI. We also discuss how, if AI eliminates entry-level drudgery, we may be living in the “age of the last masters.”

We are proud to be a founding strategic partner of Westbound Equity Partners, an early-stage investment firm deploying financial and social capital to build great companies and close gaps for underrepresented talent. The conversation took place this summer at the Westbound Equity Partners Summit. Thank you to the Westbound Equity team for having us and to Dr. Buolamwini for the important and timely discussion.

Note: Westbound Equity Partners was formerly known as Concrete Rose Capital.

Episode Transcript:

Joy Buolamwini: Hello.

David Stiepleman: How are you?

Joy Buolamwini: I'm well.

David Stiepleman: Do you, well, you probably don't feel so, but I feel like you've set an impossible bar in setting expectations here, but that's okay. Will also left a little bit for me on the intro because, uh, of all the accolades, there were probably too many for just one person to list. But they're impressive and relevant to, I think, a lot of our conversation. You have a PhD from MIT. You were a Fulbright Fellow, a Rhodes Scholar, an entrepreneur. You shred electric guitar. You were, believe it or not, everybody, one of two pole vaulting PhDs at our table last night for dinner. Where's Elizabeth? I mean, that's not going to happen again. At least not for me. Well, it may happen for you, mathematically speaking. Um, you are a reluctantly retired, uh, skateboarder. Um, and congrats on the book.

Joy Buolamwini: Thank you.

David Stiepleman: Recent author and also a voice actor because you recorded the audio book which is really cool. It's awesome to be with you. Thank you for doing this.

Joy Buolamwini: Thank you.

David Stiepleman: Um, you're a scientist and an artist, and your work, if we can look at the subtitle of the book, is about centering the human in technology, and there's probably nothing more important we could be talking about right now. And in that vein, would you start, we colluded, would you start with a poem?

Joy Buolamwini: I will start with a poem. And, uh, I was really inspired by the words around compassion this morning. I used to say that my motto was to show compassion, uh, through computation, but it gets tough trying to change the world, as I'm sure you all know. And so in the book, I talk about how I picked up skateboarding again when I thought I was going to drop out of, uh, MIT. And this poem is related to getting through some of the tough times that happen as we endeavor to, uh, you know, start our organizations, start different initiatives in the world. So it's called Terminal Resistance.

Withstand the praise and the prison of expectations. Withdraw the temptation to mold yourself into who they thought you should be. Remember the doctor who told your worried mother a PhD for this child is out of reach. Remember not because they were wrong, but because their eyes were too small to imagine the frail and fading body in their care contained a formidable spirit. Remember their miscalculation is one we can all make when we fail to look beyond present conditions and bleak predictions. Withstand the pressure and the prism of demands spread across innocuous requests and insistent pleas. Withdraw your participation when you must fold your dignity, lower your stature, or diminish your worth to appease would-be king makers. Remember the dear ones. The ones who took the calls and took the time to remind you of who you were without the crown and why they believe in you. Take the time to listen to the frail fading hearts who have forgotten the strength of their spirits. Remember the small significant moments of acknowledgement and support. I remember the custodians who opened doors early so I could study longer. I remember the staff who gave me extra portions to show their pride and to encourage me to take bold strides. I remember the coaches who told me the truth, lest they cheat me with cheap congratulations on less than my best. I remember the teachers who gave me more than required, so I could know the elasticity of my capacities. Do not be afraid to reach higher than others dared. To stay longer when seemingly no one else cared. To fertilize the soil of toil before the vision germinates, before the sprout pushes through the terminal resistance that unleashes your power.

David Stiepleman: And we're done. Have a nice afternoon. Okay. A number of images in there remind me of thinking about you, as I was reading the book, in your office at MIT. Can you take us back to the Aspire Mirror, how you got on this journey?

Joy Buolamwini: Yes. In front of you, you should have a copy of the book, and you see this beautiful cover: I am holding a mask, and the reason I'm holding a mask is I was a student at the MIT Media Lab, working on a class project. I took this class called Science Fabrication. You read science fiction (I heard there are some science fiction fans in the audience) and then you create something you probably otherwise wouldn't make, uh, as long as you could do it in, uh, six weeks. So I wanted to shape-shift. Six weeks wasn't enough time to transform the laws of physics. So I thought, what could I do? So I decided to build something called an Aspire Mirror, which would allow me to change not my physical body, but the appearance of my reflection in a mirror, using this property of a half-silvered mirror. So when you shine light through it, it comes through like you would see on a computer screen, and if there's just a black background, it looks like a mirror. So you get the effect of having like a Snapchat filter, but instead of it being through your video feed, it's actually in your mirror, because this is what we do at the Media Lab. So I was just having fun. As I was working on the project, I thought it would be nice for the actual digital face to follow me in the mirror. So I decided to get a webcam that had some face tracking software. This is where it goes sideways. So I'm trying to get the webcam to detect my face and it's not detecting my face that consistently. It happened to be Halloween, there was a party, and I had a white mask around. And so when it wasn't detecting my face consistently, that's when I started to put on the white mask. And the white mask… I didn't even put the white mask on all the way, right? I was a little annoyed. I barely put it on my face and it was already detecting the inanimate white mask while my human face remained undetected. And so this was kind of a moment for me. Here I was at my dream school. I finally made it to MIT. I'm in this epicenter of innovation, supposedly, and I'm coding in a white mask, and so that led me down what eventually became the Algorithmic Justice League.

David Stiepleman: The one thing that really struck me as I was thinking about you and reading your book: your first impulse was, I don't want to draw social conclusions about this. This is, uh, this is hard. I don't want to do this. This is not what I got into technology for. How did you think about that?

Joy Buolamwini: Oh yeah. No, I got into computer science because people are messy, right? I was like, yay, math, algorithms, woohoo, good times. And so when I was, um, at MIT, I was wanting, again, to explore. Explore imaginative ideas and that kind of thing. The last thing I really wanted to be doing was talking about social justice issues. I figured somebody else could take on that role and I'll make, uh, cool technology. And so it was in that process of trying to make interesting technology and encountering these sorts of issues that I realized that I couldn't escape those issues. But, I was reluctant at first. Because when you speak up, you have a target on your back. And being a black woman in tech was hard enough. Now being the black woman in tech, talking about racism and sexism in tech, I don't know if I want to be that one. So I was hesitant for sure.

David Stiepleman: Yeah. How did, so where did this go? So what did you find out? And how did you get conviction that this is what you're supposed to be doing?

Joy Buolamwini: That I should do it?

David Stiepleman: Yeah.

Joy Buolamwini: Uh, Trump got elected, and so I thought, well, maybe I should step up and do my part with things that were going on. And I say that because, you know, I had friends who voted for Trump. I had friends who voted, uh, for others. I grew up in Oxford, Mississippi and Memphis, Tennessee, but I remember specifically being at the Media Lab, um, after, uh, that election. And there was just this sense of dejection from some people in my network and a sense of elation, uh, from others, but also a bit of foreboding that we can't just trust who's in power to safeguard the technologies, uh, that are being deployed. And so I felt like there was an impetus to actually stand up. Another thing that happened around that time, it was, um, uh, 2016, there was a paper that came out of Georgetown Law called The Perpetual Line-Up. And in that paper they showed one in two adults in the U.S. had their face in a face recognition network that could be searched by law enforcement using algorithms that hadn't been audited, uh, for accuracy. And so I also, I had a lot of excuses for why I was not the one, why this was not what I should do. I'm gonna keep making fun, uh, Aspire Mirrors and other sorts of projects. But once I saw that, I saw, okay, this tech is moving out of the lab and into the real world, and it could lead to false arrests and other things that I predicted at the time that did end up becoming the reality, and so the stakes were high. The political climate had changed and I realized that maybe I was given these opportunities for a reason.

David Stiepleman: What did you find out?

Joy Buolamwini: So I decided to first start just testing faces with different companies to see how, uh, an AI system might read a face. So you could detect the face, you could guess the age of a face, you could guess the perceived gender and so forth. And, uh, as I was doing this, I was using my TED profile image. And my TED profile image, sometimes it would be misgendered. So because of the misgendering, I started then testing, um, photos of WNBA players and Olympic athletes and all of these sorts of things. And I saw more misgendering. So that's what gave me the idea to create the Gender Shades project, which was essentially seeing how well different tech companies performed at the task of guessing the perceived gender of an image. Long story short, tested IBM, Microsoft, later on, uh, Amazon, and they all unsurprisingly showed gender bias, they showed skin type bias, and they showed a bias at the intersection as well. So you could have a system like, for example, Microsoft's, where perfection was possible. The lighter skin males, pale males as I call them affectionately, they were at 100 percent, there were no errors, right? The darker skin females, on the other hand: in some categories the error rates were, um, around, uh, 47 percent for commercially sold products. So I thought the tech companies might want to know, so I ended up sharing the results, uh, with them. And then the paper came out, it became one of the most cited papers, and it kind of filled them in on algorithmic auditing and so forth. And eventually some changes happened.
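To make that methodology concrete, here is a minimal sketch of the kind of intersectional error-rate breakdown an audit like Gender Shades reports, comparing per-subgroup error rates instead of a single aggregate accuracy. The records and numbers below are invented for illustration, not the actual study data.

```python
# Minimal sketch of an intersectional audit: break classification errors out
# by subgroup (perceived gender x skin type) instead of reporting one number.
# All records here are hypothetical, purely for illustration.
from collections import defaultdict

# Each record: (true_gender, skin_type, predicted_gender)
predictions = [
    ("female", "darker",  "male"),    # misgendered
    ("female", "darker",  "female"),
    ("female", "lighter", "female"),
    ("male",   "darker",  "male"),
    ("male",   "lighter", "male"),
    ("male",   "lighter", "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true_gender, skin_type, predicted in predictions:
    group = (true_gender, skin_type)
    totals[group] += 1
    errors[group] += (predicted != true_gender)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

An aggregate accuracy over all six records would look respectable, while the breakdown shows every error landing on one subgroup; that gap is what this kind of audit is designed to surface.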

David Stiepleman: When you called different companies, they responded very differently.

Joy Buolamwini: Oh my goodness. Yes. Well, some, well, there was also no response. So in addition to the companies I mentioned, there was also Face++, um, based in China, and so they did not respond to us directly, but in the study, they had actually performed best on, uh, darker skin men. So in the news media, they were saying that they had better performance, but that was all we heard. Um, IBM, speaking of the power of, uh, networks: I had been at an Aspen AI roundtable, and the head of AI research was there. So the day I was turning in my master's work for MIT, which showed this research, I actually met him as he was walking to MIT with his two young daughters, um, who were interested in tech and so forth. So I said, you're going to want to read this paper. So when I emailed, uh, IBM, they actually were, uh, very responsive, invited me to the headquarters, actually released a new model that I tested before the paper, uh, came out. Microsoft was a little late to the game. They waited until the paper came out, and then they realized a postdoc had been a co-author of that paper and started paying attention. So, a few different approaches and responses.

David Stiepleman: Interesting. You're very insistent on precision, not surprisingly. It's very important for what you're doing. You started to talk about the harms that come from, um, from being excoded, as I've learned to put it. Um, and the Algorithmic Justice League has, people should go look at it, a taxonomy of harms. Why is it important to, like, be very precise about what can happen?

Joy Buolamwini: I think it's most important, when we're talking about precision, not to think about the precision of the technology, but the precision of the lived experience of people who are being harmed. So when we talk about the excoded, no one's immune from being excoded. You could be, uh, Taylor Swift and you have the deepfake, uh, exploitative, uh, photos, or Tom Hanks having his likeness used to promote some dental, uh, service, uh, he never used, or, uh, the young girls, uh, in Spain who are having deepfakes, uh, made of them by, uh, their classmates, or people like Robert Williams, falsely arrested by the Detroit Police Department due to facial recognition technology, or Portia Woodruff, eight months pregnant, uh, same kind of story. And so…

David Stiepleman: Accused, against common sense, of carjacking.

Joy Buolamwini: Carjacking. I don't know anyone carjacking while eight months pregnant, but yeah.

David Stiepleman: It's like this faith in technology just overrides like what you're seeing in front of you. It like makes no sense. But anyway, sorry.

Joy Buolamwini: Absolutely.

David Stiepleman: Yeah,

Joy Buolamwini: For sure. What was the question?

David Stiepleman: I don't remember. Um, oh, but precision on detailing harms, the taxonomy of harms and why that's important.

Joy Buolamwini: I think it's important for the taxonomy of harms, but I also had to grapple with focusing on precision and accuracy, particularly as somebody with a computer science background. And so when I first started the work, so much of it was focused on the accuracy numbers or the performance metrics, right? The false positive rates and that kind of thing. And with that focus, people weren't actually focused on the harm. So let's say we had perfect facial recognition, which we don't, uh, that could lead to a surveillance state that you don't want, right? Or you can imagine drones with guns and facial recognition. It's not just a question of how accurate systems are, but how can these systems be abused and what are the safeguards we can put in place?

David Stiepleman: I want to talk about delivery mechanisms, because you've evolved since the Gender Shades paper and presentation and really kind of came into your own as someone who wanted to make sure these ideas, your research, uh, everything that you were finding, uh, were accessible. And, uh, I love the story of you going to Brussels, meeting with EU defense ministers. Talk about how that meeting started.

Joy Buolamwini: So they are gatekeepers. So I remember being, um, so excited because I had been invited to be a member of the EU Global Tech Panel. And one of my mentors who I respect so much, Megan Smith, she was a former CTO of the United States. Some of you might be familiar with her many, many different firsts, but she was the first engineer in the role. And she pulled up a seat for me at that table. So I was excited. Here I am talking with the big dogs. Let's go, go to Brussels. So I get to Brussels and I'm trying to just check in so I can go to the meeting. And I tell them I'm here for the meeting, um, and they're just looking at me as if I don't belong. And granted, I didn't look like the gray suits walking around, so, so they had a point. So I explained who I was there for, and they said, well, who brought you here? And so I shared with them that I had an invitation letter, and I pulled it out. They said anyone could print anything from the internet. They're not wrong, right? So now I'm pulling out my phone, hoping the international plan doesn't fail me now. And I was able to get in touch with, uh, the secretary there, and they brought me in, and I was given this special badge. When I got to the room, I realized that nobody else had that special badge. And this was the work before the work, when you're trying to do the work to get in.

David Stiepleman: Right. And then, if I'm not mistaken, you go to the meeting and they play “AI, Ain't I A Woman,” which you made. I think I'm getting that right.

Joy Buolamwini: Yes, so

David Stiepleman: What is that? Tell us. I mean, this whole thing of making sure that the delivery mechanism works and is accessible, it really had a profound impact on the mood of the room.

Joy Buolamwini: It's true. So we were just talking about some of the research that I conducted, and that was shared as a typical research paper. Now not everybody's trying to read a research paper, right? And so I thought of how do we move from AI audits or algorithmic audits, like what the Gender Shades paper was, to something that a wider audience can connect with. And so that became this notion of the evocative audit, and that brings in poetry. So I created this poem called “AI, Ain't I A Woman,” which is a spoken word poem, but it's also a test of AI systems. So I show the faces of Oprah Winfrey, I show, uh, Sojourner Truth, I show Michelle Obama. I show all of these iconic women being, uh, misgendered or otherwise mislabeled by AI systems from Amazon, Microsoft, Google, and so forth, right? So I ask, can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew them? Ida B. Wells, data science pioneer, hanging facts, stacking stats on the lynching of humanity, teaching truths hidden in data, each entry and omission, a person worthy of respect. Shirley Chisholm unbought and unbossed, the first black Congresswoman, but not the first to be misunderstood by machines well versed in data driven mistakes. Michelle Obama unabashed and unafraid to wear her crown of history. Yet her crown seems a mystery to systems unsure of her hair. A wig, a bouffant, a toupee, maybe not. Are there no words for our braids and our locks? The sunny skin and relaxed hair make Oprah the first lady. Even for her face well known, some algorithms falter, echoing sentiments that strong women are men. We laugh, celebrating the successes of our sisters with Serena smiles. No label is worthy of our beauty.

Joy Buolamwini: And so, even though I was the only black person in that room physically, I brought them in with me. And this was, um, uh, to the EU Global Tech Panel. And this was led by the Vice President of the European Commission. You had the, uh, head of the World Economic Forum, the president of Microsoft, all kinds of people who were there. And so that did set a different sort of stage. And later on, when you had EU defense ministers thinking about adopting, uh, facial recognition for lethal autonomous weapons, this was played ahead of that conversation. Because in that particular, uh, “AI, Ain't I A Woman” video, you see the top tech companies in the world getting it wrong; you see Amazon labeling Oprah's face as, uh, male. No, no shade to AWS as a sponsor of this event. This is, uh, an Amazon Editors' Pick and so forth. So there's always a learning, uh, journey. But all this to say, it was showing that you had to take more of a pause than maybe what companies might market initially when they, of course, want you, uh, to adopt a particular system.

David Stiepleman: That was a journey for you, and I'm conscious, to Jeff Weiner's point, that we should know our audience. There are a lot of founders in here, some younger than others, who are on their journeys, building their companies, running around, psyching themselves up every day to do what they gotta do. Your journey of, like, mixing the hard science, the math, the data, having all the answers, with art was a journey for you. At the beginning, that wasn't necessarily how you were going to handle that meeting, but you gained the confidence to do that. How did that happen?

Joy Buolamwini: Yes. So I am the daughter of an artist and a scientist. So the art didn't come out of nowhere, but being an underrepresented person in the STEM field, I didn't want my technical expertise or credibility to be seen as less than, or for my art to come off as being, uh, gimmicky. So at first I felt I had to armor up, right? So I got all the degrees you can imagine, all the fellowships, all of these other things, and it wasn't really until I was, uh, working on my fourth degree, the PhD from MIT, that I felt like, okay, the artist can peek out now. And so “AI, Ain't I A Woman,” which you heard a snippet of just now, that was my first affirmative step of saying, okay, if I say I'm a poet of code, and part of a poet's job is to help the world see themselves differently, and in this case see technology, uh, differently, then I have to step up. So that was me stepping up to the call of being a poet of code.

David Stiepleman: I think we're grateful for that. Um, I want to take a step back just to talk about AI and, you know, the incredible speed of dissemination, the computing power that's now available. It's everywhere. It's what everybody's talking about. That said, these aren't new issues. Technology making choices about excluding some images, some people, some realities is not new, and maybe you could tell people what the Shirley Card is, or was.

Joy Buolamwini: Yes, so sometimes, you know, old burns, new urns. Collecting data, chronicling our past, often forgetting to deal with gender, race, and class. So if we go back to the history of, uh, photography, before we get to computer vision and facial recognition, we're just trying to figure out if we can make, uh, photos in the analog world, right? So you have film, you have chemical solution, you're trying to get an image to be printed. So back in the day, they used to have something called the Shirley Card. There are a series of them. But at some point, there was one woman who was literally the standard by which the film was calibrated, to see if the chemical solution, uh, was actually portraying people as they ought to be, uh, portrayed. So these initial ones essentially set the default to be a white default. And, I see young faces here, but I see older faces too. You've seen the old photos, right? All you can see are eyes and teeth, and that sort of thing, right? That is part of the legacy of it. But also, it didn't have to be that way. Different film technologies evolved. So, for example, uh, Kodak changed when you had furniture companies and chocolate companies complaining, right? You can't see the difference between the milk chocolate and the dark chocolate. You can't see the difference with the different woods. And so, uh, people with dark skin actually benefited when they came out with new products that were meant to better expose, uh, chocolate and furniture. But also, hey, you can see me in the photo now at the birthday.

David Stiepleman: When you want to do it, you can do it. So finish the analogy. What's the analogy now? What is the Shirley Card in AI?

Joy Buolamwini: So, in this case, I think it's really, I alluded to it a bit earlier, this white default, right? So, sometimes I'll call it the pale male, uh, default, but white as being normative. And so, when we look at many of the data sets that are used as the standard by which to either train models or to test how well the models are doing, we oftentimes have a false sense of progress because of the measures we've chosen. That was a big part of my research. I created a new, uh, data set to serve as a benchmark, because in the government benchmarks, um, women of color were represented less than 4.4 percent of the time. So you could still get a pretty high score, right? And, uh, completely dismiss a whole important section of humanity. But also, what happened was when people improved those benchmarks and improved the training data, you did see an improvement in the performance from a technical perspective.
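To see how a skewed benchmark can produce that false sense of progress, here is a small, hypothetical calculation. The subgroup shares and accuracies are invented, loosely echoing the under-representation figure mentioned above, and are not measurements from any real benchmark.

```python
# Hypothetical illustration: an aggregate benchmark score can look strong
# even when the model fails badly on an under-represented subgroup.
subgroups = {
    # name: (share of benchmark images, accuracy on that subgroup)
    "lighter-skinned men":   (0.50, 0.99),
    "lighter-skinned women": (0.30, 0.97),
    "darker-skinned men":    (0.16, 0.95),
    "darker-skinned women":  (0.04, 0.60),  # barely represented, poorly served
}

overall = sum(share * accuracy for share, accuracy in subgroups.values())
print(f"Aggregate accuracy: {overall:.1%}")  # about 96%, despite 60% on one group
```

Because the worst-served group makes up only 4 percent of the test set, the headline number barely moves; rebalancing the benchmark is what makes the gap visible and worth fixing.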

David Stiepleman: People did push back, maybe more at the beginning when you were doing and releasing this early research, and maybe this is a sign of progress. But I think you were getting the response like, are you sure? Because math is just math and computers are neutral and this is just reflecting reality. How did you answer that?

Joy Buolamwini: Well, that's why I would show the example of Kodak, right? Because you could also make the argument that a camera is meant to be objective, right? Here's a lens. The lens isn't biased or racist or that kind of thing, but there's a process it's going through, and so there are choices being made about what's going to be in that chemical solution. There are choices being made about what's going to be the standard. And the biggest thing is, there are choices being made that have our human fingerprints all over them.

David Stiepleman: So what are we doing? What is the Algorithmic Justice League doing? What are some of the things that you're working on to, kind of, stem this tide? One could get quite discouraged thinking about, geez, I mean, the numbers: something like 400 billion dollars of research spending, I think Goldman Sachs estimated, this year just from five of the big companies. That's how much money they're putting into, you know, the next step. Like how can our merry band of, you know, thoughtful people, but without those kinds of resources, how do we turn the tide on that? How do we make sure that data sets are representative, that people understand the harms, that we slow things down? I don't think you're ever saying, I want to stop this. We're not stopping it. But how do we make sure that it's not regressive?

Joy Buolamwini: Yes. How do we build AI for all of us and not just the privileged few, which is really why, um, the Algorithmic Justice League exists. One thing we believe is that everybody's story and everybody's experience is important. So this is my story, right, of coding in a white mask, and we see the evolution of that single experience, right, then leading to the creation of the Algorithmic Justice League, leading to testifying in front of Congress, leading to legislation being passed in multiple cities around the U.S. that restrict harmful uses of facial recognition. The companies that we audited, all of the U.S.-based companies, actually stopped selling facial recognition, uh, to law enforcement. Uh, later on, after, uh, some litigation, Facebook deleted over a billion, uh, face prints. And so I share all of this to say that it matters when we speak up. We can build better systems, we can also stop harmful systems and harmful deployments. One thing that we've been doing, uh, lately are different sorts of campaigns. So anybody been to the airport lately? Show of hands. Alright. Anyone seen the facial recognition scans? Okay. How many of you, um, know that it's optional? Not that many people. So this is supposed to be an opt-in program. You guys know about design patterns, right? This is not an opt-in design, uh, kind of pattern. And so we saw this happening, actually, just last week. I was talking to, uh, Secretary Mayorkas, uh, head of the Department of Homeland Security, because we had a summit called the Freedom Flyers Summit, and we started something called the Opt Out Club. So this is the kind of thing we do with AJL, and what we have been doing is collecting hundreds of stories of people's experiences, uh, with TSA. So I was able to read them out to, um, to Secretary Mayorkas and also the representatives of TSA, people talking about how it wasn't truly an opt-in, uh, sort of experience. And we were able to bring those stories to bear and get a verbal commitment. We're going to follow it up because we always have to, right? You know how the story can change. But I think it goes to show the importance of actually resisting, um, speaking up, and then having access at, uh, tables of power. So that's the sort of thing we do. So we like to say we're naming and changing. It's not about naming and shaming and just saying, you did it wrong, that's it, we get our social justice points or something like that. It's really, how do we transform a system, right? So that we get to enjoy and experience the promises of what AI can be.

David Stiepleman: I imagine you get the question, and I happen to know you get the question all the time, but the killer robots, does that count? And you have a point of view on why that's not a helpful question.

Joy Buolamwini: Well, it's interesting, because I do support, and I talk about it in the book, I support the campaign to stop killer robots. So that example where I shared, you have the drones with the guns, with the facial rec, that technology is there. You have all kinds of things being used in Ukraine and in Gaza and so forth. So I think those, uh, concerns are extremely valid. That was even why I was serving on the EU Global Tech Panel. So I did some initial research on the limitations of computer vision within, uh, a military context and what those, uh, restrictions need to be. So there are definitely dangers with these sorts of systems, but when it comes to x-risk, or the computers are going to get so smart, or, you know, that kind of thing, I very much think about not just how computers hypothetically can rise up against us, but how AI can kill us slowly. And so when you're thinking about access to opportunity: the AI system that denies you or your loved one the kidney transplant because there was a race-based correction put in it, right, true story. The self-driving car that doesn't see you as a pedestrian, because research shows it doesn't work as well on short people compared to tall people. So the vertically challenged like me and children are more, um, at risk, right? So the point I was making in the book was that we don't have to think about some hypothetical future, you know, to mobilize to make better AI systems. We can think about immediate harms, people who are being excoded now, and make decisions now for the people who exist now and for the next generation, uh, so that your hue isn't a cue to dismiss your, uh, humanity, and that data doesn't destine you to discrimination.

David Stiepleman: And absent your work, those are things we would never, uh, know, what's being written into the code, or excluded or included in the data sets. We would just never know.

Joy Buolamwini: I'd like to take all the credit, but there were so many people who were looking at this that I truly admire, many of them being black women scholars. So I think about people like Dr. Safiya Noble. She's a MacArthur Genius Award winner. She has the book, um, Algorithms of Oppression, where she was talking about bias in search engine results. She was trying to find an activity for her daughter to do with her playmates, and she searched for black girls, and what came up were these pornographic images. Same thing when you search for, uh, uh, Asian girls as well. Not so much if you search for girl or just white girl, and generally that's what that would map to. I think about Dr. Latanya Sweeney. She was the first black woman to get her PhD, uh, from, uh, MIT in computer science. She was on my committee. She did a test to see what would happen when you search for quote unquote black-sounding names versus white-sounding names. And it turned out when you put in a quote unquote black-sounding name, you would get ads that implied a criminal record, even if that person didn't have a criminal record. And so you then had these representational harms, because now you might not get the job or get the housing, because you look sus when I search you in the box. And so there are so many people who laid down the groundwork. I think about Dr. Ruha Benjamin and her work, Race After Technology and also Viral Justice. I just finished my PhD, uh, not too long ago, so I could keep going on with the bibliography and so forth.

David Stiepleman: I think that was all alphabetical.

Joy Buolamwini: In the back of the book, you know. But I think it's, uh, it's definitely, um, I also have to, um, shout out, uh, Dr. Timnit Gebru, who co-authored the Gender Shades, uh, paper with me. When I started my, uh, work at MIT as a master's student, she was finishing up her PhD at Stanford in the leading computer vision, um, lab in the world. And so her mentorship definitely, uh, helped me out. And then she goes on later to become the co-lead of Google's Ethical AI team. And then, you know, some drama happened, some of you might have, uh, seen. She actually has a book that'll be coming out soon, so. But yes, I did help, you know, with the white mask and so forth, for sure.

David Stiepleman: I appreciate your precision. Absent in part your work, fair enough. Um, you know, one of the themes in a lot of conversations we've been having, at least I've been having with others here, in our breakout, at lunch, at dinner last night, is learning and learning curves, and ending up in seats where you were trained to do something else, but all of a sudden you're a leader, and how you deal with that. And you talk about that a little bit in the book as, kind of, the age of the last masters or last experts, that AI could produce an apprenticeship gap. And I'm very interested in that, and how you think we should counter that, or if that's an okay thing, or what. Explain that, if you don't mind.

Joy Buolamwini: Yes. I remember I was speaking, I think I was on a panel with the, uh, CEO of Cohere, and he was all excited about the ways in which AI can, um, basically save us time, right? So that we wouldn't have to be doing the mundane essentials. And that's when I started thinking of this notion of the apprentice gap and also, uh, professional calluses, which then leads to the last master. So let me explain. So I used to play a little bit of guitar, and I dropped it so that I could go, you know, down this educational path. And then, out of nowhere, I was featured in Rolling Stone alongside some of those women that I mentioned to you. And so I felt inspired to go get a guitar. It was a Gibson Slash Les Paul, Appetite Burst, for those of you who know, you know. And so I got this guitar I always wanted to play. And as I started, I realized, oh, I kind of still have my calluses here, right? And those calluses came from prior years of practice. And so I was thinking about this in the notion of professions. What are the professional calluses that we build up that we might not even be aware of until it actually comes up in the context of doing your job? So now how does this relate to AI? If we're automating away all of the entry-level positions, right, how do you build up, uh, that experience you need? And hence the apprentice gap. And so if you never have the on-ramps to gain the skills, then we end up living in this age of the last masters.

David Stiepleman: So what do we do? I'm a lawyer by training; that's highly boring to most people. But the repetition of reading and, um, drafting and all those things, then six years in or ten years in, you can kind of automatically cite chapter and verse on contracts or whatever it is that you're doing, and then all of a sudden you have judgment, because you know sort of where things fit and how things go. I can't imagine developing that judgment without the repetition of an apprentice. How do we fix that?

Joy Buolamwini: I think you have to safeguard that apprentice time period. And so even if you're adopting automated tools, you don't want to have that kind of atrophy occurring. It's like if you never worked out, you know, that kind of thing. So I think you're going to have to be intentional about creating space and being disciplined about not just using the automated path because it exists.

David Stiepleman: Got it. It's like this technological determinism. It's there. We have to use it. We have to make it bigger, better. It's an interesting, very compelling force. Are we fighting against it, or are we trying to channel it, or, I don't know.

Joy Buolamwini: When it comes to technological determinism, this idea that technology will progress and so you might as well just go with the flow: my existence, the existence of the Algorithmic Justice League, the fact that we've had companies change their practices, that laws have been enacted and all of that, shows ultimately that we have agency. The future of AI is up to us, what we decide to create, how we decide to create it, the norms we establish. And so that's what gets me excited, because the fate of AI is not determined.

David Stiepleman: Let's talk about that, because we're in a room where people are investing a lot of capital. They're, um, they're contracting with services that embed AI. What's our responsibility? What are we supposed to be doing? What is this room supposed to be doing as we're building our businesses?

Joy Buolamwini: One thing that I think is really important to think through is this thing I think of as algorithmic, uh, hygiene, right? And so, a big part of it is, uh, data provenance: knowing where the data came from. Without that, you're liable to all kinds of lawsuits, which we've been seeing from, uh, some of the leaders as well. I also think about, uh, meaningful, uh, transparency. And I think a lot about affirmative consent. So much of what people have become excited about with generative AI has been built on a foundation of stolen data. And I don't think that will last. I think companies that are more careful about where that data is coming from, so they can avoid some of the legal pitfalls, will, in the end, last longer.
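As one way to picture what that hygiene can look like in practice, here is a small, hypothetical sketch of a provenance record kept alongside a dataset, so questions like where the data came from, what it was collected for, and on what consent basis can be answered later. The field names and values are illustrative, not a standard schema or any real AJL tooling.

```python
# Hypothetical "algorithmic hygiene" bookkeeping: a provenance record stored
# next to a training dataset. Fields and values are invented for illustration.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetProvenance:
    name: str
    source: str                  # where the data came from
    collected_for: str           # original purpose, to guard against context collapse
    consent_basis: str           # e.g. "affirmative opt-in", "licensed", "unknown"
    license: str
    known_gaps: list[str] = field(default_factory=list)  # under-represented groups, etc.

record = DatasetProvenance(
    name="voice-clips-v1",
    source="in-house recordings, 2024",
    collected_for="dementia-screening research",
    consent_basis="affirmative opt-in",
    license="internal research use only",
    known_gaps=["few non-native English speakers"],
)

print(json.dumps(asdict(record), indent=2))
```

Keeping even a lightweight record like this makes questions raised later in the conversation, such as data expiration, minimization, and reuse in new contexts, much easier to audit.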

David Stiepleman: Hmm, I have so many questions. Okay. You know, we were talking about your accolades and all the things that you're doing up front, which are incredible. It's a lot. And one of the things we've been talking about, that the groups I think have been talking about, is how do you organize your time? How do you prioritize? No one person or one group can do everything. And you had a moment like this as you were about to get the PhD, and you were like, this is too much. So what did you learn from that, or what have you learned since then? Like, some guidance for those of us who aspire to do a third of what you're doing in terms of, like, volume and impact or whatever. And a third's probably very high, but, um, some lessons for the group.

Joy Buolamwini: Yeah, no, I got a case of what I call the academic twisties. And so some of you all might remember, um, from the last Olympics when Simone, uh, Biles had the twisties. And so she flips and she turns, and the twisties occur, uh, for gymnasts when they're no longer able to track themselves in the air. And it can be very dangerous because you could land on your neck. It could potentially, uh, end your life. And so she had that decision of pulling out with the weight of gold literally on her shoulders. And as she was having, um, the gymnastics twisties, I was having the academic twisties. This was my fourth degree. I'm testifying. I'm doing art shows. I'm doing way too much. And at some point I was just so burned out that a week before I was supposed to defend my, uh, PhD, I just had to reach out to my committee and tell them, uh, that I couldn't. In fact, I was like, and I'm gonna quit. Then one of my, uh, committee members said, instead of quitting, why don't you just take a pause and then come back and, uh, think about it. So I did take a pause, and during that pause I reconnected with, uh, skateboarding. I was watching the Olympics and the Ghana skate commercial came out. I think it was Meta, Facebook, that, uh, did this commercial, but they were showing skaters in Ghana and all of that. So being from Ghana, being a former skateboarder, I thought, okay, if I drop out of MIT, maybe I could become an Olympic skateboarder as an alternative.

David Stiepleman: We can get you to Paris, I think, in a day.

Joy Buolamwini: You know, so my friends let me live that dream for a little bit. And then, uh, at that time, um, we started seeing the, uh, Taliban, um, resurgence back in, uh, Afghanistan, and women started burning their diplomas or hiding them and so forth. And it really just made me pause and think about how much I was taking for granted the opportunity to be in this body, to be in this skin, to be in this perceived gender, and to have those opportunities. Not so long ago, people who looked like me would have been, uh, workers or would have been slaves, or it would have even been illegal for me to read and write and have the knowledge that would allow me to earn this PhD, um, uh, from MIT, right? And so that was kind of the impetus that I needed, uh, to get back. But it was an important lesson for me that you don't have to do it all at once, all the time. You can take a pause, you can take a beat. Um, and it's okay. Things, things will work out.

David Stiepleman: We're gonna do Q&A for a couple minutes, get your guys' questions. We're gonna deploy what is now known as the Mendy Method, which is, I guess, you have to earn the follow-up question. Which was kind of, he can do that, I can't do that. He runs this firm. Okay, so, anybody have a question?

Joy Buolamwini: Questions.

David Stiepleman: Mics out.

Joy Buolamwini: Right here.

Question 1: Hi, I'm Chuma, the founder of Nexel. Thank you for talking. Huge inspiration. I've followed your journey.

Joy Buolamwini: Thank you.

Question 1: When I hear you talk about the apprenticeship gap, I find that fascinating. I'm curious how you balance that with the flip side of the coin, which is, as technology evolves, humans evolve, right? And things that we consider as requiring experts today, many years from now, those are sort of the tasks that apprentice, like, entry-level people will be doing, right? It's sort of like, even as I look back as a software engineer, what I was learning in college, people are learning in high school now, right? And so I'm curious how you think about that. Like, is there truly an apprenticeship gap, or does what is considered mastery today just move down, and then we continue to evolve, and what becomes mastery are things we don't even imagine right now?

Joy Buolamwini: I think that's a great question, because we have to think of different timelines. When I think of the apprentice gap, I'm thinking about it as it exists today with the adoption of AI systems. I think you make a great point, right? What is considered mastery will continue to evolve. I think about a non-profit named NEDA, the National, um, Eating Disorders Association. So they got all on the AI hype, right? We can automate everything. And their workers wanted to unionize. So I think there was a Vice, um, uh, headline, it was, I think, May 25th, something like: workers are unionizing, so they're going to replace them with a chatbot. They replaced them with the chatbot. It's not even a week before the next headline: chatbot shut down. Why was the chatbot shut down? Well, it turned out it was actually giving information that's known to make eating disorders worse. And so I think about the stage we're in with the development of AI systems. We're still so early with the development of AI systems, and I think sometimes there's a premature adoption before we have the safeguards in place, before we fully understand the jobs and the roles. And so with the apprentice gap, it's really saying, are we thinking through what it means if we automate away some of the entry-level positions? Not to say we can't reimagine it. But if we haven't reimagined it at this point and we're adopting it, we're going to be in trouble. It's a great question.

David Stiepleman: Sounds like we have time for another one. Hand over there.

Question 2: Hi, um, you said something that was kind of fascinating to me when you were talking about data provenance. Um, and I'm just curious, uh, it seems like right now we have Zuck championing this idea of an open source model, but it's not open source in how it's being trained and the data going into it. So like, how do you, I don't know, how do you reason with something like that and how should I feel about that as a consumer?

Joy Buolamwini: Yes, open source AI models are really fascinating to me for that very reason, right? Because if you're open sourcing something that's been trained, but we don't know what it's been trained on, and even if you say, oh, we have the weights to the model and all of that, and that is further than others have gone, we're not actually seeing the entire pipeline. And so I agree with you that it's an incomplete way of open sourcing. But I also, especially because so much of my work has been focused on biometric technologies, um, faces in particular, there are certain kinds of data that can be harder to anonymize, just because the data itself, at least in its raw form, is actually a representation of a person. So I think you have to be really cautious about what you're open sourcing and why, and what the risks are. So, for example, just having an open source data set of faces is extremely risky. People have been and can be targeted, we know, with the rise of deepfakes and so forth. So I certainly think you have to proceed with caution and ask how open, open for whom, under what terms, and also data expiration: is there ever a time to delete a certain, uh, data set, and what would the conditions for that be? Data minimization: how much data do we actually need to accomplish this, versus just getting as much data as possible just in case, uh, we might need it, um, later on. I also think a lot about context collapse, data collected for one purpose being used in a context it was never, uh, intended for. Uh, in the book I talk about, um, this reference of, uh, kangaroos are not caribou, thinking about, uh, self-driving cars. So let's say, uh, you train it on Canadian streets. Okay, cool, caribou, avoid the caribou. Now you take that model to Australia, and suddenly the animals hop! Not in the plane of reference! What are we going to do? That kind of thing. Another one I talk about in the book, um, was this, uh, Canadian company, and they were well intentioned, a health, uh, tech startup. They wanted to see if they could use voice analysis for early detection of dementia. And so they worked with, uh, Canadians who spoke English as a first language. And when they tested it on Canadians whose first language was French, it was, uh, detecting the French speakers as having dementia, which we know is not the right, uh, signal, but it's an example of context collapse. So that's where that data provenance, um, really comes in, but it's not just knowing where the data is coming from, but also thinking of questions of, uh, ownership, uh, and control.

David Stiepleman: You helped advise the administration on President Biden's executive order on AI, um, and you were referring a little earlier to states and cities that have passed laws, ordinances. There are, I don't know, 16 bills in the California State Legislature right now, including SB 1047, which is getting a lot of criticism. There's the EU. Is there a good model, is there a good composite of models? Like, what do you look at and think, that works, or are we so early we've got to stitch all this together?

Joy Buolamwini: I mean, right now we're seeing different kinds of approaches. So, in the EU, for example, with the EU AI Act, you have more of a risk-based approach, right? So saying, if the AI is high risk, then we're going to have, um, certain kinds of regulations, uh, come in. If it's lower risk, you're fine. And then in the U.S. we saw, with, um, particularly the Office of Science and Technology Policy, they released an AI Bill of Rights, which was a rights-based framework, which is saying, first, what is our vision? And coming back to some of the conversation we were having earlier, the vision and the values, and I think you want to start with the vision and the values, right? So AI systems should be safe and effective. We should be protected from algorithmic discrimination. There should be human alternatives and fallback. There should be notice and explanation. And so you certainly see multiple approaches coming on board. I think the hard part, and it continues to be difficult, is, when a technology is ever evolving, not restricting it so much that we can't get to the benefits, but also not being so free and open that immediate and, um, obvious harms are being let go.

David Stiepleman: And we're relying on companies to self-regulate that have proven they're not going to do that. Question.

Question 3: Hi, thanks so much for your time and everything you've shared. One quick question: we as young startups, uh, with limited resources, even if our heart is in the right place, and we're trying to care for a lot, coming from Latino backgrounds or different backgrounds, um, don't have the amount of money you need to even gather the data, uh, for the new areas you're trying to train for, or to go source data from the populations that you're serving. What advice do you have for startups like us where the incentives are not necessarily aligned? I'm trying to get the customers, and the customers may or may not care about some of these things. What do you think startups can tap into today that would help them get the resources, get the data, or do the collection? Grants are one way, but they take a long time and effort.

Joy Buolamwini: Yes. Values cost us something at some time. And I think in the short run, it is going to take more time. It is going to take more resources. But in the long term, if you have the more robust AI systems that are built on your values, you do have people who want to say, what are the responsible AI options? What are the ethical AI pipelines? That's probably one of the number one questions I get asked the most. We see all of these companies doing it in this rushed, hurried way because of FOMO. Are there alternatives where it's been built, uh, right? Um, and I think it's worth taking the time to build that foundation that's not based on stolen data.

Joy Buolamwini: I think the lawsuits are going to continue to mount, and with the cost of those, we're going to start seeing the impact on the bottom line, uh, sooner rather, uh, than later. So I think you have to have, uh, a longer-term vision for the type of company you want to be, the sorts of values you want to affirm. And if that means you don't move as fast as the other companies, which will later on likely be slowed down, right? That is a calculation you have to make, um, based on the values, uh, you have. But also people want systems that work, right? So, for example, I invested in one company, uh, Bloomer Tech, and they were addressing, uh, a major issue when it comes to, uh, healthcare, which is heart disease for women. It is the number one, uh, killer of women; one in three women who die, die of cardiovascular, uh, disease, but women are less than a quarter of research participants. And a lot of the information we have, and even the AI models that have been trained, have been trained on male data. As a result, the models that are being used right now fail half of humanity. There's a huge business opportunity, right, and so by them taking the time to actually address this data gap, they're actually able to better serve humanity in the long run, and I think that's worth the time.

David Stiepleman: I just got the high sign. Dr. Joy, uh, thanks for sharing your story in the book. It's a rare computer science page-turner. I really mean that, and you guys should read it. Uh, we're gonna, yeah. Okay. Yeah, would you read? Yes. Would you do another poem? Sure. Would you do claim again?

Joy Buolamwini: Well, as we are nearing the end of our time, I would like to end on a final poem, if you will all permit me. Do I have your permission?

David Stiepleman: Yes.

Joy Buolamwini: Yes Okay. Thank you. This one is called – I'm in between two, you can vote: “Unstable Desire” Or “The Brooklyn Tenants.” So who wants to hear “Unstable Desire?” And who wants to hear “The Brooklyn Tenants?” Oh, oh, okay. We're going Brooklyn. We're going Brooklyn.

David Stiepleman: I'm shocked by that vote. I gotta be honest here.

Joy Buolamwini: To the Brooklyn tenants, resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces, we celebrate your courage. No silence, no consent. You show the path to algorithmic justice requires a league, a sisterhood, a neighborhood, a summit, hallway gatherings, Sharpies and posters, research and potlucks, dancing and music. Everyone playing a role to orchestrate change. To the Brooklyn tenants and freedom fighters around the world, persisting and prevailing against algorithms of oppression, automating inequality through weapons of math destruction, we stand with you in gratitude. You show that people have a voice and a choice. When defiant melodies harmonize to elevate human life, dignity, and rights, the victory is ours. Thank you.

David Stiepleman: Thanks.

