
A Conversation with Daniela Amodei, Co-Founder and President of Anthropic

Daniela Amodei, Co-Founder and President of Anthropic, joins us for episode four of It’s Not Magic’s San Francisco season. Founded in San Francisco in 2021 as a Public Benefit Corporation, Anthropic has emerged as a global leader in AI safety and research, known for its flagship model, Claude.

In this episode, Daniela discusses Anthropic’s commitment to responsible scaling and how the company has grown to now serve over 300,000 business customers. The conversation dives into the “wonder and fear” of the current moment, covering the pace of model development, the importance of regulation, preserving culture amid rapid growth, and why generalists and specialists need each other to succeed. As a San Francisco native, Daniela also reflects on launching the company in her hometown and why the city’s culture of innovation remains the ideal base for their new downtown headquarters.

Thank you to Daniela for this illuminating conversation.


Episode Transcript:

Daniela Amodei: It feels very important that we sort of have this concept of a human in the loop to also just make sure that the models are not kind of veering into a direction that we don't want them to be. Are we letting the models be 100% fully autonomous for tasks? Or is there a process of being able to kind of bring humans and AI together?

Daniela Amodei: Hold light and shade is one of our values, and it's this concept that the technology itself has great risk and great opportunity, and that that is a very complicated, complex thing to hold. We have to consider and talk about all of the risks, all the things that could go wrong, while also considering thinking about and building towards a world where all the things go right.

Daniela Amodei: My sense is given the locus of the technology industry being here, the kind of orientation towards creativity, it doesn't surprise me that artificial intelligence, you know, first took off in the Bay Area. There's accountability that the non-tech residents of San Francisco kind of put on the technology companies that I don't necessarily think is bad. They're like, hey, we're providing a home for you. We just want you to be good citizens. Over the course of the past few years, there's been a lot more mutual understanding that I think has happened between what I'm describing as kind of old time San Francisco and Tech.

David Stiepleman: Thanks for having us.

Daniela Amodei: Oh my gosh. Thank you for having me.

David Stiepleman: It's great to be here.

Daniela Amodei: It's a pleasure. Good to see you again.

David Stiepleman: Can you tell people what room we're in? You can't really see the books on camera, but like we're in a library. It's amazing.

Daniela Amodei: We are in a library. Anthropic is such a unique company and I feel like a library speaks to the research roots of Anthropic, but in general, I think we're very well-read people at the company.

David Stiepleman: Oh, interesting.

Daniela Amodei: Very multidisciplinary. I think when we first started out, one of the things that most struck me was how many different disciplines people came from. We had political scientists and ethicists and biologists. I was a literature major. As you can see, the literature major has shown through.

David Stiepleman: Yeah.

Daniela Amodei: I actually encourage you, if you want to borrow a book on your way out, to go peruse.

David Stiepleman: Really?

Daniela Amodei: Yeah.

David Stiepleman: It's like kind of like a pass to come back in and return it, which is nice. So, it's January 2026. I feel like even if we drop this podcast tomorrow, so much stuff will change between now and then, whether it's geopolitics or the economy or tech or AI. Tell us where we are. How do you think about it?

Daniela Amodei: Yeah, light, easy question to start with.

David Stiepleman: Yeah, exactly.

Daniela Amodei: You know, I think one of the challenges of working in this field is it's much easier to look backwards and talk about an epoch that happened. But as it's happening, sometimes we don't even necessarily expect or fully understand where we are in relation to what will be true six months from now or three months from now. But I think there's a few things that feel especially present or happening right now. One is, at least internally, we feel like the last class of models that we released, Opus 4.5, was a big step up just in its capabilities, right? The level of intelligence of the model, but also the way that it's now being used in the world. I think particularly on the coding dimension, you have developers who are now saying, I'm running Claude Code 100% of the time that I'm working.

David Stiepleman: Wow.

Daniela Amodei: It's making me twice as productive, right? It's doing most of the low-level tasks for me. We're also seeing quite a lot of adoption in biological sciences and healthcare, where we've always had this dream. It's something we talked about really from day one. I mean, it sounds aspirational, but can you use tools like Claude to literally help find cures for diseases like cancer? And I think this is sort of this big dream, this big idea we've had. But now as we're doing company planning, we're talking about, hey, can we contribute to helping eradicate polio in 2027 and 2028 if we partner with groups that are doing that work? So, I think for us, the biggest thing I would say right now is just the step up in the capabilities of the models. And then, you know, all of 2024 and 2025, “agents” was kind of this word that was thrown around; I almost feel like it lost meaning. But I think this is truly the first time we've seen Claude capable of significant agentic abilities.

David Stiepleman: Okay.

Daniela Amodei: So Claude is able to actually go onto your computer and move your files around or open them or process them or delete them or restructure them. That was just not possible, literally, in December. So, the difference between 2025 and 2026, in the course of a month, has felt like a big increase to us.

David Stiepleman: Yeah. I feel like at the end of the year, all the wrap-ups of the year on AI, were like, oh, agents are actually disappointing. This was not the year of agents. We’ve got years and years to go. What changed between a month ago and now?

Daniela Amodei: You know what I find interesting is, again, do we really know? I would say this, I don't know if prediction is the right word, but we have this belief in this concept of scaling laws, right? You give more compute to the models, you give them more information, and they get smarter at a predictable rate. But we couldn't have predicted in advance when we would've crossed a particular capability threshold, right? So we wouldn't have said, oh, we know that with this model, or by this date, Claude is going to be capable of agentic work. But I think as the models just get better and get smarter, we're able to do some training work to actually help empower some of those capabilities. More broadly, what we're seeing is just like watching a person get smarter over time. Right? Get more education, go to college, learn more things. And I don't necessarily know that we're going to be able to predict every future positive thing the models are going to be able to do.

David Stiepleman: I know everyone's doing this, but you're very organized about how you think about adoption and how it's being used and where it's being used and for what, and you were just talking about some of that stuff. You all released this Economic Index report. It told you something about rich and poor in terms of countries. It also told you something about adoption within the United States. Can you talk about some of that and what conclusions were you drawing from that?

Daniela Amodei: I think what's interesting is that, like most technologies, AI is following a mostly predictable adoption path, right? Which is that, unfortunately, the folks that have the most are going to be the first to adopt it. They have the most resources, the most time, the most access. And so they're going to be the folks that say, cool, I'm a computer scientist at MIT, of course I'm going to start using Claude Code. Right? But I think what we're seeing is the rate at which the technology is being adopted is significantly faster than anything else. If you look historically, even at things like the adoption of Google or when people had dial-up for the internet, AI is just going much more quickly in terms of adoption rates. The trend is the same, but the speed is faster. And so, I think what we're going to see over time is a broader distribution of adoption than perhaps with previous generations of technology. And I actually think the same will be true internationally as well. But I also think that there's work we have to intentionally do to make sure that there are not big regions of the world that are left behind because of how quickly the technology is developing. Being one year behind or two years behind could actually have an incredibly big impact.

David Stiepleman: How are you thinking about that? Knowing you a little bit and following you guys, a theme of Anthropic is: there are the slogans, there's the culture and the good stuff, and you're known as the ethical company. But you're actually trying to back that up with concrete, in-the-world stuff, to use the technical term. On that point, in terms of adoption in countries where it's just not happening, how would you do that?

Daniela Amodei: I think there's a couple of different ways that we approach this. The first is we view it as always our responsibility to state the truth and be transparent about what we're seeing. And I actually think that's a really important step one: if you don't understand the problem, you can't do anything to make it better. I think it's a little bit unusual, maybe even a bit unique, about Anthropic that we prolifically publish papers and reports about the trends that we're seeing. That's everything from adoption, like we've talked about, where the technology's being adopted, to potential risks that we see, to suggestions for policy about how to address some of these issues, because we alone cannot solve the problem. I actually think corporations, even all together, still can't solve the problem. It's going to take action from civil society groups, from government. We do personally try to take some responsibility for this, and Anthropic has two different teams that work on something we call beneficial deployments. This is essentially our effort to deploy the technology for things like biological and life sciences, healthcare, and the developing world. So, we have a number of pilots with the Gates Foundation. We also work with the Clinton Health Access Initiative. We're running many different experiments to see how we can approach potentially deploying Claude in developing countries.

David Stiepleman: Got it. Let's talk about Anthropic, the ethical company. So, I think everybody who's listening to this will already know about Constitutional AI, but spend a minute on what you actually mean by that?

Daniela Amodei: Constitutional AI is this idea that our researchers sort of invented, and the reason we came up with it is that the previous techniques used for training LLMs were essentially reward-function systems. So basically, if you ask the model a question and it answers correctly, you give it a reward. If it answers incorrectly, you don't give it the reward. Sometimes you even give it a form of, punishment is the wrong word, but you say, hey, that was the wrong answer. We felt that there was a better way to approach how the model learns, and it evolved into this concept of Constitutional AI, which is essentially, rather than saying, hey, this is a good answer, this is a bad answer, this is an ethical answer, this is an unethical answer, we actually gave Claude a system, a framework for thinking about ethics. So, we trained it using a variety of different documents. Most famously, in the original constitution, we put in the UN's Universal Declaration of Human Rights. The Apple terms of service are in there. We recently published Claude's Constitution, but essentially, we're trying to say, we really want the models to understand a broader system of ethics. So not just, you know, one particular question, how are you answering it, is it correct or incorrect, but how do you actually give it a moral framework to say how people should be treated? How do you make sure that you're not discriminating against people? How do you make sure you're thinking critically about how you approach complex ethical questions? And what we've seen is that over time this not only increased the ethical quality of its answers, but it increased its overall intelligence and capability too.

David Stiepleman: How do you judge if that's working? Can you measure that? I mean, it's ethics, so it's famously subjective. How are you determining whether or not it works?

Daniela Amodei: So there are two different mechanisms that we use. The first is definitely looking at performance compared to competitors, but the second is there's an evolving set of benchmarks used around a variety of different measures of model capabilities, but also models' abilities to grapple with complex issues. And it's really tricky because the benchmarks are pretty outdated. They were made for a different era of LLMs, even prior to LLMs. But what we're attempting to do is have a system of benchmarks and evaluations that are agreed upon in the academic community to say, how are we actually going to measure these models? And I think it's a little bit of an imperfect art, but Anthropic has invested significantly both in measuring our own models and in trying to contribute to how these benchmarks should exist in the world.

David Stiepleman: Got it. We're kind of in the pre-collective-action era in terms of how AI should live in the world. And right now, I think one of the things that you guys talk about is that Constitutional AI ought to result in a race to the top, right? Where other companies, hopefully, or other players adopt it because they see that it's working, they see that it benefits you guys, so they'll want to do it. And you're in favor of that.

Daniela Amodei: You know, I think race to the top is a really useful concept that we've actually applied across a variety of different ways that we operate. One of the most classic examples is that many of the guardrails and safety techniques, Constitutional AI being one, were pioneered by Anthropic. And what we saw was that this drove demand for our products in the market. I think enterprise customers in particular really care about having a lot of these safety, security, and reliability features in place. And so, a lot of the time what would happen is an enterprise would say, wow, we're choosing to use Claude, we're choosing to partner with Anthropic, because of these techniques that you used, right? Or because of these guardrails that you put in place. And what we found was that competitors then said, wow, in order to be able to compete, we have to be able to offer these kinds of guardrails. I think we've also seen it happen in some other ways, even at the more basic research level. Anthropic has an interpretability team. This is basically the team that you can think of as Claude's neuroscientists, trying to understand what actually happens inside the models. One of my co-founders, Chris Olah, is sort of the pioneer of the field, and we talk about it with businesses, we talk about it in our research papers. And suddenly, you know, people are coming to Anthropic to work on our interpretability team, but they're also interviewing at competitors and saying, hey, if you had an interpretability team, I might be more interested in working here. What we've seen is interpretability teams now spinning up at competitor companies. And so, I think this idea that if you can find the intersection of something that is actually good and is good for business, of which there's a lot in our view, that actually drives competition in a healthy and positive way for the market. That just raises the waterline for everybody.

David Stiepleman: Are the enterprises that are coming, your clients, are they self-selecting in a particular way where you're getting certain segments of the corporate world versus others, sort of like, you know, you're getting the lawyers but not the crazy cowboys?

Daniela Amodei: So I think a really important dimension of this is that Claude's capabilities are also extremely powerful. I think over the course of the past several years, we've arguably had the strongest models for the majority of the time. What we've seen, particularly in areas like coding and agentic coding, is that those needs cut across industries, right? If you work in financial services, in healthcare, in the technology space, at a law firm, all of those areas have a need for technical development tools. Anywhere that's going to have a set of developers, people that are writing code, they're going to be using Claude Code. And so, there's enterprise demand to use our products regardless of industry. I think that also speaks a little bit to the specificity of this kind of artificial intelligence skill, right? We talk a lot about artificial general intelligence, but the models are really, really good, for example, at writing code. Then there's also a set of general use cases that I think apply across a lot of business sectors. And actually, what we've seen is pretty rapid adoption, unsurprisingly most quickly from technology companies, because again, they're going to be the most tech savvy. They're going to be reading the news about AI. But increasingly, especially over the course of the last year, I would say many more enterprise businesses have understood the need to have an artificial intelligence strategy. And I think in addition to having capable models and this very strong safety orientation, Anthropic is also just well oriented to being a business-to-business company. We care about partnership. We integrate directly with a lot of the companies that use Anthropic. We serve over 300,000 businesses worldwide. And we've just gotten a lot more practice actually working closely with customers in the enterprise space.

David Stiepleman: I love that. I was a transactional lawyer for a lot of years, and I think about my training. This question is about, are we in the age of the last experts, right? Because I think what people say is we're always going to need humans. We're always going to need judgment. We always need people like you sitting on top of organizations with vision, with judgment, with ethics, with the ability to communicate. But how do you train that and the way that, in all likelihood, you know, develop judgment and continue to develop judgment. The way I developed judgment was getting my reps as a young person and doing the drudgery, honestly. If we take that away, how do we make sure that people can develop judgment?

Daniela Amodei: I think we as a society are entering a really interesting period. What does productive work mean? Just the nature and the shape of it. I'm old enough to remember a time before the internet, and what you did for work every day looked really different then; the difference between 1990 and 2010 was dramatic. We're kind of compressing that into a much shorter period of time in terms of the rate of technological development in artificial intelligence. But fundamentally, you know what's interesting is, I should start with a caveat, which is, suddenly all these developers are saying, oh my gosh, Claude is so good at doing my job, this feels a little scary. As kind of a people leader and a manager, you know, I'm like, oh yes, of course, this could impact your work, but Claude can never do my job. Then I'll ask Claude to help me coach a report through a challenging situation, and I'm like, wow, Claude, that was a really good answer. My report could have asked you that question. And so I do think it's a reckoning that we're all going to have to face together. I will say, I think we are underestimating the degree to which AI and human compatibility and partnership are going to be how we walk into this new world. So much of what we've seen even today, even with the incredible capabilities of something like Claude Code, is that developers are better with Claude Code than Claude Code alone or the developer alone. Will that continue forever? Of course, we don't know. Will there be some disruption? I suspect yes, but I think what we're really seeing is that the intersection of AI doing what AI does well and humans doing what humans do well is actually going to be the way that we pull everybody up.
And you know, I'm also remembering we did a leadership offsite and our Chief Product Officer at the time drew this kind of diagram of what a developer does, and he said actually hands-on-keyboard coding is about 20% of their time. So even if you imagine

David Stiepleman: This was when?

Daniela Amodei: This was six months ago.

David Stiepleman: So, a lifetime ago.

Daniela Amodei: Maybe now it's 30%. I don't know. Maybe 30%. Maybe 40%. But nevertheless, there's so much work that goes into, how do you actually architect the code that you're writing? How do you have it communicate with other parts of the code base? What is the product you're actually trying to build? Who is the customer you're building for? How much time are you talking with them? You're overlapping, you're spending time with designers, you're spending time with product managers, right? You're trying to understand the customer. And so, a lot of where my mind goes is to people who have the ability to work with other people. Humans like to talk to each other, and regardless of how good AI becomes, I think there will always be a role for people to communicate with each other. Even if we were, you know, to imagine that a full 50% could be done by an AI, say it's 50% of the code-writing process for a developer, first of all, I don't think that's necessarily going to be the case, but even if it were, what it means is that the other 50% of the job becomes a bigger focus. I think our ability to actually develop new products, empathize with our users, figure out more of what they want and need, that just goes up. You know, I don't want to diminish that we think there are real risks here and there could be real disruption, but I also think as we all get more used to the technology and adapt to it, I believe there's going to be a lot of opportunity for all of us to be able to do more. Maybe in the example you shared, rather than coming up through the ranks of having to grind away, you know, when I worked in DC this was the ethos too. It's like, grind for 15 years and then you'll have the plum job. You kind of get to leapfrog some of that, and maybe that's actually nicer for the next generation of people, not having to go through the 15 or 20 years of the sort of drudgery.

David Stiepleman: It does all go back to something you said about when do we decide to flip the switch into collective action, whether it's industry-wide or there's some sort of government thing. Where are we relative to that? Where are we relative to having to have some guardrails that we're all going to kind of agree on, and are we there yet? And what should that look like?

Daniela Amodei: So I think there's a few different ways that we can approach this. One of the things that Anthropic has always felt strongly about is the need to have a human in the loop in a lot of work that's done by artificial intelligence. Some of that is for safety reasons, right? You just don't necessarily want a new technology to go off and take actions 100% independently on an end-to-end task without a human checking, particularly for very important decisions, right? Anything related to health or financial services, you just want to make sure a human is checking over it. I think in general, it feels very important that we have this concept of a human in the loop to also just make sure that the models are not veering into a direction that we don't want them to be. I don't know exactly the structure and form that might take, to bring it to your point about collective action, but I think this concept of, are we letting the models be 100% fully autonomous for tasks, or is there a process of being able to bring humans and AI together to do things collectively? And doesn't that give humans the opportunity to say, wow, is this really what we want the model to be doing? Is this the intention behind it? It provides some guardrails. Again, I don't necessarily have a recommendation yet for what that should look like, but I think this concept of really combining the work of AI and humans together is going to be really, really critical in the next few years.

David Stiepleman: Okay. This is the San Francisco season of our podcast. When you and I first met, we met with Mayor Lurie. He was our first guest on the season. We’ve had some incredible guests, including you. We're in downtown San Francisco. You're from San Francisco. What is it about this place? Was it just inevitable that AI had to be here? What is it about this place?

Daniela Amodei: Oh man. You have to be careful about asking a native what it is about San Francisco, because we'll talk forever. But one of the things that I find the most incredible about it is just the dynamism of the city, how much it has changed and has waxed and waned over the years. I grew up in San Francisco. I grew up in the Mission. I went to Lowell, and I have friends from different eras of time in San Francisco, and inevitably, at some point, someone will say something along the lines of, well, the way San Francisco used to be, how things were in the old days, was better. What I find so interesting is, my parents moved here in the 70’s, and I can easily imagine them saying, oh man, by the 90’s it was so different. Right? For one generation, the good old days were the 90’s; for another, the 70’s, when the hippies were here and it was creative, it was so artsy, and then all that kind of got washed away. And I think it's a good moment for reflection and for humility to just ask: what other city has gone through the level of transformation that San Francisco has, even over the course of the past 100 years, long before anybody in my family lived here? I think it is a one-of-a-kind city. The combination of openness, expression, creativity, I still feel that spirit in San Francisco. It has obviously changed a lot since I was a kid. But I think sometimes we underestimate the, I don't know if rebellious is the right term, but there's a counterculture component to the city that has always been there. And it's taken different forms. Speaking of evolution, I don't know if it was inevitable, but my sense is, given the locus of the technology industry being here and the kind of orientation towards creativity, it doesn't surprise me that artificial intelligence first took off in the Bay Area.

David Stiepleman: How do you think about the relationship between Anthropic and the city, or maybe if you want to generalize it, companies that are here in our city. Are there responsibilities? How do you approach that?

Daniela Amodei: I think what's so interesting is that, as natives, Dario and I never had a question of where we would start Anthropic. We always wanted to be in the city, to be home. Thinking of friends who don't work in technology, who I grew up with, right? Of course there's this sort of tension of, oh, the old San Francisco, the old timers, right? And these tech folks came in, they ruined everything. And I think, just like any industry, and I'm not a New Yorker, but I imagine financial services and old-time New Yorkers have probably reckoned with this as well, there's common ground that can be found on both sides. And there are parts of San Francisco where, you know, I spend time and it feels like nothing has changed, right? From the early 2000’s or the late 90’s, it feels like I could have gone back in time. Maybe people have iPhones now, but I think there's a healthy tension that exists in some ways. There's accountability that the non-tech residents of San Francisco put on the technology companies that I don't necessarily think is bad, right? They're like, hey, we're providing a home for you. We just want you to be good citizens. We want you to show up and care about the city that you're founded in. We want you to engage in politics in a healthy, productive, constructive way. We want you to care about the integrity of downtown. I think there's a lot more common ground, and I've actually felt, maybe this is in my head, but over the course of the past few years, in particular post-pandemic, there's been a lot more mutual understanding between what I'm describing as kind of old-time San Francisco and tech. I hope to see more of that. I think of Anthropic as the hometown team. We grew up here. San Francisco's our home. It'll always be our home. And I think there's something very special about getting to do what we do in the city we're from.

David Stiepleman: Let's shift gears. Because you mentioned a couple of times that you don't have a technical background. You went to school for literature. This is my favorite topic because I was a French major and now, with my partners, I help run an investment firm. I love that story in terms of being a generalist and figuring things out. How did you do that path? You also studied music, right? You played the flute in college. Do you still play the flute, by the way?

Daniela Amodei: You know, I haven't recently. I still go to the symphony. I love going. A lot of my best friends are actually still in music, but I have not played flute recently.

David Stiepleman: Do you use your music brain at work?

Daniela Amodei: I do. I actually think there's a set of skills that you learn around preparation and performance, you know, in an orchestral setting. There's the concept of collaboration and understanding where you fit in the larger orchestra, the notes that you're supposed to play, your awareness of your own role, but also your awareness of the entirety of the orchestra. I think about that a lot from a team perspective. In building the leadership team, I think a lot about what is everybody's instrument, right? What is the position that they play? Not just, is this the best oboist, but is this oboist going to partner really well with the clarinetist? How is this sound going to tune together? I think on the literature side, which is what I majored in, the critical thinking skills, you probably feel this way too, from French and from working in law, give you the ability to approach problems from different angles. I think the empathy that's built from understanding other people's stories is incredibly important for running an organization. So much of what I spend my time on is thinking about, who are the people that are actually solving the problems? What are the problems that need to be solved? How do we take something that's very abstract and break it down into practical steps? How do you show up for your users? I think the empathy piece is just incredibly important because everybody that is encountering AI, maybe not everybody, but close to everybody, is experiencing a combination of two conflicting emotions, right? One is excitement. If you've ever watched these models, to me it feels very similar to watching a person learn, right? You're like, this is incredible. I said, Claude, build me a website, and Claude just goes and does it. I'm not a developer. I don't know how to build a website, right? It's incredibly exciting. And then the second is fear. There's a component of anxiety. What is this going to mean?
Not just for my business, but maybe for my kids? A question I'm asked in meetings with CEOs of the Fortune 500 more often than you would expect is, what should I tell my kid to major in in college?

David Stiepleman: What do you say?

Daniela Amodei: I say it doesn't matter so much what they major in, it matters how they treat the people that they work with and the people they're encountering at school. I continue to think that the thing AI really can't do is interact with humans the way that humans do. The human relationship, the ability to read people, the ability to work well with others, that is something that we look for more than ever in the employees that we bring into Anthropic because I think that empathy, that ability to work well with other people to read the room is going to be more and more important as the technology advances.

David Stiepleman: Do you have a reliable way of screening for that?

Daniela Amodei: We have a culture interview that we have perfected over the five years that Anthropic has been in existence. We really look for a variety of different characteristics, right? We look for mission alignment, right? They're here because they care about the public benefit mission. We look for integrity. Have they made difficult decisions in their life to stand up for their values, whatever those are, for the things they believe in? We look for collaboration, right? The ability to work well with other people. Those are just a few. I don't want to give all the interview topics away. But you know, I think those are not necessarily things that you would expect to see in an interview at a company like Anthropic, but to us, the integrity of people who work here, their ability to partner well and work through tough topics, that's what we specialize in. And so I think the ability for individuals to do that well is going to be more important worldwide.

David Stiepleman: Again, not a developer, not a technical background. You're managing a lot of people and that's how they came up. Is that a language translation issue? Or is that overstated?

Daniela Amodei: I really feel like I could not be more fortunate in the people that I chose to co-found Anthropic with. I truly think over the course of the past five years, it's been sort of miraculous to watch how the seven of us and other early employees, just how aligned we are in our commitment to the mission. I think everybody who founded Anthropic, who came to Anthropic early, it was because of this clear North Star. This idea that there are real risks to artificial intelligence, we need to mitigate those risks, but that if we're able to do that, the potential upside is almost unbounded. The ability of these models to solve fundamental problems for humans, I think, is unparalleled. I can't think of another technology that could match it if we're able to get where we want to go, but in order to do that, we have to get the first thing right. And I think that has really united us in our ability to focus on these important topics. But I continue to be blown away by the quality of human that is at Anthropic. And I actually think that is our biggest strength by far.

David Stiepleman: What does hold Light and Shade mean?

Daniela Amodei: Hold light and shade is one of our values and it's really what I was sort of just talking about, right? It's this concept that the technology itself has great risk and great opportunity, and that that is a very complicated, complex thing to hold. There are organizations and businesses that are sort of centered around shade, right? Like how do you prevent a bad thing from happening? A lot of security companies are that, right? There are companies that are centered around light, right? They're like, this technology is going to be great. It's going to cure cancer, it's going to do wonderful things. Anthropic has to exist in both of those spaces, right? We have to think about what are the ways the technology can be abused, how can it go wrong? We view ourselves as stewards of ensuring that that doesn't happen. We have to consider and talk about all of the risks, all the things that could go wrong, while also considering, thinking about, and building towards a world where all of the things go right. That's a very unusual orientation for a company. It's complex, it's nuanced. It comes up in a lot of decisions we have to make, right? We think about the power and promise of the technology, but also the downsides.

David Stiepleman: Yeah. You're talking about culture in a lot of ways. Everybody's going to tell you that they have a great culture. We think we have a great culture. I think we're all stewards of our culture at Sixth Street, but as one of the stewards of the culture, to really make it concrete and live in the world, that's very hard. How do you think about that?

Daniela Amodei: You know, I think that culture is a few things. I think culture, it's things you say, but it's also things you do. It's actions. It's what behaviors are quietly rewarded or sort of quietly not rewarded. I think something that feels the most important to me about our culture at Anthropic is this concept of integrity. We poll employees very frequently to say, like, what does it feel like to work at Anthropic? What is your definition of our culture? Because by the way, culture evolves, right? Anthropic five years ago was seven people. Anthropic today is 2,500 people. The way they interact is not going to be the same as how seven people who have known each other for a long time interact. But you know, words that we consistently hear: low politics, high integrity, mission alignment. People really love their work. They're kind but direct, I think. All of these things are actual behaviors that you can observe and reinforce, right? You have to talk about them, you have to write them down, but they also have to be baked into the processes for how we exist with each other, right? To me, low politics just means having uncomfortable conversations, right? A lot of politics is like, I don't want to have this difficult conversation, right? I'm going to go behind your back, talk to somebody else about it. We have no tolerance for that. If you have something to say, you can find a way to say it respectfully to your colleagues, and I think we work through issues extremely well because it feels like we have to talk about it.

David Stiepleman: Well, like you said, everything that you're doing is difficult issues, so you have to model respectful debate where you get along, but you're very direct. How do you model that? Are you doing that at the top of the firm so everybody sees?

Daniela Amodei: I would say within the leadership team, that's absolutely how we communicate with each other. I have never felt anything other than 100% respect between the leadership of the company. So I think everybody respects each other immensely, but what that means is we're able to raise tough things and say, I actually don't think this is the right decision. Can we talk through how we got to this decision? I don't think it's right. Let's talk about it. There's a flip side to that, which is, sometimes we take longer to do things because we really want to get to the right answer, right? But I think for us, being able to model those values to the rest of the company is really important. And I think Dario and I also really aim to have a lot of transparency. To the degree that we can, we have this concept of notebooks in Slack, and they're basically just an open forum where Dario and I, but really the entire leadership team, anybody at the company, can make one. Just things we're thinking about, things on our mind. I post about things related to culture, related to decision making, related to the models. We talk about them, people can comment on them, people can ask about them at an all hands, and I think it's really important that we show employees that we ourselves don't have all the answers, right? We don't know, right? We're all walking into this very uncertain future together. We're going to do that better if we hear from people and if we get more opinions early in the process to be able to make better decisions.

David Stiepleman: I love the idea that the culture doesn't stay the same. Is there a tenet or a principle that you jettisoned at some point?

Daniela Amodei: I don't necessarily think we jettisoned something. I almost think if you think of a person, if you think of yourself 20 years ago, would you be like, was there a value that I jettisoned? You might be like, well, I think maybe my thinking evolved on how to do something fairly. And I think companies are a little bit like people that way, where the DNA sort of is what it is, right? Probably if you looked at yourself 20 years ago, or when you were a kid, you'd probably say, I recognize that person. I have little kids, and I was like, wow, I probably would recognize my kid 20 years from now, you know who they are. There are some qualities that are probably going to be the same, but their approach to living those values changes based on the experiences they have. And I think Anthropic has gone through evolutions like that.

David Stiepleman: That's cool.

Daniela Amodei: Right. We've said like, okay, holding light and shade before we had a product looks really different than holding light and shade when you serve the Fortune 500. I think really how we've approached challenging situations has just evolved to meet the moment that we're in.

David Stiepleman: The industry is crazy. Your life must be bananas. You've got to be thoughtful and very intentional about your time. How do you do that?

Daniela Amodei: I love the idea that I have a perfect formula figured out. It's an evolving process.

David Stiepleman: Yeah, of course.

Daniela Amodei: There's a couple things I would say here. I think the first is that it has taken some work for me, as someone who's an operator and an executor who likes to be busy all the time, to embrace this concept that thinking time is part of my job. I have a big team. I manage a lot of people. I travel for work. My days are usually very crowded. But my wonderful admin team has said, hey, sometimes an extra hour or two of thinking time in the morning makes a huge difference for your ability to execute better during the day.

David Stiepleman: Oh, so they noticed that?

Daniela Amodei: They gave me that feedback.

David Stiepleman: That's amazing.

Daniela Amodei: Yeah. I have the best team. I think it's always a little bit of a balancing act because of course you're missing out on time that you're spending with people in the company or externally. But I think as much as possible having breaks, and sometimes we've even experimented with no-meeting weeks, where we've said, look, of course we need to coordinate. There's so much that needs to happen. But for some of the executives, there's the ability to just step back, right? We have hack weeks for developers where we build new things, where we're outside of the day-to-day hustle and bustle. I love that. I think there's something important to this concept of building space. And then I think on the personal front, I truly believe that doing a job like this is a marathon, not a sprint, and you have to find a way to do it sustainably. I have two little kids at home, friends and family. I'm a well-rounded person who likes to do things outside of work. I don't have that much time to do that many things outside of work, but I have to exercise in the morning. I have to get enough sleep. Am I perfect about it? Of course not. But having worked in the technology industry for almost 15 years now, I've always worked at high growth technology companies. I've always worked at startups. They're always moving a hundred thousand miles a minute. And there is a little more room, at least psychologically: even in your worst moments, you can think, actually, half an hour for a jog can make your entire day better. And missing out on that half an hour of meetings is worth it to have a little bit of time, to have some space. I think it's especially important for an industry that can trend insular, right? In the technology industry, we can forget about the world outside of tech. It's important to me that I have relationships from before my life in tech. 
Some of my closest friends are, like I said, musicians, artists, writers, and being able to talk to them, like, how is this impacting you? How do you feel about the technology? It's not a perfect way of relating to everyone in the world, but I think having a wider perspective and thinking about how this is impacting, you know, real people's lives. That makes me better at my job, and that's a good investment of time.

David Stiepleman: 100%. That's great. We started talking about the library. What are you reading or if you wanted me to pull a book off the shelf, and I'll return it, what would you recommend?

Daniela Amodei: It's hard to pick just one. You know it's funny, I read a lot of science fiction. I read a lot of history. I don't know why those two things are interesting to me. Just off the cuff, have you read The Guns of August? About World War I?

David Stiepleman: Yeah, of course.

Daniela Amodei: It’s an incredible book. And what I find so striking is the intersection of the personal experience of people who were sort of thrust into leadership in this very tumultuous time, and then the massive impact that their decisions made on an entire continent full of people.

David Stiepleman: That's an incredible call out, that book, because it's, these systems that were set up? Why?

Daniela Amodei: Right.

David Stiepleman: And then the buttons were pushed and then it just ground millions of people into dust. What an interesting thing to say.

Daniela Amodei: And it wasn't intentional. Right? I mean, that's what I think she does so masterfully is, you look at every decision point, every individual slight, every miscommunication. And you think, was this inevitable? Maybe if the Archduke hadn't been assassinated, right? Maybe things would've gone differently. But was there sort of this pull that there was going to be a conflict, and it was just a matter of when? I don't know, but just feeling the weight of the responsibility of the people that were in those situations as individuals, and then just the massive impact that their decisions had. I think we can learn a lot from history, and I think the more we study history, the better decisions we make. So many things came out of that in later wars, right? Like we talked about, the constitution is trained with the UN Declaration of Human Rights. Right? Why does that exist? Where did that come from? It came from extremely painful lessons that humanity had to learn. That's probably the book I would pick. World War I is sometimes not talked about as much as World War II, but I think there are many salient lessons to be learned.

David Stiepleman: I agree with you. We're going to end it there because we're not going to do better than that.

Daniela Amodei: Thank you. It was a pleasure. Thank you so much.
