At WIRED’s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei responded to critiques of the company’s stance on AI regulation. Amodei addressed comments from David Sacks, the White House AI and crypto czar, who had accused Anthropic on X of using fear tactics to shape regulatory debates. She argued that the company’s focus on identifying potential AI risks strengthens the industry.
“We were very vocal from day one that we felt there was this incredible potential for AI,” Amodei said. Anthropic, she explained, wants to raise awareness of AI’s positive contributions while stressing that the accompanying risks must be managed. “We have to get the tough things right,” she added.
Anthropic’s Claude is used by more than 300,000 startups, developers, and companies. Through those relationships, Amodei said, she hears a consistent demand from customers for AI products that are reliable and safe. “No one says, ‘We want a less safe product,’” she remarked, comparing the disclosure of her AI model’s limitations to a carmaker publishing crash-test results to showcase safety improvements. Just as consumers tend to trust a brand that can prove its safety record, companies building on Anthropic’s AI are likely to favor products shown to be more reliable.
Amodei suggested that this market dynamic amounts to a form of self-regulation anchored by Anthropic’s own standards. “We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she explained. Customers, she said, are building workflows that prioritize AI reliability, a sign that they prefer products that minimize risk. “Why would you go with a competitor that is going to score lower on that?” she asked.