Explore diverging stakeholder opinions, as well as the crucial common ground between incumbents, startups and investors.
The rapid advance of AI brings tremendous promise for progress and significant societal risks. In my first piece of this five-part AI series, I explored five hypothetical examples to engage with the ethical and legal questions that may arise from this next phase of AI development. So, with possible future landscapes mapped out, let’s return to the present and explore the key players making waves in AI on an industry level and how their competing interests are influencing the regulation debate.
The Challenge of Regulation: Seeking Consensus from Competing Perspectives
The task of regulating any industry, sector or technology is extremely challenging, let alone one like AI, which is moving faster than even experts in the field can reasonably keep up with.
Clamping down too fast on innovation in any field often stifles progress and hands an advantage to the nations and governments willing to let a thousand flowers bloom. Unfortunately, even regulation that initially addresses the problem it is designed to solve can have unpredictable second-order effects that end up canceling out the social benefits of the original intervention.
But history is also riddled with examples where the pursuit of profit compromised safety and where failure to regulate effectively proved harmful. For decades, tobacco companies were allowed to deny the harmful effects of smoking, while Big Pharma was left unchecked to market drugs like Thalidomide and opioid painkillers at a huge cost to public health. In more recent history, regulators' inability to intervene properly in social media has negatively impacted social cohesion and, at times, the functioning of democracy, while the rights of workers in gig economy jobs have been first sidelined and then contested.
Whatever side of the regulatory debate you come down on when it comes to AI, two things are clear from history:
First, public wellbeing intersects with commercial priorities, and second, the winners and losers in any industry will depend heavily on how the regulatory story unfolds. Naturally, the regulation of AI has become one of the most debated topics in our society, with differing perspectives emerging from the range of players in tech and business. By exploring the perspectives of incumbents, startups, and investors, we can map out the regulatory debate and start to see how these competing interests intersect to determine where the chips fall.
Incumbent Instincts: Big Tech’s Preference for Self-Regulation
Industry leaders want to remain captains of the AI ship they’ve built.
Mark Zuckerberg, whose opposition to Elon Musk’s doomsday warnings against AI went viral, has argued repeatedly against overly restrictive regulations: “If you’re arguing against AI then you’re arguing against safer cars that aren’t going to have accidents, and you’re arguing against being able to better diagnose people when they’re sick.” In the past, he has also stated that existing controls across safety, privacy and data security make additional regulation unnecessary. Ultimately, Zuckerberg believes companies like Facebook can and should self-police AI risks through internal practices, best understood by the engineers building the technology. While this take might land with certain players in the industry, Meta’s disbanding of its responsible innovation team during the September 2022 layoffs raised eyebrows, since that team oversaw the evaluation of civil rights and ethics in AI development on its platforms.
By contrast, Google’s Sundar Pichai has struck a different tone, stating that “every product of every company will be impacted by the quick development of AI” and warning that society needs to prepare for technologies like the ones it’s already launched. Google has also published a document outlining “recommendations for regulating AI,” in which Pichai writes that society must quickly adapt, with regulation and laws to punish abuse, treaties among nations to make AI safe for the world, and rules that align with human values, including morality. The document also says regulators should “consider trade-offs between different policy objectives, including efficiency and productivity enhancement, transparency, fairness, privacy, security, and resilience.” The truth is there will always be a tug of war between corporate entities resisting oversight and government regulators seeking to protect the public, and while Google appears pro-regulatory at first glance, the very nature of its proposal clearly nudges regulation in a direction favorable to its business.
Public faith in self-governance has clearly waned after episodes like the Facebook-Cambridge Analytica scandal, making some oversight appear increasingly prudent, but most tech firms will point to the remarkable innovations they’ve created and their significant social and economic benefits. Brad Smith of Microsoft falls firmly into this category, declaring himself against the open letter, supported by Elon Musk, Steve Wozniak, and Yuval Noah Harari among others, that called for a pause on AI experimentation to allow regulation and safety to catch up. According to the likes of Smith, premature regulation risks constraining further beneficial breakthroughs. “Rather than slow down the pace of technology, which I think is extraordinarily difficult, I don’t think China’s going to jump on that bandwagon,” Smith said. “Let’s use six months to go faster.”
Overall, the large tech firms that dominate today's landscape have generally favored a light-touch regulatory approach to AI. Many regard that stance as reckless, but the reality is that the incumbents are able to sway regulatory frameworks with recommendations that favor their businesses while rushing ahead in the race to dominate the space. They benefit more than anyone from the rapid pace of AI development, and with billions spent annually on AI research, they are strong advocates for industry self-regulation over government intervention, which they view as an existential threat to their dominance.
The Startup Stance: Seeking a Level Playing Field
AI startups trying to establish footholds despite the incumbents' dominance tend to favor more aggressive regulation, but their position is not without nuance. They largely favor guardrails to prevent unchecked capability races and stop incumbents from crowding them out, as even the best-funded are financially unequipped to compete in an “unbound race” towards ever-increasing power and capability. Their objective, broadly speaking, is to define constraints clearly so that everyone competes on a level playing field. Notably, however, many of these projects are funded by the biggest incumbents, making their incentives, and therefore their stances, intertwined with those of the legacy players.
OpenAI’s Sam Altman (who, with hundreds of millions of users and an ever-increasing market capitalization, looks more like an incumbent with every passing day) has been the most vocal about the fine balance between laxness and over-regulation. After a world tour meeting government officials to try to influence regulatory frameworks, he expressed skepticism about the designation of “high risk” systems, a category OpenAI may fall into, as currently drafted in E.U. law.
Anthropic CEO Dario Amodei agrees that oversight is important for steering powerful AI responsibly. In a presentation to the Senate in July, he warned that AI is much closer than anticipated to overtaking human intelligence and even to helping produce weapons of mass destruction. Amodei, whose AI company is structured as a “public benefit corporation,” recommended that US policies secure AI supply chains, from the semiconductors that provide computing power to the models they ultimately produce.
Scale AI CEO Alexandr Wang echoes this sentiment, asking specifically for “algorithmic transparency standards [that] help build public trust in AI systems without revealing sensitive IP” in his recent presentation to the House Armed Services subcommittee. Wang is urging Congress to pass a major national security bill with dozens of provisions to further the adoption of AI. The bill narrowly passed the House and is awaiting a Senate vote.
Another key frame of reference comes from Anthropic regulatory lead Jack Clark, who posted a 6,000+ word tweet thread on AI policy calling out large tech companies for “doing follow the birdie with governments – getting them to look in one direction, and away from another area of tech progress”. He suggests that even the large AI policy teams at big companies effectively serve as “brand defense after the PR teams”.
With clear regulatory guideposts established, startups can focus innovation on beneficial applications of AI instead of dealing with the fallout from unchecked power. However, the calls for nuance reflect startups' interest in oversight that sustains responsible innovation without imposing excessive burdens on fledgling firms. Striking that productive balance remains an evolving challenge. While regulatory frameworks like Europe's AI Act point in the right direction on risk-based oversight, the compliance costs and complexity may still favor incumbents with significant firepower and manpower.
Investor Incentives: Weighing AI Risks Against Portfolio Returns
Investment firms help shape which AI ventures thrive through capital allocation, but financial incentives don't always align with social impacts.
Marc Andreessen recognizes AI requires long-term thinking but warns against government regulation that will stifle new entrants to the market. As an investor tasked with finding the next “Trillion Dollar Idea” in the space, he writes, “Big AI companies should be allowed to build AI as fast and aggressively as they can—but not allowed to achieve regulatory capture, [and] not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk”.
Sonya Huang at Sequoia Capital suggested that startups themselves can build the technology needed to mitigate AI's risks. She recognized that questions around ethics and regulations are “very real [and] thorny”; however, she used the example of hallucinations, the tendency of these models to make things up on the fly, to suggest “that’s getting solved by these foundational model companies. I wouldn’t be surprised if in the next six to twelve months we have models that are actually capable of truthfulness”. More than half of Sequoia's roughly twenty new investments this year have been focused on AI, up from around a third last year, numbers that had not previously been reported. Favoring regulatory frameworks that support startup innovation is aligned with the firm's financial incentives.
VCs continue to pour billions annually into AI ($52B in 2022), yet they have not substantially favored or funded ventures prioritizing explainability, safety, or social responsibility. Vinod Khosla, who is invested in numerous companies in the space, suggested that efforts to moderate the rate of progress, such as the proposed research hiatus, are misguided or even self-motivated. He warned entrepreneurs that halting AI advancement to over-focus on ethics and responsibility could undermine competitiveness. Khosla, Andreessen, and some of their peers often cite China's unfettered AI surge as justification to move as fast as possible, risks be damned. (More on varying government approaches to AI regulation in the next part of this five-part series!)
Herein lies the hardest tradeoff: investing responsibly often conflicts with maximizing near-term returns, but whether investors actively advocate for equitable regulation may depend on just how quickly the returns on that investment materialize.
AI Alignment: Seeking Shared Objectives & Charting a Responsible Path Forward
Ultimately, crafting a balanced regulatory approach requires engagement from all stakeholders. Heavy restrictions can easily stifle beneficial innovation, consolidating incumbent dominance, but unfettered development brings risks including job losses, inequality, and unpredictable accidents.
Allowing tech giants to self-police has repeatedly failed, harming consumers and society, but prescriptive top-down regulation struggles to keep pace with technology's rapid evolution. Startups need latitude to build AI responsibly but principles-first development alone cannot compete with the allure of power, profits and rapid capabilities growth. Lastly, investors must balance risks with their structural incentive to prioritize financial reward.
While perspectives differ, common ground exists across incumbents, startups and investors. The challenge is simple to state: build AI that uplifts society rather than wreaking havoc, avoid unbridled arms races that risk harmful accidents, and regulate in a way that is neither excessive nor stifling of progress.
Audrey Miller