Microsoft president Brad Smith gambles his reputation on AI
Microsoft president Brad Smith learned to work with D.C. Now a brewing debate over AI regulation is testing his well-worn playbook.
But the prophecy was right. As if on schedule, on Thursday morning Smith convened a group of government officials, members of Congress and influential policy experts for a speech on a debate he’s long been anticipating. Smith unveiled his “blueprint for public governance of AI” at Planet Word, a language arts museum that he called a “poetic” venue for a conversation about AI.
Rapid advances in AI and the surging popularity of chatbots such as ChatGPT have moved lawmakers across the globe to grapple with new AI risks. Microsoft’s $10 billion investment in ChatGPT’s parent company, OpenAI, has thrust Smith firmly into the center of this frenzy.
Smith is drawing on years of preparation for this moment. He has discussed AI ethics with leaders ranging from the Biden administration to the Vatican, where Pope Francis warned Smith to “keep your humanity.” He consulted recently with Senate Majority Leader Charles E. Schumer, who has been developing a framework to regulate artificial intelligence. Smith shared Microsoft’s AI regulatory proposals with the New York Democrat, who has “pushed him to think harder in some areas,” he said in an interview with The Washington Post.
His policy wisdom is aiding others in the industry, including OpenAI CEO Sam Altman, who consulted with Smith as he prepared policy proposals discussed in his recent congressional testimony. Altman called Smith a “positive force” willing to provide guidance on short notice — even to naive ideas.
“In the nicest, most patient way possible, he’ll say ‘That’s not the best idea for these reasons,’” Altman said. “‘Here’s 17 better ideas.’”
But it’s unclear whether Smith will be able to sway wary lawmakers amid a flurry of burgeoning efforts to regulate AI — a technology he compares in potential to the printing press, but one that he says holds cataclysmic risks.
“History would say if you go too far to slow the adoption of the technology you can hold your society back,” said Smith. “If you let technology go forward without any guardrails and you throw responsibility and the rule of law to the wind, you will likely pay a price that’s far in excess of what you want.”
In Thursday’s speech, Smith endorsed creating a new government agency to oversee AI development, and creating “safety brakes” to rein in AI that controls critical infrastructure, including the electrical grid, water system, and city traffic flows.
His call for tighter regulations on a technology that could define his company’s future may appear counterintuitive. But it’s part of Smith’s well-worn playbook, which has bolstered his reputation as the tech industry’s de facto ambassador to Washington.
Smith has spent years asking for legislation, establishing himself as a rare tech executive whom policymakers view as trustworthy and proactive. He’s advocated for stricter privacy legislation, limits on facial recognition and tougher consequences for social media businesses — policies that at times benefit Microsoft and harm its Big Tech rivals.
Other companies appear to be taking notes. In the past month, OpenAI and Google — one of Microsoft’s top competitors — unveiled their own visions for the future of AI regulation.
But Microsoft’s embrace of ChatGPT catapults the 48-year-old company, along with Smith, to the center of a new Washington maelstrom. He’s also facing battles on multiple fronts in the United States and abroad as he tries to close the company’s largest ever acquisition, that of gaming giant Activision Blizzard.
The debate marks a career-defining test of whether Microsoft’s success in Washington can be attributed to Smith’s political acumen — or the company’s distance from the most radioactive tech policy issues.
The proactive calls for regulation are the result of a strategy that Smith first proposed more than two decades ago. When he interviewed for Microsoft’s top legal and policy job in late 2001, he presented a single slide to the executives with one message: It’s time to make peace. (Businessweek, since purchased by Bloomberg, first reported the slide.)
For Microsoft, which had developed a reputation as a corporate bully, the proposition marked a sea change. Once Smith secured the top job, he settled dozens of cases with governments and companies that had charged Microsoft with alleged anticompetitive tactics.
Smith found ways to ingratiate himself with lawmakers as a partner rather than an opponent, using hard-won lessons from Microsoft’s brutal antitrust fights of the 1990s, when the company waged drawn-out legal battles over accusations that it wielded a monopoly in personal computing.
The pivot paid off. Four years ago, as antitrust scrutiny of Silicon Valley was building, Microsoft wasn’t a target. Smith instead served as a critical witness, helping lawmakers build the case that Facebook, Apple, Amazon and Google engaged in anti-competitive, monopoly-style tactics to build their dominance, said Rep. David N. Cicilline (D-R.I.), who served as the chair of the House Judiciary antitrust panel that led the probe.
Smith recognized Microsoft was a “better company, a more innovative company” because of its clashes with Washington, Cicilline said. Smith also proactively adopted some policies lawmakers proposed, which other Silicon Valley companies aggressively lobbied against, he added.
“He provided a lot of wisdom and was a very responsible tech leader, quite different from the leadership at the other companies that were investigated,” Cicilline said.
In particular, Smith has deployed this conciliatory model in areas where Microsoft has far less to lose than its Big Tech competitors.
In 2018, Smith called for policies that would require the government to obtain a warrant to use facial recognition, as competitors such as Amazon aggressively pursued government facial recognition contracts. In 2019, he criticized Facebook for the impact of foreign influence on its platform during the 2016 elections — an issue Microsoft’s business-oriented social network, LinkedIn, largely didn’t confront. He has said that Section 230, a key law that social media companies use as a shield from lawsuits, had outlived its utility.
“Having engaged with executives across a number of sectors over the years, I’ve found Brad to be thoughtful, proactive and honest, particularly in an industry prone to obfuscation,” said Sen. Mark R. Warner (D-Va.).
But as Microsoft finds itself in Washington’s sights for the first time in decades, Smith’s vision is being newly tested. Despite a global charm offensive and a number of concessions intended to promote competition in gaming, both the U.K. competition authority and the Federal Trade Commission in the United States have recently sued to block Microsoft’s $69 billion acquisition of Activision Blizzard.
Smith signaled a new tone the day the FTC decision came down.
“While we believed in giving peace a chance, we have complete confidence in our case and welcome the opportunity to present our case in court,” Smith said in a statement. The company has appealed both the U.K. and FTC decisions. Smith said he continues to look for opportunities where he can find common ground with regulators who opposed the deal.
When Microsoft was gearing up for regulatory scrutiny of the Activision Blizzard deal, Smith traveled to Washington to talk about how the company was “adapting ahead of regulation.” He announced Microsoft would adopt a series of new rules to boost competition in its app stores and endorsed several legislative proposals that would force other companies to follow suit.
On Thursday, he once again tried to stay a step ahead of worried Washington policymakers. Smith delivered Thursday’s address in the style of a tech company demo day, where executives theatrically unveil new products. There were more than half a dozen lawmakers in the audience, including Rep. Ted Lieu (D-Calif.), who has used his computer science background to position himself as a leading AI policymaker, and Rep. Ken Buck (R-Colo.), who co-chaired the antitrust investigation into tech companies with Cicilline.
Smith proposed that the Biden administration could swiftly promote responsible AI development by issuing an executive order requiring companies selling AI software to the government to abide by risk management rules developed by the National Institute of Standards and Technology, a federal laboratory that develops standards for new technology. (Such an order could favor Microsoft in government contracts, as the company promised the White House that it would implement the rules over the summer.)
He also called for regulation that would address multiple levels of the “tech stack,” the layers of technology ranging from data center infrastructure to the applications that enable AI models to function. Smith and his Microsoft colleagues have long made education a key part of their policy strategy, and in recent one-on-one meetings Smith has focused on teaching lawmakers, members of the Biden administration and their staff how the AI tech stack works, said Natasha Crampton, the company’s chief responsible AI officer, in an interview.
Smith, who has worked at Microsoft for nearly 30 years, said he views AI as the most important policy issue of a career that has spanned policy debates about surveillance, intellectual property, privacy and more.
But he is clear-eyed that more political obstacles lie ahead for Microsoft, saying in an interview that “life is more challenging” in the AI space, as many legislatures around the world simultaneously consider new tech regulations, including on artificial intelligence.
“We’re dealing with questions that don’t yet have answers,” Smith said. “So you have to expect that life is going to be more complicated.”