A senior software engineer at Google wrote a critique asserting that the internet search leader is losing its edge in artificial intelligence to the open-source community, where many independent researchers use AI technology to make rapid and unexpected advances.
The engineer, Luke Sernau, published the document on an internal system at Google in early April. Over the past few weeks, the document was shared thousands of times among Googlers, according to a person familiar with the matter, who asked not to be named because they were not authorized to discuss internal company matters. On Thursday, the consulting firm SemiAnalysis published the document, and it made the rounds in Silicon Valley.
In Sernau’s analysis, Google’s rivalry with startup OpenAI had distracted from the rapid developments being made in open-source technology. “We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?” he wrote. “But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source.”
Sernau did not respond to a request for comment.
As progress in generative artificial intelligence accelerates, employees at Google and other tech giants have engaged in spirited discussions internally and externally about the technology that is remaking their industry. Google, in particular, has come under pressure as the wild popularity of OpenAI’s chatbot ChatGPT has sparked concerns that the company may be losing its advantage in artificial intelligence, a field where it has long been a leader.
Yet Sernau asserted that the real threat to Google is coming from open-source communities, where engineers are speedily advancing models that rival the quality of those at big tech companies, and can be made more cheaply. These models, he said, can be faster, more customizable and more useful than Google’s own.
“We have no secret sauce,” Sernau wrote. “Our best hope is to learn from and collaborate with what others are doing outside Google.” He expressed concern that customers would not be willing to pay for Google’s models when technology of comparable quality is on offer for free.
A Google spokesperson declined to comment on the content of the post. In a recent earnings call, Alphabet Chief Executive Officer Sundar Pichai said, “Our investments and breakthroughs in AI over the last decade have positioned us well,” pointing to progress in developing models and working with developers and other partners. Pichai has called for AI regulation in the past, cautioning that the technology could be “very harmful” if not deployed in a thoughtful way.
In February, a large language model created by Meta leaked online, jump-starting progress on generative AI in open-source forums. The model, known as LLaMA, is smaller than the models that Google and OpenAI have been touting, making it easier to work with; researchers currently have to apply to Meta for access to LLaMA.
Google would do well to shift its focus to smaller, more nimble models, Sernau argued. “Giant models are slowing us down,” the engineer wrote. “In the long run, the best models are the ones which can be iterated upon quickly.”