The company decided to give its artificial intelligence away. Its crown jewels, essentially.

The Silicon Valley giant, which owns Facebook, Instagram, and WhatsApp, had developed an AI technology called LLaMA that can power online chatbots. Instead of keeping the technology to itself, Meta released the system's underlying computer code, allowing academics, government scientists, and others who supplied their email address and were vetted by the company to download it. Meta was, in effect, giving its AI technology away as open-source software: computer code that can be freely copied, modified, and reused. That gave outsiders everything they needed to build chatbots of their own, and quickly. Yann LeCun, Meta's chief AI scientist, said in an interview that the platform that will win will be the open one.
In the race to lead AI across Silicon Valley, Meta stands apart for its approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, the company believes that the smartest way to extend its influence, and to move faster, is to share the AI engines that power its products.
Meta's actions contrast with those of Google and OpenAI, the two companies at the forefront of the new AI arms race. Worried that AI tools such as chatbots could be used to spread disinformation, hate speech, and other toxic material, those companies are becoming increasingly secretive about their AI product development.
Google, OpenAI, and others have criticized Meta, arguing that its unfettered open-source approach is dangerous. AI's rapid rise in recent months has raised alarms about the technology's risks, including how it could disrupt the job market. And within days of LLaMA's release, the system leaked onto 4chan, the online message board notorious for spreading false and misleading information.

"We need to be more careful about giving details or open-sourcing code" of AI technology, said Zoubin Ghahramani, a Google vice president of research who helps oversee the company's AI work. "Where could that lead to misuse?"

Some Google employees have also questioned whether open-sourcing AI could threaten the company's competitive edge. In a memo leaked this month to the website SemiAnalysis, a Google employee warned colleagues that open-source software such as LLaMA could cause Google and OpenAI to lose their lead in AI.

Meta, however, said it saw no reason to keep its code secret. LeCun called the growing secrecy at Google and OpenAI a "huge error" and "a really bad interpretation of what's happening." He argues that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta. Do you want to put every AI system under the control of a few powerful American companies? he asked.

OpenAI declined to comment.

Meta's open-source approach to AI is nothing new. The history of technology is littered with battles between open-source and proprietary systems. Some companies hoard the most important tools used to build tomorrow's computing platforms, while others give those tools away. Google, for example, open-sourced its Android mobile operating system to take on Apple's dominance in smartphones.
Researchers have long urged companies to share their AI technology, and many have. But those tactics are changing under the pressure of the AI race. The shift began last year, when OpenAI released ChatGPT. The chatbot's runaway success with consumers sparked a wave of competition across the industry, and Google moved quickly to fold more AI into its products.
Microsoft, meanwhile, has made a $13 billion investment in OpenAI.
Meta has invested in AI for almost a decade, spending billions of dollars on the software and hardware needed to build chatbots and other forms of "generative AI," systems that produce text, images, and other media on their own. Though Google, Microsoft, and OpenAI have drawn most of the attention, Meta has been working furiously behind the scenes to fold its years of AI research and development into new products.

Zuckerberg wants to make the company an AI leader, and he holds weekly meetings on the topic with his executive team and product leaders. Meta announced on Thursday that it had designed a new computer chip and upgraded a supercomputer specifically for building AI technologies. It is also designing a new data center with AI in mind.

"We've been building advanced infrastructure for AI for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do," Zuckerberg said.

Meta's biggest AI move was the release of LLaMA, what is known as a large language model, or LLM. (LLaMA stands for "Large Language Model Meta AI.") LLMs learn skills by analyzing vast amounts of text, including books, Wikipedia articles, and chat logs; ChatGPT, Google's Bard, and other chatbots are built on top of such systems. By pinpointing patterns in that text, an LLM learns to generate text of its own, from term papers, blog posts, and poetry to computer code, and to carry on complex conversations.

When Meta released LLaMA in February, it allowed academics, government scientists, and anyone else who provided an email address to download the code and use it to build a chatbot of their own. But the company went further than many other open-source AI projects: it let people download a version of LLaMA after it had already been trained on enormous amounts of digital text culled from the internet.
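To make that pattern-finding idea concrete, here is a deliberately tiny toy sketch in Python (nothing resembling LLaMA's actual code, which learns far richer patterns over billions of words): it counts which word tends to follow which in a scrap of text, then reuses those counts to generate new text one word at a time.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(model, start, length=5):
    """Repeatedly append the most common next word."""
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram_model(corpus)
print(generate(model, "the", length=3))
```

Real LLMs replace these word counts with billions of learned parameters and predict from long stretches of context rather than a single previous word, but the core loop of "study the text, then predict what comes next" is the same.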
Researchers call this "releasing the weights," a reference to the particular mathematical values the system learns as it analyzes data. It matters because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those who hold the weights can deploy the software quickly, easily, and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.

As a result, many in the tech industry believed Meta had set a dangerous precedent. Within days, the LLaMA weights were released on 4chan. Researchers at Stanford University used Meta's technology to build their own AI system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a body without being caught. It also produced racist material, including comments that supported the views of Adolf Hitler.

In a private conversation among researchers seen by The Times, Doumbouya said that distributing the technology to the public would be "like a grenade to anyone who walks into a grocery shop." He did not respond to a request for comment.

Stanford promptly removed the AI system from the internet. The project was intended to give researchers technology that could "capture the behaviors of cutting-edge AI models," said Tatsunori Hashimoto, the Stanford professor who led it. The team took the demo down as it grew increasingly concerned about the potential for misuse outside a research environment.

LeCun argues that this kind of technology is not as dangerous as it may seem. He said that small numbers of people could already generate and spread disinformation and hate speech, and that social networks like Facebook could restrict the spread of such toxic material.
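The economics of "releasing the weights" can be sketched with a deliberately tiny example, with a single learned number standing in for LLaMA's billions of parameters. Training means running many optimization steps over data, which is the expensive part; anyone simply handed the finished weight can skip all of that and make predictions immediately.

```python
import random

def train_weight(data, steps=10_000, lr=0.01):
    """'Training': many passes over data to learn the weight w in y = w * x."""
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        # gradient of the squared error (w*x - y)^2 with respect to w
        w -= lr * 2 * (w * x - y) * x
    return w

def predict(w, x):
    """'Inference': anyone holding the released weight can do this instantly."""
    return w * x

# Toy data generated by the hidden rule y = 3 * x.
data = [(x, 3 * x) for x in range(1, 6)]

# The expensive path: learn the weight from scratch.
learned_w = train_weight(data)

# The cheap path: download the already-trained ("released") weight.
released_w = learned_w  # in practice, a file holding billions of such numbers
print(predict(released_w, 10))  # approximately 30
```

For a real LLM, the training loop above is where the hundreds of specialized chips and tens of millions of dollars go; the released weights file lets everyone else jump straight to the last two lines.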
He said that you cannot stop people from creating nonsense or dangerous information, "but you can prevent it from being spread." For Meta, more people using open-source software can also help level the playing field as it competes with OpenAI, Microsoft, and Google. If every software developer in the world builds with Meta's tools, the company could be positioned for the next wave of innovation. LeCun also pointed to recent history to explain why Meta is committed to open-sourcing AI technology. He said the development of the consumer internet was the result of open, communal standards and collaboration, which helped build the fastest and most widespread knowledge-sharing network the world has ever seen. He said that progress is faster when systems are open. "You've got a more vibrant ecosystem where everyone can contribute."
This article was originally published in The New York Times.