
Small Language Models Could Make AI More Accessible


By Trey Price, American Consumer Institute

Artificial Intelligence (AI) has become a major force as more powerful programs open opportunities that were not previously feasible. Recent developments such as Microsoft’s Phi-3 have brought small language models (SLMs), as opposed to the large language models (LLMs) we typically associate with generative AI, into public discussion. While still in their infancy, SLMs could prove a more cost-effective way for more businesses to break into the AI market.

LLMs such as ChatGPT are now well known. Essentially, these models are probabilistic algorithms trained on massive amounts of data and designed to create human-like responses to prompts and questions. While not new, this technology has become more sophisticated over time and can now be applied to many tasks that require processing large amounts of information, such as summarizing articles and aiding the discovery of new medications.
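For readers curious what "probabilistic" means in practice, here is a toy sketch (not from the article, and vastly simplified): a language model assigns a probability to each possible next word and samples one. The words and probabilities below are made up for illustration; real models compute them over tens of thousands of tokens using billions of learned weights.

```python
import random

random.seed(0)  # make the example reproducible

# Hypothetical probabilities a model might assign to the next word.
next_word_probs = {"fox": 0.5, "dog": 0.3, "car": 0.2}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample the next word in proportion to its probability.
choice = random.choices(words, weights=weights, k=1)[0]
print("The quick brown ...", "->", choice)
```

Generating a full response is just this step repeated, each new word conditioned on everything produced so far.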

Developing these programs comes with challenges. Due to the amount of computing power and data required to train them, scale is a prerequisite for success. In practice, this means that large companies primarily drive development, often by partnering with AI startups that possess the necessary expertise.

SLMs are garnering more attention as companies launch their own versions, like Microsoft’s Phi-3. The main difference between LLMs and SLMs is that the latter have fewer parameters, the variables adjusted during training to fine-tune the program and improve its predictions. SLMs require far less computational power than their larger counterparts, which makes them capable of operating entirely on a personal device rather than through outside servers. Their development is therefore more feasible for small companies, and yet they are more powerful than one might assume: Microsoft reports that Phi-3 rivals much larger models on common benchmarks. While less useful for general purposes, SLMs trained on domain-specific data can be very efficient at the tasks they are trained for.
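A back-of-the-envelope calculation shows why fewer parameters translate into on-device operation: the memory needed just to hold a model's weights scales with its parameter count. The sketch below uses publicly reported figures (Phi-3-mini has roughly 3.8 billion parameters; GPT-3, an older large model, had about 175 billion) and assumes 16-bit (2-byte) weights; it is an illustration, not an exact accounting of any product's requirements.

```python
def model_memory_gb(num_params: float, bytes_per_param: float = 2.0) -> float:
    """Approximate gigabytes needed to hold a model's weights in memory."""
    return num_params * bytes_per_param / 1e9

# Illustrative parameter counts (orders of magnitude from public reports).
slm_params = 3.8e9   # Phi-3-mini scale
llm_params = 175e9   # GPT-3 scale

print(f"SLM weights at 16-bit: ~{model_memory_gb(slm_params):.1f} GB")
print(f"LLM weights at 16-bit: ~{model_memory_gb(llm_params):.1f} GB")
```

Roughly 8 GB can fit on a well-equipped laptop or phone; hundreds of gigabytes cannot, which is why LLMs typically run in data centers and stream answers to the user.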

Since LLMs require large quantities of computational power and resources to develop, they have mostly been the domain of large companies partnering with experts, which has drawn the scrutiny of antitrust regulators fixated on punishing big companies for their size. SLMs offer a free-market solution to this problem by allowing smaller companies to compete in the market.

LLMs still have a place, especially for general-purpose capabilities, but Phi-3 has already demonstrated that smaller models can deliver similar benefits. Reducing barriers to entry could expand options for companies looking to implement AI, with the potential to make work more efficient by streamlining administrative tasks.

LLMs have advanced significantly over the last few years and are transforming what is possible. SLMs could be the next step in that advancement. By reducing the resource requirements of development, SLMs could be built by smaller companies, bringing these benefits to even more workplaces.


Trey Price is a policy analyst with the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, visit us at www.TheAmericanConsumer.Org or follow us on X @ConsumerPal.