AI in the Public Sector: Between Pressure and Strict Constraints

The wave of innovation brought by artificial intelligence has reached every sector, and public organizations are no exception. The pressure to accelerate AI adoption is palpable, yet these institutions find themselves navigating a complex landscape, characterized by distinct constraints in terms of security, governance, and operations. These limitations significantly differentiate them from their private counterparts, making the implementation of AI solutions a far more intricate challenge.

A Capgemini study revealed that 79% of public sector executives globally are concerned about data security in AI systems. This figure is understandable, given the heightened sensitivity of government data and the legal obligations surrounding its use. As Han Xiao, VP of AI at Elastic, points out, government agencies must impose strict restrictions on the type of data they send over the network, which means defining precise boundaries for how data is managed and handled. This fundamental need for control over sensitive information is just one of many factors complicating AI deployment, especially when compared with the private sector's standard operating assumptions.

Unique Operational Challenges of the Public Sector

When private-sector entities expand AI usage, they often assume ideal conditions: continuous cloud connectivity, reliance on centralized infrastructure, acceptance of incomplete model transparency, and limited restrictions on data movement. For many state institutions, however, accepting these conditions could range from dangerous to impossible. Government agencies must ensure that their data remains under their control, that information can be checked and verified, and that operational disruptions are kept to an absolute minimum. At the same time, they often have to run their systems in environments where internet connectivity is limited, unreliable, or unavailable.

These complexities prevent many promising public sector AI pilots from moving beyond experimentation. "Many people undervalue the operating challenge of AI," says Xiao. "The public sector needs AI to perform reliably on all kinds of data, and then to be able to grow without breaking. Continuity of operations is often underestimated." An Elastic survey of public sector leaders found that 65% struggle to use data continuously, in real time and at scale. Compounding the problem are infrastructure constraints: government organizations may also struggle to obtain the Graphics Processing Units (GPUs) needed to train and run complex AI models. "Government doesn't often purchase GPUs, unlike the private sector; they're not used to managing GPU infrastructure. So accessing a GPU to run the model is a bottleneck for much of the public sector," Xiao points out.

SLMs: The Answer for More Practical and Secure AI

The many non-negotiable requirements of the public sector often make Large Language Models (LLMs) untenable. Small Language Models (SLMs), however, can be hosted locally, offering greater security and control. SLMs are specialized AI models that typically use billions of parameters rather than hundreds of billions, making them far less computationally demanding than the largest LLMs. The public sector does not need ever-larger models housed in offsite, centralized locations: empirical study has shown that SLMs can match or even exceed LLM performance in certain contexts.
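The scale difference translates directly into hardware requirements. A minimal back-of-the-envelope sketch (the parameter counts and the 2-bytes-per-parameter figure for fp16/bf16 weights are illustrative assumptions, not figures from the article) estimates the memory needed just to hold model weights for inference:

```python
def model_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of model weights for inference.

    Assumes fp16/bf16 weights (2 bytes per parameter); 4-bit quantized
    formats need roughly a quarter of this. Ignores activation memory
    and the KV cache for simplicity.
    """
    # params_billions * 1e9 params * bytes_per_param bytes / 1e9 bytes-per-GB
    return float(params_billions) * bytes_per_param

# A 7B-parameter SLM fits on a single workstation-class GPU or server...
print(model_memory_gb(7))    # 14.0 GB
# ...while a hypothetical 400B-parameter LLM needs a multi-GPU cluster.
print(model_memory_gb(400))  # 800.0 GB
```

This is why an agency without GPU-cluster experience can realistically self-host an SLM but not a frontier-scale LLM.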

SLMs allow sensitive information to be used effectively and efficiently while avoiding the operational complexity of maintaining large models. Xiao puts it this way: "It is easy to use ChatGPT to do proofreading. It's very difficult to run your own large language models just as smoothly in an environment with no network access." SLMs are purpose-built for the needs of the department or agency that will use them. Data is stored securely outside the model and is only accessed when queried. Carefully engineered prompts ensure that only the most relevant information is retrieved, providing more accurate responses. Using methods such as smart retrieval, vector search, and verifiable source grounding, AI systems can be built that cater to public sector needs. Gartner predicts that by 2027, small, specialized AI models will be used three times more than general-purpose LLMs. For those evaluating on-premise deployment, AI-RADAR offers analytical frameworks on /llm-onpremise to assess the trade-offs between cost, performance, and data sovereignty.
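The retrieval pattern described above can be sketched in a few lines. This toy example (the bag-of-words "embedding," the sample documents, and the scoring are illustrative stand-ins; a production system would use a locally hosted embedding model and a vector store such as Elasticsearch) shows how a query selects only the most relevant document before any model sees it:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'. A real deployment would use a
    locally hosted sentence-embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative document collection; data stays outside the model.
documents = [
    "Procurement report for road maintenance 2023",
    "Minutes of the city council meeting on waste collection",
    "Invoice for school building renovation works",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list:
    """Return the k documents most similar to the query, so the SLM
    answers only from retrieved, citable sources."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("road maintenance procurement"))
```

Because data lives in the index rather than in the model's weights, access can be logged, restricted, and audited in ways that a monolithic LLM does not allow.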

Beyond the Chatbot: The Potential of SLMs for Data Management

"When people in the public sector hear AI, they probably think about ChatGPT. But we can be much more ambitious," says Xiao. "AI can revolutionize how the government searches and manages the large amounts of data they have." Looking beyond chatbots reveals one of AI's most immediate opportunities: dramatically improved search. Like many organizations, the public sector has mountains of unstructured dataโ€”including technical reports, procurement documents, minutes, and invoices. Today's AI, however, can deliver results sourced from mixed media, like readable PDFs, scans, images, spreadsheets, and recordings, and in multiple languages. All of this can be indexed by SLM-powered systems to provide tailored responses and to draft complex texts in any language, while ensuring outputs are legally compliant. "The public sector has a lot of data, and they don't always know how to use this data. They don't know what the possibilities are," says Xiao.

Even more powerfully, AI can help government employees interpret the data they access. "Today's AI can provide you with a completely new view of how to harness that data," Xiao explains. A well-trained SLM can interpret legal norms, extract insights from public consultations, support data-driven executive decision-making, and improve public access to services and administrative information. This can contribute to dramatic improvements in how the public sector conducts its operations. Focusing on SLMs shifts the conversation from how comprehensive the model can be to how efficient it is. LLMs carry significant computational and performance costs and require specialized hardware that many public entities cannot afford. While SLMs still require some capital expenditure, they are less resource-intensive than LLMs, so they tend to be cheaper and to reduce environmental impact.

Public sector agencies often face stringent audit requirements, and SLM algorithms can be documented and certified as transparent. Some countries, particularly in Europe, also have privacy regulations such as GDPR that SLMs can be designed to meet. Tailored training data produces more targeted results, reducing the errors, bias, and hallucinations to which AI is prone. As Xiao puts it: "Large language models generate text based on what they were trained on, so there is a cut-off date when they were trained. If you ask about anything after that, it will hallucinate. We can solve this by forcing the model to work from verified sources." Risks are also minimized by keeping data on local servers, or even on a specific device. This isn't about isolation but about strategic autonomy to enable trust, resilience, and relevance. By prioritizing task-specific models designed for environments that process data locally, and by continuously monitoring performance and impact, public sector organizations can build lasting AI capabilities that support real-world decisions. "Do not start with a chatbot; start with search," Xiao advises. "Much of what we think of as AI intelligence is really about finding the right information."
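The source grounding Xiao describes can be sketched as a retrieval gate in front of generation: the system answers only when a verified document is sufficiently relevant, and refuses rather than guesses otherwise. The scorer, sources, and threshold below are illustrative assumptions, not part of any real system named in the article:

```python
# Illustrative store of verified, citable sources.
verified_sources = {
    "permit-2024-017": "Building permits are processed within 30 working days.",
    "budget-2024":     "The 2024 road maintenance budget is 4.2 million euros.",
}

def overlap_score(query: str, text: str) -> float:
    """Toy relevance score: fraction of query words found in the source.
    A real system would use embedding similarity instead."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def grounded_answer(query: str, threshold: float = 0.5) -> str:
    """Answer only from verified sources, with a citation; refuse when
    nothing relevant exists, instead of hallucinating past the model's
    training cut-off."""
    best_id, best_text, best = None, None, 0.0
    for doc_id, text in verified_sources.items():
        score = overlap_score(query, text)
        if score > best:
            best_id, best_text, best = doc_id, text, score
    if best < threshold:
        return "No verified source covers this question."
    return f"{best_text} [source: {best_id}]"

print(grounded_answer("road maintenance budget"))
print(grounded_answer("election results 2026"))
```

The refusal branch is the point: an auditable "no answer" is safer for a public agency than a fluent fabrication.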