Using AI chatbots to facilitate public services

Author: GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit)

Editor's Note: In this article, you will learn how Artificial Intelligence (AI) chatbots can be used to facilitate public service delivery in an efficient, responsible and safe way. An AI chatbot is a software program that uses artificial intelligence to understand and respond to human language. This article will explain how an AI chatbot can be designed with important technological, legal and ethical considerations in mind, and how to effectively manage the risks involved. To gain a better understanding of the definitions and concepts discussed in the article below, please read this introductory text.


As governments around the world embrace Artificial Intelligence (AI) to enhance public services, AI-powered chatbots are becoming increasingly common in citizen-engagement platforms. From answering tax questions to helping people with small business inquiries, these digital assistants promise faster, more efficient interactions between citizens and government. However, behind this promising technology lies a growing concern about the potential risks to citizens and their data. When governments deploy AI chatbots without robust safeguards, they may expose sensitive personal information to misuse, bias, or breaches. To accelerate digital transformation while maintaining public trust, it is essential to embed privacy, reliability and safety at every stage of the process.

The Citizen Chatbot in South Africa

In 2024, the Policy Innovation Lab of Stellenbosch University developed an AI chatbot initially designed for public participation processes in South Africa. The chatbot’s objective was to enable people to report public service delivery issues with ease, for example a broken water pipe that supplies a village with water. This reporting system makes it more efficient for the relevant government agency to be informed and to act in a timely manner to address the issue in question. The AI chatbot is hosted via WhatsApp. When citizens open the chat, an automatic message appears informing them about the purpose of the chatbot. It then asks the user for informed consent so that the chatbot can correctly identify the issue and direct it towards the action needed to ensure correct service delivery. On the backend, the citizen is interacting with a Large Language Model – in this case ChatGPT, an AI chatbot developed by the American AI organisation OpenAI – which is prompted to extract the necessary information about the service delivery issue.
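To picture the extraction step, the minimal Python sketch below wraps an incoming WhatsApp message in a structured prompt and asks the model to return machine-readable fields. It assumes the OpenAI chat completions API; the model name, prompt wording and field names are illustrative assumptions, not the Policy Innovation Lab's actual implementation.

```python
# Minimal sketch: turn a free-text citizen report into structured fields.
# Assumptions: OpenAI Python SDK; model name and field names are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You extract public service delivery issues from citizen messages. "
    "Reply with JSON containing: issue_type, description, location, urgency."
)

def extract_report(message: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    return json.loads(response.choices[0].message.content)

print(extract_report("The water pipe on Main Road in our village burst yesterday."))
```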

Figure 1: Fictional example of reporting a public service delivery issue to the Citizen Chatbot; Source: Policy Innovation Lab, Stellenbosch University

The conversation is followed by a de-identification process, i.e. the removal of personally identifiable information: among other things, the telephone number associated with the WhatsApp account is deleted, and the time and location details are coarsened. The collected, de-identified data can then be analysed using topic analysis and presented in the form of an interactive map. This map makes it possible to sort the reports, compare districts and propose the most suitable action.
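To make the de-identification step concrete, the sketch below shows one way such a routine could look in Python: the WhatsApp phone number is dropped, the timestamp is coarsened to the day, and coordinates are rounded to roughly neighbourhood level. The field names and coarsening thresholds are illustrative assumptions, not the project's actual code.

```python
# Illustrative de-identification sketch (field names and thresholds are assumptions).
from datetime import datetime

def de_identify(report: dict) -> dict:
    """Return a copy of a citizen report with personal identifiers removed or coarsened."""
    cleaned = dict(report)

    # Remove the direct identifier: the WhatsApp phone number.
    cleaned.pop("phone_number", None)

    # Coarsen the timestamp to the day, dropping the exact time of the message.
    timestamp = datetime.fromisoformat(cleaned["timestamp"])
    cleaned["timestamp"] = timestamp.date().isoformat()

    # Coarsen coordinates to roughly 1 km by rounding to two decimal places.
    cleaned["latitude"] = round(cleaned["latitude"], 2)
    cleaned["longitude"] = round(cleaned["longitude"], 2)
    return cleaned

report = {
    "phone_number": "+27 82 000 0000",
    "timestamp": "2024-05-14T09:32:11",
    "latitude": -33.93245,
    "longitude": 18.86028,
    "issue_type": "broken water pipe",
}
print(de_identify(report))
```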

Figure 2: Exemplary interactive data visualization map showing the collected information; Source: Policy Innovation Lab, Stellenbosch University

Based on this chatbot, OpenUp – a South African civic technology organisation – developed and applied a risks-and-harms framework to consider not only the legal implications but also the ethical ones. The process required careful attention to data security, the needs of potential users, and the broader cultural context in which these frameworks operate. The following are key lessons learned from this experience.

The first step is to understand the context. South Africa faces significant digital inequality and limited digital capacity. Fewer than 20 percent of South African Internet users report that they use e-government services. When setting up an AI chatbot, we therefore need to consider that online interaction with the government is a relatively new practice in the country.

For the chatbot to be effective, it must be as effortless as possible for users to access and use. This means avoiding any additional steps, such as requiring citizens to download new software. Since WhatsApp is already widely used by the target audience (93 percent of active social media users), it offers a convenient and familiar platform for hosting the chatbot. However, this choice also introduces certain risks, which will be discussed later.

Transparency begins with clear disclosure. Users should be informed upfront that they are communicating with an AI chatbot, not a human agent. This disclosure should be unambiguous, visible at the start of the interaction, and repeated when necessary – especially in complex or sensitive contexts, such as legal, financial, or health-related services. The Terms of Use document, while necessary, tends to be complicated and should be further optimized to ensure clear disclosure.

Perhaps most critically, users should be clearly informed about what data is being collected, how it will be used, and who will have access to it. This includes metadata, message content, and any personal identifiers provided during the conversation. For instance, the South African citizen chatbot stores only de-identified data that is relevant to its purpose.

In principle, national data protection regulations ought to be respected. This also means adopting a data minimization approach: only collect what is strictly necessary to deliver the relevant service. Collected data must be stored securely, encrypted, and anonymized wherever possible to prevent unauthorized access or re-identification.
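One way to picture "collect only what is needed and store it securely" is the sketch below, which keeps a whitelist of allowed fields and encrypts each record before it is written to storage. It assumes the widely used cryptography package and invented field names; a real deployment would also need proper key management and access controls.

```python
# Sketch: data minimisation plus encryption at rest.
# Assumes the third-party "cryptography" package; field names are illustrative.
import json
from cryptography.fernet import Fernet

# Only fields strictly needed to route the service request are kept.
ALLOWED_FIELDS = {"issue_type", "description", "timestamp", "latitude", "longitude"}

key = Fernet.generate_key()   # in practice, handled by a key management service
cipher = Fernet(key)

def minimise(report: dict) -> dict:
    """Drop everything that is not strictly necessary for service delivery."""
    return {k: v for k, v in report.items() if k in ALLOWED_FIELDS}

def store_encrypted(report: dict) -> bytes:
    """Serialise the minimised report and encrypt it before storage."""
    payload = json.dumps(minimise(report)).encode("utf-8")
    return cipher.encrypt(payload)

encrypted = store_encrypted({"issue_type": "broken water pipe",
                             "description": "No water since Monday",
                             "timestamp": "2024-05-14",
                             "latitude": -33.93, "longitude": 18.86,
                             "national_id": "should never be stored"})
print(cipher.decrypt(encrypted))  # shows that the ID field was dropped before storage
```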

There should also always be a clearly designated authority accountable for the chatbot’s development, deployment, maintenance, and oversight. Citizens should be able to easily find out which authority is in charge and what standards the chatbot is expected to meet – technically, legally and ethically.

Without these measures, public confidence and trust in public institutions and their digital service delivery can quickly erode. A lack of transparency or obfuscation can make this worse by fuelling fears of surveillance, discrimination, or misuse. By contrast, a transparent approach reinforces the idea that digital tools serve the public interest, not just technological ambition.

Legally compliant and ethical use of Citizen Data

Frameworks such as the OHCHR’s guidance and the Copenhagen Framework on Citizen Data offer essential principles for ensuring that citizen-generated data is used responsibly. These frameworks emphasize that legal compliance alone is not enough; ethical considerations and a human rights-based approach must also guide the use of Citizen Data. In the end, citizens should remain the primary beneficiaries of their data, with strong safeguards to ensure their digital participation genuinely serves their interests.

Citizens should receive clear and measurable benefits in exchange for sharing their personal data. For example, a South African citizen uses the chatbot to report a broken water pipe in their village. Such a report should trigger a specific administrative action – in a visible and timely manner. Trust is strengthened and user engagement increases when citizens see their feedback responded to in time and with clear follow-up actions, such as an update on what steps will be taken to repair the water pipe.

To ensure trust, governments must also take precautionary measures to prevent misuse of the chatbot. This includes implementing strong authentication processes to block malicious bots and ensuring the chatbot limits interactions to its designated purpose. By automatically filtering out attempts to engage on unrelated or harmful topics, governments can protect both users and the integrity of the tool.
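A very simple way to picture this kind of scope limiting is a filter that checks each incoming message against the chatbot's designated purpose before it is ever passed to the language model. The keyword-based sketch below is only illustrative; a production system would more likely use a trained classifier or a moderation model rather than keyword matching.

```python
# Sketch of a scope filter that keeps the chatbot on its designated purpose.
# The keyword lists and the downstream call are illustrative assumptions.

SERVICE_KEYWORDS = {"water", "pipe", "electricity", "road", "refuse", "sewage", "streetlight"}
BLOCKED_KEYWORDS = {"password", "bank account", "betting", "medical diagnosis"}

REFUSAL = ("This service only handles reports about public service delivery issues. "
           "Please describe the issue you would like to report.")

def allow_message(message: str) -> bool:
    """Accept only messages that look like service delivery reports."""
    text = message.lower()
    if any(term in text for term in BLOCKED_KEYWORDS):
        return False
    return any(term in text for term in SERVICE_KEYWORDS)

def forward_to_llm(message: str) -> str:
    # Placeholder for the actual extraction step shown earlier.
    return f"(forwarded for extraction: {message!r})"

def handle(message: str) -> str:
    if not allow_message(message):
        return REFUSAL              # never forwarded to the language model
    return forward_to_llm(message)

print(handle("The water pipe near the school is leaking."))
print(handle("Can you give me betting tips?"))
```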

Figure 3: Example of what safety features in a citizen chatbot could look like; Source: Policy Innovation Lab, Stellenbosch University

AI chatbots come with a unique set of risks that go beyond traditional software systems, and governments must approach these with caution and foresight. One major concern is algorithmic bias — the tendency of AI systems to reflect and amplify existing inequalities in the data they were trained on. This can result in unequal treatment of certain groups, particularly in contexts where language, cultural nuances, or access to services play a role. It may also lead to inconsistencies in how public services and information are communicated and delivered, disproportionately affecting those who speak underrepresented languages or dialects, practice diverse cultures, or have historically faced barriers to accessing public services.

Another well-documented issue is hallucination, where AI models generate confident but false or misleading responses — potentially having negative implications when citizens interact with government chatbots. This risks spreading misinformation, causing confusion or even harm. Citizens may lose trust or be discouraged from using public services. Over time, this undermines credibility, accessibility, and public participation.

Compounding these risks is the black-box nature of many AI models: they produce outputs without offering clear reasoning, making it nearly impossible to trace how or why a particular response was generated. This threatens due process, as individuals can't know how or why decisions were made. It limits appealability, since there's no clear basis for challenging outcomes. Lack of transparency also weakens documentation and accountability in public service.

Lastly, over-reliance on AI can lead to technology dependency, where human expertise is devalued and essential public services become fragile in the face of outages, system errors, or model drift.

Therefore, the question of proportionality must be asked here: considering all the risks AI poses as well as the significant environmental costs associated with it, does the benefit truly outweigh the cost for the problem it's trying to solve?

Risks of using proprietary software and AI

Decision-makers must understand the technical differences between third-party software-as-a-service platforms and self-hosted or open-source alternatives. Each comes with distinct trade-offs in terms of data control, transparency, scalability, and long-term costs. Without this knowledge, public-sector teams may inadvertently lock themselves into fragile or opaque systems they can’t adapt or audit.

For instance, when using WhatsApp and ChatGPT for a government chatbot, governments have only limited influence over how data is stored and processed. For this reason, precautions must be taken to protect citizens: only the necessary amount of data should be collected, excluding sensitive data.

Checklist for using AI chatbots as a public service

Understand your User

Provide Transparency

Guarantee Accountability

Ensure Legally Compliant and Ethical Use of Citizen Data

Provide Inclusive Fairness

Consider AI-specific Risks and Risks of Using Proprietary Software