
Artificial Intelligence (AI) is not just a technological tool. With data as its building block, AI offers sophisticated ways of generating, connecting and interpreting information, and opens new avenues for designing, implementing and evaluating data-informed policies. By leveraging AI-driven methods such as computer vision, natural language processing (NLP) and predictive analytics, policymakers can tackle complex challenges more efficiently and inclusively.
AI is one of several emerging technologies in public policy and requires a measured, cautious approach to adoption and scaling. Within policymaking, each AI implementation should be treated as a pilot that explores its potential for public sector applications. Responsible use of AI is critical and requires continuous human oversight to anticipate and mitigate risks, such as bias and other ethical concerns.
While AI is transforming data-informed decision-making, policymakers in many countries still lack AI skills and have limited knowledge of AI infrastructure and enabling environments. The AI section of the Data to Policy Navigator aims to demystify AI for policymakers. It offers tips on how to harness the potential of AI for inclusive policymaking, with an emphasis on accountability, experimentation and careful oversight.
This section does not cover how to deploy AI-enabled public services, global governance of AI, the broader ethical concerns of AI or how to navigate international AI regulations.
AI technologies are deeply embedded in our lives. Most people rarely notice the presence of AI in their daily interactions, from email spam filters and search engines to translation services and recommender systems.
Before exploring AI applications in public policy, let’s define the term AI. It is important to note that different definitions of AI exist and continue to evolve.
Data is the foundational raw material for AI. Data quality, protection and governance are critical to ensuring that AI systems function effectively and ethically.
In the context of public policy, AI is a broad term encompassing technologies that enable computers to perform tasks or make decisions based on data inputs. AI includes machine learning, recommender algorithms, computer vision, natural language processing (NLP) and generative technologies like chatbots and voice interfaces.



In Mexico, women’s economic participation is limited by a disproportionate share of household and care work. To address this gap, the Government of Mexico City, the German development agency GIZ and other partners developed an intelligent, AI-driven map of care facilities.
NLP was used to process crowdsourced insights and administrative records. The resulting AI-informed platform now helps policymakers promote gender equality and economic opportunities for women, and identify locations for new childcare centres.
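To make the NLP step more concrete, the toy sketch below categorises free-text reports about care facilities using simple keyword matching. This is only an illustrative stand-in for the far richer NLP pipeline used in the Mexico City project; the categories, keywords and example reports here are entirely hypothetical.

```python
# Hypothetical keyword lists standing in for a real NLP classifier.
CATEGORIES = {
    "childcare": ["nursery", "daycare", "kindergarten"],
    "eldercare": ["senior", "elder", "retirement"],
}

def categorise(report: str) -> list[str]:
    """Return the care categories whose keywords appear in a report."""
    text = report.lower()
    return [cat for cat, keywords in CATEGORIES.items()
            if any(kw in text for kw in keywords)]

# Made-up crowdsourced reports, purely for illustration.
reports = [
    "New daycare centre opened near the market",
    "Senior residents need a retirement home in this district",
]
print([categorise(r) for r in reports])  # [['childcare'], ['eldercare']]
```

In practice, production systems replace the keyword lists with trained language models, but the overall flow — unstructured text in, structured categories out that can be placed on a map — is the same.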
All AI systems, from straightforward machine learning models to sophisticated neural networks, rely on data to learn, reason and decide. In essence, AI can transform data into actionable insights, making it a key tool for policymakers navigating complex challenges.
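As a minimal illustration of turning data into an actionable signal, the sketch below fits a least-squares trend line, predictive analytics in its simplest form, to a made-up series of annual service requests and extrapolates one year ahead. All figures are invented for illustration; real policy models would be far richer.

```python
# Minimal sketch: least-squares trend line over yearly observations.
def fit_trend(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

years = [2019, 2020, 2021, 2022, 2023]
demand = [120, 135, 150, 170, 185]   # hypothetical service requests

slope, intercept = fit_trend(years, demand)
forecast_2024 = slope * 2024 + intercept
print(round(forecast_2024))  # a rough forecast for planning purposes
```

Even this trivial model converts raw records into a forward-looking estimate a planner can act on; more capable AI systems extend the same idea to many variables and non-linear patterns.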
AI technologies can be either ‘open source’ or ‘proprietary’, and each approach offers distinct advantages and trade-offs:
Choosing between open-source and proprietary options affects budget, data security, control and ease of customization, all critical considerations for policymakers exploring AI options.
Many view open-source AI as systems that can be freely modified, shared and used, such as the large language models (LLMs) released by Meta or the world foundation models from NVIDIA. However, because AI is an extension of data, it raises new questions about what constitutes true ‘open-source AI’, and the debate over the definition continues today.
The Digital Public Goods Alliance (DPGA), for instance, mandates that the training and testing datasets of an AI system carry open data licenses for the product to be recognized as a digital public good (DPG). However, AI systems often draw on diverse datasets from many sources, and when those datasets contain sensitive information with potential for misuse, access must be carefully managed. As a result of these legal and ethical constraints on open-sourcing training data, many AI systems and products are currently not recognized as DPGs.
As AI technologies become widespread, adherence to international norms and regulations is essential. This ensures compliance with applicable laws, fosters interoperability with international systems and aligns local policies with global best practices. At the core of these standards are shared values such as transparency, accountability, fairness, safety and an emphasis on human rights.
Policymakers should align AI strategies with these evolving global standards to ensure ethical and effective deployment of AI.
Key international considerations include:
While all three — the OECD AI Principles, the UNESCO Recommendation on the Ethics of AI, and the ASEAN Guide on AI Governance and Ethics — stress transparency, fairness, accountability and respect for human rights, each has a slightly different focus and scope.
The OECD Principles, shaped by governmental and economic priorities, focus on policies for innovation, risk management and fostering interoperability across jurisdictions. The UNESCO Recommendation places stronger emphasis on cultural and societal dimensions, including protecting and promoting cultural diversity, ethical reflection, and the environmental impacts of AI. In comparison, the ASEAN Guide on AI Governance and Ethics focuses on practical implementation within the Southeast Asian context, providing region-specific guidance for both organisations and governments and fostering alignment among member states.


