Meet Lambert Hogenhout, whose pivotal role as Chief Data & AI at the United Nations places him at the forefront of leveraging AI for transformative change. In this exclusive interview, we delve into the critical challenges and ethical considerations that accompany the rapid advancement of AI technology.
Join us as we explore the insights and warnings from one of the industry's leading thinkers on the future of AI, its role in global policies, and the ethical frameworks necessary to navigate this swiftly evolving landscape.
Welcome, Lambert! It's fantastic to reconnect after our somewhat harrowing discussions at the Leaders In AI Summit at the F1 Racetrack in Austin, where you spoke about the foundations of AI governance taking shape globally as leaders begin to piece together the AI puzzle. A few months on, it seems that most of the predictions you made on stage are starting to come true.
Given the rapid pace at which AI technology is evolving, how do you see its role in shaping the future on a global scale? How is the United Nations leveraging AI to address global challenges?
The potential of AI is exciting, both as a standalone technology and as a catalyst for scientific research and other fields. AI can give us new insights into areas like climate and weather, which helps us practically in sectors such as agriculture and shipping and, at a larger scale, contributes to better management of our planet. The UN has a multitude of projects that leverage AI, from analyzing satellite imagery to planning logistics to supporting communication in local languages. The AI for Good conference in Geneva showcases the latest ideas each year. More broadly, though, I am concerned about the rapid pace at which we as a society are adopting AI. It will clearly have a profound impact on society, affecting the future of work, communication, and decision-making processes. Some researchers even suggest that it could redefine what it means to be human. What worries me is that I don't think we are very clear on where we are headed in the longer term.
Lambert, building on your insights about AI's rapid advancement, I'm curious about its impact on education. With advanced technologies like ChatGPT now accessible even to seven-year-olds in Africa for free, how do you see AI advancing skillsets and education globally?
Progressive advances in technologies—such as the computer, the Internet, the smartphone, and now conversational AI agents—have massively empowered billions of people worldwide with access to information and education. Compared to 40 years ago, there has been significant progress. Of course, there are still substantial disparities, and concerns remain. For instance, while these platforms are globally accessible, they often lack the ability to be tailored to local contexts, including language and culture, which is especially crucial in education.
"The potential of AI is exciting, both as a standalone technology and as a catalyst for scientific research and other fields. However, I am concerned about the rapid pace at which we as a society are adopting AI. It will clearly have a profound impact on our society, affecting the future of work, communication, and decision-making processes."
Lambert Hogenhout, Chief Data & AI, United Nations
Lambert, considering how AI is increasingly influencing global policies and daily life, I'm curious about the steps the United Nations is taking to ensure its ethical use. What frameworks and policies have you implemented for ethical AI and data governance? And how do these frameworks tackle the diverse ethical challenges posed by AI across different regions and cultures?
In the past few years, we have adopted Principles on Personal Data Protection and Privacy and Principles for the Ethical Use of AI. They were created to align with UN values, and they are public; you can find them on our website. We then have policies that lay out how these principles are put into practice, as well as standards for AI impact assessments. That is the formal governance, but I believe a human complement, in the form of a mindset, is also necessary. Last year, we released a Responsible Tech Playbook (also public) that teams can use to evaluate their tech projects, systems, or ideas. It helps you consider various types of risks within specific contexts. I think such an approach is crucial because not all issues related to the ethical use of data and AI can be anticipated and codified in rules. Each project must be evaluated in its unique context by the various stakeholders involved.