Conclusions from the SCEWC side event "Policy Tools for Ethical Urban AI Governance"

With the development and deployment of AI moving rapidly, a conversation around its ethical use is urgently needed. With the support of the Barcelona City Council and the Cities Coalition for Digital Rights (CC4DR), CIDOB's Global Cities Programme hosted a panel discussion titled "Policy Tools for Ethical Urban AI Governance" at the annual Smart City Expo World Congress in Barcelona. The side event brought together global experts promoting the responsible use of AI, particularly in urban settings. After addressing core principles such as fairness, non-discrimination, transparency, accountability, and cybersecurity, the panelists shared practical policy tools for local governments, civil servants, and policymakers. The discussion aimed to create a space for sharing, learning, and integrating diverse perspectives on ethical urban AI.

Joan Batlle, project coordinator at Barcelona City Council's international relations department, opened the event by introducing CC4DR's goal of promoting digital rights, exchanging best practices, and building innovative approaches. This work is crucial for civil servants and practitioners looking to integrate AI to improve citizens' daily lives. Marta Galceran-Vercher, Senior Researcher at CIDOB's Global Cities Programme, shared her programme's goal of tackling global challenges through an urban lens, including the issues arising from digital transformation and the advancement of AI. One key initiative is the Global Observatory on Urban AI (GOUAI), a collaboration between CIDOB, CC4DR, UN-Habitat, and the cities of Barcelona and Amsterdam that addresses a critical research gap: what ethical AI means in practice for cities. Through its Atlas, the GOUAI collects and maps urban AI initiatives that align with its principles. Cities are invited to contribute to the Atlas and share both successes and failures, offering policymakers vital tools for improvement and replication.

Fairness and non-discrimination are foundational principles for ethical urban AI. Leandry Junior Jieutsa, UNESCO Chair of Urban Landscape, describes these concepts as complex because they involve two questions:

What constitutes fairness? How is AI defined?

Fairness involves addressing social inequalities, particularly among cities with different resources and capacities. This first requires identifying who is impacted and who holds the solution, and then developing holistic solutions to bridge that gap. Achieving fairness with AI is challenging because, ultimately, AI mirrors our society. He argues that our society is neither fair nor completely rational; AI reflects our human biases because it is trained on our own data. Quality, unbiased data that reflects a just society is needed to create fair AI. Manuel Portela, a researcher at Universitat Pompeu Fabra, advocated a bottom-up, social approach to promoting fair AI practices. Algorithmic justice requires making those affected by AI visible and including them in development and deployment processes. Manuel proposed an interdisciplinary approach that merges engineering, computer science, and the social sciences to address intersectionality and discrimination.

Transparency and accountability are also essential pillars of ethical AI use. Shazade Jameson, Senior Consultant on Digital Governance, stressed the need for diverse perspectives when defining these principles. Clarity of definition is vital because the terms can be read through a narrow technical lens or through one with broader socio-political dimensions. Shazade described accountability as a functional relationship that demands structured governance and feedback mechanisms. Unfortunately, existing tools are often tailored to shareholders, not civil servants; public institutions, given the large number of people they serve, must prioritize accountability to their citizens. Transparency, on the other hand, should provide clarity throughout the AI lifecycle. This raises further critical questions:

How was the technology designed? Who made the decisions, and why?

Daniel Sarasa shared important insights from the city of Zaragoza. He observed that, beyond collaborations with some universities, AI implementation relies largely on third-party contractors, as the AI department is underfunded and civil servants lack the expertise to develop these technologies themselves. Relying on third parties is risky, particularly in low-resource cities, because they may lack the capacity to fully understand or scrutinize these technologies; transparency in the procurement process is therefore vital. Zaragoza aims to combine the common short-sighted, efficiency-focused application of AI with a long-term, future-oriented strategy that promotes digital literacy and uses art as a vehicle to foster curiosity about AI's potential.

Daniel outlined four policy tools and practices from Zaragoza's experience:

1. Updating existing policies to adapt to new needs, such as transparency ordinances.
2. Procurement clauses, which require the involvement of both procurement and HR departments.
3. Exams for public servants to build competency in AI administration skills.
4. Sandbox environments to test AI prototypes before implementation.

Additional policy tools useful to local governments include procurement standards, policy templates such as AI vendor checklists, impact assessments, and adherence to frameworks like the UNESCO Recommendation on the Ethics of Artificial Intelligence. Algorithm registries documenting the AI systems used in public administration help maintain transparency, while accountability requires feedback and reporting mechanisms. Cities can also draw on collective resources from organizations like CC4DR and other city networks. Local governments must make AI technology accessible and digestible to their citizens. Open-source initiatives and having a public representative are good examples of a "human in the loop" approach. As seen in Montreal and Zaragoza, holding public discussions in familiar places such as cafés and libraries can promote digital literacy and engage the public.

Define clear strategies while assessing risk: Is AI truly helpful in a given context, or not?

In practice, local governments play the roles of AI developer, deployer, and regulator. From the beginning, it is essential to define clear strategies and objectives while assessing risk. This means considering existing inequalities and evaluating capacity, especially for less-resourced cities. A key question here is whether AI is truly helpful in a given context. Risk management, especially for user-facing AI, is imperative, and recognizing that the answer may be "no, it is not necessary" is crucial for accountability. On a technical level, assessing whether AI is worthwhile matters because it requires considerable time, rules, data, and resources; in some cases, building on existing infrastructure or improving data analytics may be more effective.

In urban governance, the rapid development of AI presents significant opportunities along with many challenges and risks. The conversation around responsible and ethical AI must be ongoing, dynamic, and inclusive. Sharing policy tools and best practices that are grounded in ethical principles is essential for the future of the digital transformation. Equally important is promoting digital literacy and fostering public engagement to ensure a responsible application of AI. CIDOB will soon release a monograph that features insights from some of the experts who participated in this panel discussion, aiming to continue the conversation on ethical AI and provide a framework that promotes equitable AI solutions in urban settings worldwide.
