**6. Discussion: Better Artificial Intelligence for Better Cities**

Makridakis [178] asks whether the AI revolution will create a utopian or dystopian future, or something in between. The answer to this question depends entirely on how we tackle the drawbacks of AI, and how we utilize AI in our cities, businesses and, more generally, our lives. As Batty [179] remarks, it is hard to predict the exact future of cities, but it is possible to build future cities, meaning that we can actively work in the present to improve contemporary cities and our results will ultimately be the cities of the future. Following this line of thought, if we focus on the pitfalls of AI, we can then search for ways to actually make AI better. *Better* in the sense of more useful for making our cities and societies more sustainable. The key areas of improvement for achieving AIs that are conducive to sustainability are illustrated in Figure 5, and further elaborated below.

**Figure 5.** Areas of improvement for artificial intelligence (Source: Authors).

The first issue to consolidate a sustainability-oriented AI is *stakeholder engagement*. In general, AI technologies are created exclusively by technology companies, with little or no consultation with wider interest groups or stakeholders. Active collaboration among a wide and inclusive range of stakeholders—ideally in the form of quadruple helix model participation involving the public, private, academic and community sectors—particularly in the development and deployment stages, will strengthen the sustainability potential of AI [180,181]. This is, in essence, a matter of inclusion and democracy. Given that the ethos of sustainability is about achieving a *common future*, we argue that no common future can be envisioned and realized unless proper forms of democratic governance are in place. Specifically, in relation to AI, this means that each AI technology affecting cities should be discussed by all urban stakeholders, instead of being imposed in a top-down manner by influential tech companies.

The second issue is the *trust* problem. The black-box nature of the decisions taken by AIs without much transparency (which, at times, are wrong), the possibility of AI failing in a life-or-death context, and cybersecurity vulnerabilities all limit public trust. AI technology needs to earn trust not only from the public and the way people perceive it, but also from the companies and government agencies that will be investing in AI [182–184]. This is a challenging problem because, as Greenfield [121] notes, AI is an arcane technology, meaning that, although it is already part of the everyday lives of many people, its mechanics and actual functioning are understood by only a few.

The next area of improvement concerns the *agility* issue. AI systems should be competent enough to deal with complexity and uncertainty, which are extremely common features of contemporary cities [185]. Moreover, AI systems should focus on the problem to be solved, rather than merely on the data, whose collection is arguably meaningless from a sustainability point of view unless it serves the purpose of addressing a previously identified SDG. In addition, AI technology needs to be as frugal and affordable as possible. This is critical for a wider uptake of AI across cities through public sector funds [186,187]. Expensive AIs are ultimately elitist AIs, which only a rich minority can afford. Elitist AIs can only be unevenly distributed, thus creating a divide between richer and poorer cities, as well as internal fractures within individual cities, where small premium enclaves coexist next to disadvantaged districts.

The fourth issue is the *monopoly* problem. A monopolistic structure behind technology development and deployment is problematic, as a lack of competition limits technological variation. Avoiding AI monopolies can make AI technologies more affordable and support current efforts in 'open AI' development. This, in turn, would also promote the democratization of AI research and practice, as well as decrease the risk of the formation of a *singleton* [188,189]. According to Bostrom [4], a singleton is a world order in which a single superintelligent agent is in charge. This is an unlikely situation when it comes to Level 1 and 2 AIs, but it might not be a remote possibility if only one tech company in the world has the capacity to build an artificial superintelligence.

Another critical issue is *ethics*. We need to develop AI in a way that respects human rights, diversity, and the autonomy of individuals. The European Commission's recent ethical guidelines for AI development offer a good starting point [190]. However, as stated by Mittelstadt [191], principles alone cannot guarantee the development of an ethical AI. Hence, we need to develop a global AI ethics—a multicultural system of moral principles that takes the risks of AI seriously—together with a mechanism to monitor ethics violations. Ethics should ensure the design of AI technologies for human flourishing around the world [192,193], but this is a very complex matter given that, as the work of Awad et al. [194,195] clearly demonstrates, universally valid and accepted ethical principles do not exist.

The sixth issue relates to *regulation* and regulatory challenges. AI cannot achieve sustainability and the common good if it is not regulated. In a situation in which different AI users (or potentially different mindful and superintelligent AIs) can do whatever they want, it is extremely unlikely that the common good will be achieved. Different actors will follow diverse trajectories and reach heterogeneous (and not necessarily mutually beneficial) outcomes. This poses a big risk for society—particularly for disadvantaged groups, historically marginalized groups, and low-income countries. Thus, we need well-regulated and responsible AIs, with disruption mitigation mechanisms in place. Such regulation should also protect public values [196,197] and extend to the built environment. It is well documented in urban studies that, when urban development is unregulated, key sustainability themes (such as justice and environmental preservation) get neglected and overshadowed by economic interests [198,199]. Therefore, the regulation of AI and the regulation of the built environment should go hand in hand as a dual policy priority.

The last issue concerns the development of AI for *social good*, and for the benefit of every member of society [200]. AI and data need to be a shared resource employed for the good of society, rather than for serving the economic agenda of corporations and the interests of political elites. An *AI for all* would require establishing AI commons [201], much as earlier attempts were made to establish digital commons [202]. AI commons are supposed to allow anyone, anywhere, to enjoy the multiple benefits that AI can provide [203]. AI commons should be studied and pursued to enable AI adopters to connect with AI specialists and AI developers, with the overall aim of aligning every AI towards a shared common goal [204]. From an urbanistic perspective, this is arguably the biggest challenge, because opening up AI as a common good also requires opening up urban spaces, thinking about the city as a truly public resource rather than a territory balkanized by neoliberal ambitions.
