Article

E-Government 3.0: An AI Model to Use for Enhanced Local Democracies

Faculty of Public Administration, National University of Political Studies and Public Administration, 012244 Bucharest, Romania
Sustainability 2023, 15(12), 9572; https://doi.org/10.3390/su15129572
Submission received: 25 March 2023 / Revised: 7 May 2023 / Accepted: 13 June 2023 / Published: 14 June 2023

Abstract

While e-government (referring here to the first generation of e-government) was simply the delivery of public services via electronic means, e-gov 2.0 refers to the use of social media and Web 2.0 technologies in government operations and public service delivery. However, the use of the term ‘e-government 2.0’ is becoming less common as the focus shifts towards broader digital transformation initiatives that may include AI technologies, among others, such as blockchain, virtual reality, and augmented reality. In this study, we present the relatively new concept of e-government 3.0, which is built upon the principles of e-government 2.0 but refers to the use of emerging technologies (e.g., artificial intelligence) to transform the delivery of public services and improve governance. The study objective is to explore the potential of e-government 3.0 to enhance citizen participation, improve public service delivery, and increase the responsiveness and compliance of administrative systems in relation to citizens by integrating emerging technologies into government operations, using the evolution of e-government over time as background. The paper analyzes the challenges faced by municipalities in responding to citizen petitions, which are a core application of local democracies. The author starts by presenting an example of an e-petition system (as in use today) and analyzes an anonymized text corpus of petitions directed to one of Romania’s municipalities. He then proposes an AI model able to deal faster and more accurately with the increasing number of inputs, aiming to promote it to municipalities that, for some reason, are still reluctant to implement AI in their operations. The conclusions suggest that it may be more effective to focus on improving new algorithms rather than solely on ‘old’ technologies.

1. Introduction

The rapid adoption of digital technologies in public service delivery and governance has transformed the way governments operate, interact with citizens, and deliver public services. The evolution of e-government has progressed from basic electronic delivery of services (e-government 1.0) to the use of social media and Web 2.0 technologies (e-government 2.0), resulting in significant changes in the roles of citizens and governments in public service delivery [1,2].
However, the emergence of new technologies such as artificial intelligence (AI), blockchain, and the Internet of Things (IoT) has paved the way for a new era of e-government, referred to as e-government 3.0 [3,4,5,6]. This new version of the concept holds the promise of transforming public service delivery and governance by integrating emerging technologies into government operations [7,8].
The article will explore the potential of e-government 3.0 to enhance citizen participation, improve public service delivery, and increase the responsiveness and compliance of administrative systems in relation to citizens. The author starts his analysis from empirical evidence from a case study of the Brasov municipality, considered one of the smartest Romanian cities today [9,10,11], to assess the impact of e-government 3.0 on a particular aspect of governance and public administration: e-petitioning. Ultimately, this paper seeks to contribute to the ongoing scientific debate on the future of e-government and the potential of emerging technologies to transform public service delivery and governance.
As stated in a few of the author’s previous studies [12,13], while municipalities aim to enhance citizen participation and openness by developing online platforms for communication, one of the widely used tools is e-petitioning, which is utilized by both local and central governments. Although social media networks have also been used by public organizations to engage with their constituents and receive feedback, most administrative courts do not consider social media discussions as legal actions, unlike e-petitions, which do have official status. Although AI can assist in addressing citizens’ concerns on social media as well, e-petitioning should be regarded as the preferred AI-powered solution for resolving administrative issues.
The role of urban computing in sustainable smart cities, including recent developments, use cases, and research challenges in the field, was addressed by Hashem et al. [14], while Zhao et al. investigate the influence of digital and technological advancement on sustainable economic growth and analyze the impact of variables such as the E-government Development Index (EGDI), Internet Users’ (IU) growth, and information and communications technology (ICT) exports [15]. Both articles highlight the importance of harnessing technological advancements to achieve sustainable development and provide insights into the opportunities and challenges of doing so. Modern citizens demand prompt, efficient, and high-quality services from their public authorities, especially since trust in governments and their services has been diminishing worldwide [16,17].
Consequently, citizens demand better infrastructure, improved services, and adaptive leadership. However, due to increasing demands and constrained public budgets, effective solutions are often delayed, and administrative capacities may be lacking [18]. Therefore, the literature on public management suggests that AI applications can play a crucial role in generating and sustaining good governance by mitigating these challenges, such as long delays, unskilled personnel, and overall administrative inefficiencies.
The study hypothesis is that e-government 3.0 has the potential to increase the responsiveness of administrative systems, thus enhancing citizen participation. The author will explore this hypothesis through a synthetic case study proposing an automated text analysis method over the e-petition systems in use today. Additionally, the study hypothesizes that AI applications can play a crucial role in generating and sustaining good governance by mitigating challenges such as long delays, unskilled personnel, and administrative inefficiencies.
The study objective is to explore the potential of e-government 3.0 to enhance citizen participation, improve public service delivery, and increase responsiveness and compliance of administrative systems in relation to citizens by integrating emerging technologies into government operations.
After the introductory section, the article will feature a significant number of studies, articles, and analyses with the purpose of linking the field of governmental studies to the rapidly evolving field of artificial intelligence. In the third section, the reader will be guided from the scholarly research to the dataset that the author intends to utilize to substantiate the hypothesis. Additionally, Section 4 of the article will introduce a machine learning (ML) model that is trained and validated on a set of data obtained from one of Romania’s smart cities (Brasov). This section will include dedicated subsections that will explain the behavior of the model and the expected outputs in a lightly technical manner. To achieve this, the author will start by examining past successes in machine learning that were used to validate optimistic views regarding the future of e-government 3.0. The findings presented in Section 5 and the subsequent discussion in Section 6 will validate the assumption that AI technologies are necessary for the proper development of government-to-citizen (G2C) interaction. The author’s vision for the use of AI, research limitations, and future work will also be outlined in Section 6. Finally, the article concludes with the last section.

2. Literature Review

Modern citizens expect prompt, effective, and high-quality services from their public authorities, and the decline in trust in governments and their services is a worldwide phenomenon [16,17,19]. This has led to a growing demand for better infrastructure, improved services, and adaptive leadership. However, limited public budgets and increasing demands create serious constraints for meeting these expectations, leading to delays in presenting effective solutions, under-skilled personnel, and overall poor administrative capacities [20].
Technological advances offer solutions to both businesses and governments; the integration of artificial intelligence has the potential to positively impact global productivity and environmental outcomes, and the development of sustainable business models is therefore necessary [21].
The literature on public management suggests that AI applications can address these challenges and help generate and sustain good governance [22,23,24]. For example, AI can improve public service delivery by enhancing the quality and efficiency of services [24,25], automating administrative tasks [26], and supporting decision-making processes [27,28]. Moreover, AI can enhance transparency and accountability, as well as increase citizen participation and engagement [29,30]. Ibtissem et al. used advanced statistical methods to investigate the challenges faced by emerging economies in addressing issues of poor governance in public services [31]. Thus, AI has the potential to transform public management and governance, helping public authorities to better meet citizens’ expectations and improve trust in government services.
AI can play a crucial role in e-petitioning by summarizing and triaging petitions, providing automated responses to routine queries [32,33], and identifying petitions that require further analysis from specialized departments [34,35]. It can assist in decision-making by providing evidence for a more comprehensive reply that is compliant with national or international regulations [36]. AI can filter petitions to verify their eligibility, compare subjects and frequencies, and measure organizational efficiency [37]. By performing these tasks, AI can save time, energy, and resources and limit redundancies and time waste [38]. It can also use the ‘compare and comply’ functions to navigate regulations and ensure that official replies are correct and complete. AI can identify urgencies in petitions’ texts using sentiment analysis and trigger faster reactions from the government, increasing confidence in public authorities [39,40]. Learning and reasoning are also critical components to consider in utilizing AI in e-petitioning [41].
After reviewing the research published in Sustainability (issues 2020–2023), Mathematics (issues 2020–2023), Government Information Quarterly (issues 2020–2023), and International Journal of Web Services Research (issues 2020–2023), one can conclude that much of the focus is on e-government in general and little on the use of top technologies (AI, machine learning (ML), natural language processing (NLP), and robotic/intelligent process automation (RPA/IPA)) for improving governance processes.
As early as 1999, Jon M. Kleinberg from Cornell University [42] studied the network structure of a hyperlinked environment and developed a set of algorithmic tools for extracting information from the link structures of such environments. At the time, the study focused on a variety of contexts on the World Wide Web. Later, in 2011, Hreňo et al. [43] described an approach to semantic interoperability of e-government services. Piaggesi [44] researched the future of connectivity and provided a snapshot of Latin America, recommending that the role of government in providing universal service is very important for a proper transition to e-government 3.0. Verma [45] made a comprehensive bibliometric review of 353 research articles published between 2010 and 2021 to discern the performance of public servants. The author concluded that governance structures, together with the whole society, are becoming smarter by using smart technologies. However, reading the text, one can see that this is a projection of the author’s hopes for the foreseeable future, with no clear indication of when this will happen.
A group comprising seven social and computer science specialists at McKinsey & Company created a chart in which they mapped the most encouraging technologies based on their potential applications in domains that could be beneficial to society. They relied on a study conducted in 2018 and concluded that the most valuable technologies are deep learning, natural language processing, image and video classification, object detection, and language understanding. All of these technologies are related to information verification and validation [46].
Moreover, Madan and Ashok [47], through a systematic literature review, identified contextual variables as factors influencing AI adoption, as discussed in the literature. The authors concluded that governance maturity is an important component of managing AI implementation. Additionally, Ahn and Chen [48] explored the perception of public employees regarding the use of AI technologies in government. The authors found that government employees hold a positive view regarding the benefits and potential of AI technologies in the public sector, having high expectations of the integration of AI and believing it will enhance the efficiency and quality of government operations.
Kumari et al. [49] proposed techniques based on sentiment analysis meant to improve the performance of employees connected with users by different platforms. Furthermore, Lu et al. [50] focused on applying a cross-domain aspect-based sentiment analysis model to word embeddings.
Similarly, Yu et al. [51] proposed a model containing a sentence encoder together with a semantic and syntax learning module for a sentiment classifier, which is considered important for the present study on citizen petitions. If implemented in the e-petitioning systems of e-government 3.0, the current state of web apps will greatly improve, and citizens will have a more streamlined and efficient way to engage with their government.
Eom, Lee, and Zankova [52,53], focusing on dilemmatic situations in which to use technologies, provided an overview of previous literature on digital government transformation, stating that governments, by adopting actor-based computing models along with large-scale data, can enhance their ability to identify real-world complexity, discern patterns in data, and leverage them to improve their actions. This, in turn, can result in cost savings and better anticipation of future events.
McKinsey & Company conducted a recent study [54] where they showed enthusiasm for Generative AI software that can display creativity, which was previously considered a trait exclusive to humans. Some of the applications of these tools align with the topic of this article, including writing, documenting, and reviewing texts, as well as extracting information from large amounts of legal documents and answering intricate questions.
Andrew Ng, a Stanford professor and co-founder of Coursera and Google Brain, in a keynote speech at the AI Frontiers conference, said [55]: ‘About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform every major sector in coming years’.

3. Materials and Methods

For the present study, officials from the Romanian city of Brasov agreed to supply anonymized data, which comprised 12,935 petitions directed to the municipality in the year 2022 (1 January 2022–31 December 2022) via multiple communication channels (e-mail, phone apps, instant messages, Web platform and by phone)—Appendix A.
Each petition was converted using 118 indicators, seen as vectors, labeled into 47 classes that are also seen as layers. Previously, the responsibility of carrying out this task fell on the city hall employees, seen as experts who dealt with petitions as part of their daily duties. During labeling, the experts also clustered the data based on similar text content.
For the inference phase, a sample of 1295 petitions was taken into consideration. At the end of this process, therefore, before starting the analyses, the data set in use consisted of 152,810 items.
At this stage, the author would like to mention that other criteria of analysis were also taken into consideration: marital status (if directly or indirectly disclosed by the sender), the platform used (mobile, laptop/PC), and references to other documents such as legislation or norms. All of these are considered extra information but are important for building the statistics.
On this data set, a cleaning operation was performed in order to fully anonymize the data: all of the petitions contained names, emails, phone numbers, or similar data that could link the content to the sender; therefore, a full set of indicators was dropped, resulting in a total number of 151,515 valid inputs. The author wants to mention here that the dataset received for this experiment consisted of petitions that had already been answered by the city hall employees; therefore, they were considered valid, and it is most probable that a live model would face multiple invalid inputs. Further limitations are discussed in a dedicated section at the end of the article.
Cleaning operations also consisted of the following (a short code sketch of these steps follows the list):
  • Tokenization: Split text into individual words or tokens to allow for further processing;
  • Removing punctuation;
  • Spell correction;
  • Removing URLs and HTML tags;
  • Removing special characters;
  • Removing emoticons;
  • Removing offensive and bad words.
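Below is a minimal Python sketch of these cleaning steps. It uses only the standard library; the regular expressions, the placeholder bad-word list, and the example petition are illustrative assumptions, not the exact pipeline applied to the Brasov corpus.
```python
import re

BAD_WORDS = {"badword1", "badword2"}  # placeholder list of offensive words

def clean_petition(text: str) -> list[str]:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)    # remove URLs
    text = re.sub(r"<[^>]+>", " ", text)                   # remove HTML tags
    text = re.sub(r"[\U0001F300-\U0001FAFF]", " ", text)   # remove emoticons/emoji
    text = re.sub(r"[^\w\s]", " ", text)                   # remove punctuation and special characters
    tokens = text.lower().split()                          # tokenization
    tokens = [t for t in tokens if t not in BAD_WORDS]     # drop offensive words
    # spell correction would be applied at this point, e.g., against a Romanian dictionary
    return tokens

print(clean_petition("Groapă pe strada X!!! vezi https://example.com <b>urgent</b>"))
```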
For the analysis itself, Google Colab [56] was used for its free access to a machine learning environment that allows one to write and run Python code, including machine learning algorithms. Moreover, the platform allows the use of pre-trained models from popular machine learning frameworks such as TensorFlow and PyTorch. For this experiment, the author used TensorFlow alone as the development platform, with adjusted open-source software such as BERT [57] for text analyses using Index-Based Encoding and Bag of Words (BoW) techniques [58,59], fed with texts from the data set used for this experiment. For tabular data (obtained after the triage: the first few indicators, such as age, marital status, and activism), TabNet [60] was used. Visual representations for the article were reproduced with the help of TensorFlow Playground [61].
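As an orientation for readers, the snippet below shows the usual pattern for encoding petition texts with a pre-trained BERT checkpoint from TensorFlow Hub, which is one way the setup described above can be reproduced; the multilingual checkpoint URLs and the example sentence are assumptions rather than the exact configuration used in the study.
```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401, registers the ops required by the BERT preprocessor

# Multilingual BERT preprocessor and encoder from TensorFlow Hub (assumed checkpoints)
preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3")
encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/4")

petitions = tf.constant(["Groapa pe strada X, va rog interventie urgenta."])
outputs = encoder(preprocess(petitions))
embeddings = outputs["pooled_output"]  # one 768-dimensional vector per petition
print(embeddings.shape)                # (1, 768)
```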
A few words describing the city: Brasov is located in central Romania at a reasonably high altitude (with heavy snows in the winter, when people tend to complain more about the inefficiencies of public administration) and serves as the capital of its county. It boasts a population of roughly 238,000 residents [48] (about 1.24% of Romania’s population and approximately eight times smaller than Bucharest, the country’s capital) and is recognized for its strong commercial and industrial sectors, making its population a very active one, with an average age of about 42 years (lower than the country average). The city is governed by both a mayor and a city council.

4. The AI Model Proposed

4.1. Related Works

In the legal field, AI excels in handling repetitive and routine tasks [62], as is the case with AI in general. One of the earliest and most notable applications of AI in law was in the discovery phase of a trial, specifically with document classification. The initial approach involved searching for keywords to automate this process, but this was flawed because an idea or concept can be expressed in various ways, and certain keywords may be missed. Eventually, machine learning (ML) and natural language processing (NLP) algorithms were used; teams of lawyers classified samples of documents, and then the algorithms analyzed the patterns of words and combinations to identify which documents were responsive to the request [63]. This saved a significant amount of time for future queries. However, the results were not a binary classification; instead, the algorithm produced a probability score of a document being responsive [64]. Those with a high probability score were turned over, while those with a low score were disregarded. The ones placed in between required human review.
Since most petitions require legislative input for answering, a similar model can be used. According to the statistics compiled by Brasov city hall, petitions follow common themes that arise in various scenarios [65]. Considering this, the process of document classification is considered routine, given that public servants create a protocol and repeatedly apply it. However, automating this process is a lengthy endeavor, since the information being analyzed, i.e., whether a document is responsive or not, is presented in text format. Without a method for a computer to comprehend language, the routine aspect of the work could not be achieved. Now that language processing has advanced enough to enable this, the process can be smoothly executed.
Moreover, if recently developed NLP systems of Generative AI, such as ChatGPT [66,67], are put in place, answering petitions after a proper classification of legal documents, as mentioned above, will be just a ‘compare and comply’ routine task [12].

4.2. Input Layer

Initially, using the authorities and hubs (HITS) algorithm [42], the system evaluated the degree of association between words found in the subject lines of all the petitions in order to classify the citizens’ requests. This involved scoring the strength of connections among the words.
As shown in Figure 1, the words in the subject lines (59 nodes, or unique words, out of a total of 66, and 44 edges/connections between the nodes) are interlinked (the darker the links and dots are, the stronger the connection is), with no isolated words. However, the connections between them are still weak at this point but help in classifying the main text corpus.
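The sketch below illustrates the hubs-and-authorities scoring on a toy word graph built from invented subject lines; it is only meant to make the mechanism concrete, and the networkx library and the example subjects are assumptions rather than the tooling used on the real corpus.
```python
import itertools
import networkx as nx

subjects = [
    "groapa strada lunga",            # pothole on Lunga street
    "iluminat public strada lunga",   # broken public lighting on Lunga street
    "groapa parcare centru",          # pothole in the downtown parking lot
]

G = nx.DiGraph()
for subject in subjects:
    # connect every ordered pair of words that co-occur in the same subject line
    for w1, w2 in itertools.permutations(subject.split(), 2):
        weight = G[w1][w2]["weight"] + 1 if G.has_edge(w1, w2) else 1
        G.add_edge(w1, w2, weight=weight)

hubs, authorities = nx.hits(G, max_iter=200)
# words with high authority scores act as the strongest anchors for classification
print(sorted(authorities.items(), key=lambda kv: -kv[1])[:5])
```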

4.3. Hidden Layers

The issue now is that the system does not know anything about the content of a petition or Facebook messages, so there is a strong need to translate it into a form that the computer can understand, and this is a feature vector. One possible way to perform this translation is to ask experts (public servants) about the content and concatenate the answers in a binary vector; this is what AI experts call supervised learning. Labeling content can be performed for each petition on the training set [68]. In fact, there is no need to allocate resources for labeling activities; the system can simply observe human actions and can analyze the patterns of words and combinations to properly label each petition.
Dealing with vectors and labels helps in translating the problem into a geometric form. If each one of those vectors is represented as a point in space, and if there is a corresponding label for those points, then the system may learn from the data.
In Figure 2, there are users of the system who sent a petition to the city hall (training set). On the bottom part, one can see petitions (set as a vector of importance) that could be treated with ease by the municipality, while on the upper part, there is an important issue that needs to be taken into consideration at a fast pace.
The line between left and right represents the system’s estimate of how likely the addressed topic is recurrent, i.e., whether it may already have been answered for another citizen or even solved.
The slant line, however, represents the classifying function. For this article, the author generically defined it as h(x).
However, Figure 2 represents just one vector of the model. For the purpose of this article, the author has chosen to present the ‘importance’ vector for a better understanding. There could be unlimited vectors grouped into an unlimited number of layers based on location, recurrence, reason (personal vs. general), and so on, with the ‘importance’ vector being just one of them. Overlapping all these different layers makes the system much more complex and, therefore, much more accurate in terms of precision (the ratio of positive predictions that are correct; those petitions from the upper right corner) and recall (the ratio of all actual positives that the final model catches) [69].
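For clarity, the two metrics can be computed as in the short sketch below; the dummy expert labels and predictions are invented for illustration, and scikit-learn is assumed here purely for convenience.
```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # expert labels: 1 = important petition, 0 = routine
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # labels predicted by the classifying function h(x)

print("precision:", precision_score(y_true, y_pred))  # correct positives / predicted positives
print("recall:   ", recall_score(y_true, y_pred))     # correct positives / actual positives
```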

4.4. Training Model

Figure 3 gives the visual representation of the training model. The model is elastic; it can be adjusted by the administrators, who are allowed to add multiple layers and vectors.
Layers represent groups of interconnected items that, when computed together, help the model perform better. Layers are based on features seen as vectors and considered important by the system administrators, such as language (e.g., using bad or offensive words), reason (e.g., already known malfunctions of different systems in the city: power supply, water leakage), geography (e.g., areas confronted with the same problem, potholes for example, about which different citizens keep sending messages), activities (e.g., music from a nearby festival), and the reliability of the user (based on his/her previous posts on the city hall’s official social media pages/petitions/messages, using sentiment analysis tools). In other words, the connection between vectors is made by measuring the weights of the edges (as seen in Figure 1).
Once the model is trained and compiled, the system assigns a score to each petition and performs specific actions based on a predetermined threshold.
When the system is faced with challenging texts (for example, questions addressed by citizens that demand intricate solutions that the system has not encountered before), the software generates multiple responses, each with a corresponding probability that signifies how confident it is about the correctness of the answer (e.g., 0.92, 0.84, 0.76, known as confidence scores). The system administrators will determine a threshold or cut-off value that serves as a guideline for deciding which responses the machine can handle. Specifically, if the probability of a given response surpasses the threshold (e.g., 0.90), then the machine can take care of it and answer fully. However, if it falls below the threshold, the query needs to be routed to a human operator.
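A minimal sketch of this threshold rule is given below; the cut-off value, candidate answers, and confidence scores are the illustrative figures from the paragraph above, not values taken from a deployed system.
```python
THRESHOLD = 0.90  # cut-off value chosen by the system administrators

def route(candidates: dict[str, float]) -> str:
    best_answer, best_score = max(candidates.items(), key=lambda kv: kv[1])
    if best_score >= THRESHOLD:
        return f"auto-reply: {best_answer} (confidence {best_score:.2f})"
    return "forward to a human operator"

print(route({"answer A": 0.92, "answer B": 0.84, "answer C": 0.76}))  # handled by the machine
print(route({"answer D": 0.81, "answer E": 0.65}))                    # escalated to a human
```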

4.5. How to Increase Precision and Recall

In general, the city hall knows the identities of the citizens who address it through petitions and the places where they live. Therefore, it understands the problems they face and the problems they might complain about. It can then use this information to generate a list of profiles belonging to people who are frequent complainants (a large number of complainants tend to repeat their actions even if they receive a positive opinion from the city). In addition, it can use public information about active users on its official Facebook page. By correlating this information with sentiment analysis predictions, the system can assign scores more efficiently and effectively and can consider certain actions based on them.
It is interesting to mention here the so-called Jevons Paradox [70,71,72], which states that improvements in efficiency and technology, correlated with cost reduction, which initially lead to a decrease in resource use, may result in an overall increase in consumption/aggregate demand. As a result, the author predicts that the increased ease of use and speed of the system may lead to higher demand, offsetting any efficiency gains and placing greater pressure on administrative resources. This is mostly because the ease of using the system, correlated with the speed at which the apps answer or solve the issue, will encourage more use, leading to increased demand that may offset the savings gained from increased efficiency. This paradox highlights the need for a holistic approach to resource management that takes into account not just efficiency gains but also citizens’ behavior and responses to these gains.

4.6. Output Layer

Improving recall can be difficult, but since the robustness of a petition system does not need to be as strong as that of a financial one (e.g., dealing with fraud detection), such a system will definitely relieve the pressure on public servants when dealing with large volumes of citizen complaints simply by acting as follows (a dispatch sketch follows the list):
  • It takes a soft action (backlog) by sending the petition for human investigation while extracting relevant information from the legislative framework in order to help the public servant give an accurate answer to the complaint;
  • It takes strong action, acting on behalf of humans (independently), generating narratives, and giving all necessary information to the citizen. It could also actively engage in a dialog using more advanced NLP capabilities (such as newly released GPT-4 [73]) if necessary;
  • Pass action. In this scenario, the AI system could respond in a gentle manner, using language and phrases intended to de-escalate any potential argument with a confrontational citizen.
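The dispatch sketch below shows how the three actions could be selected in code; the field names, the 0.90 threshold, and the confrontational-language flag are assumptions introduced only to make the decision logic explicit.
```python
def dispatch(petition: dict) -> str:
    if petition["is_confrontational"]:
        return "PASS: reply with neutral, de-escalating wording"
    if petition["confidence"] >= 0.90:
        return "STRONG: generate the full answer and send it to the citizen"
    return ("SOFT: place in the backlog for a human operator and attach the relevant "
            "legislative excerpts retrieved by the system")

print(dispatch({"confidence": 0.93, "is_confrontational": False}))  # strong action
print(dispatch({"confidence": 0.70, "is_confrontational": False}))  # soft action
print(dispatch({"confidence": 0.95, "is_confrontational": True}))   # pass action
```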
In any of the above cases, the public servants will receive a lot of help from such a system, while citizens will also gain more trust in the local government, knowing that the officials are active in solving their problems. If we sum up all the options above, we can see the efficiency of the system. Moreover, software bots can be used to gather information on public perceptions of various actions of officials or different agencies and, based on sentiment analysis, can study the public mood and give aggregated feedback to the municipality in order for it to improve its services [74,75].
For a better understanding, Figure 4 provides a visual representation of the model pipeline. In the real world, however, the data might not be as well balanced as they are in the presented outcome.
X1 and X2 are to be seen, for the purposes of this paper, as data inputs, such as X1 from petitions and X2 from messages posted on the official Facebook page of the institution.
Data curation refers to the process of cleaning input data (as mentioned in Section 3) for use in the model, making it relevant and reliable. In this experiment, the author replaced abbreviated words with their full meaning and converted the texts into vectors (numerical form) using Index-Based Encoding and Bag of Words (BoW) techniques [58,59].
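The two encodings named above can be sketched as follows; CountVectorizer stands in for the Bag-of-Words step and a plain dictionary for the index-based encoding, with invented petition fragments, so this is an illustration of the technique rather than the study’s exact feature pipeline.
```python
from sklearn.feature_extraction.text import CountVectorizer

petitions = ["groapa mare pe strada lunga", "iluminat stricat pe strada lunga"]

# Bag of Words: one column per vocabulary word, values are word counts
bow = CountVectorizer()
X = bow.fit_transform(petitions)
print(bow.get_feature_names_out())
print(X.toarray())

# Index-Based Encoding: each word is replaced by its index in the vocabulary
vocab = {w: i + 1 for i, w in enumerate(sorted(set(" ".join(petitions).split())))}
print([[vocab[w] for w in p.split()] for p in petitions])
```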
The results of human investigation, which are considered as ‘soft action’, will be fed back into the system after the output phase. This feedback is aimed at readjusting the vectors to improve accuracy for the next input.

5. Results

In order to test the model, the author took real data from one of the big Romanian cities, Brasov, as described in the Materials and Methods section, and fed them into the system.
A full set of connections (made on 10% of the full texts from the training set at the inference phase) is shown in Figure 5.
Table 1 is a sample of the data set used for training.
Below, in Table 2, one can see the results extracted for the purpose of interpretability, as given by the machine based on the inputs presented in Table 1.
Explanations: * examples for I8, I9, I10, and other composite indicators transformed into single vectors.
  • Is a particular word such as ‘thing’ present in the context? Detection;
  • What type of thing is ‘thing’? Classification;
  • How could ‘thing’ be grouped or ungrouped? Segmentation.
The scores retrieved by the machine are not important for the present article. However, based on the full set of values, the system can understand the connections between the vectors and decide what to do with the petition, as explained in Section 4.6. The real value of such a system lies in the speed at which it can perform the triage of incoming petitions and in its accuracy, as will be presented below. Without it, the queue rate, i.e., the ratio of all petitions waiting for human observation, could exceed the capacity of the Integrated Technical Dispatch of Brasov city, seen here as a ‘gatekeeper’. For example, if one human annotator works 8 h per day and a single annotation takes 5 min, then a traditional system is capable of handling roughly 100 inputs per day, which is the system’s capacity. One can perform the calculation and see that, in the case of Brasov, the current traditional system exceeds the needs (1) [76].
12,935 petitions/251 working days in 2022 × 5 min ≈ 4 h 20 min/day		(1)
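The same capacity check can be reproduced as a short calculation, using the figures of Equation (1):
```python
petitions_2022 = 12_935
working_days = 251
minutes_per_annotation = 5

daily_workload_min = petitions_2022 / working_days * minutes_per_annotation
print(f"{daily_workload_min / 60:.1f} h of triage per day")  # ~4.3 h, below the 8 h capacity of one annotator
```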
However, if the system were expanded to cover larger cities (for example, Bucharest, the Romanian capital, which is eight times larger than Brasov) or the entire country for specific central governmental agencies, the situation would be different. Moreover, during natural occurrences where unexpected surges may arise, humans typically lack the ability to promptly address the situation. Additionally, taking into account the time required to process information and respond to it, it is easy to envision the substantial benefits of such a system. Although queue rates can be unpredictable, the machine is undoubtedly capable of performing at a faster pace than humans and, with appropriate training, can achieve greater accuracy. Furthermore, machines do not rely on specific working hours, weekdays, or taking leaves.
In Figure 5, one can observe the color density, which shows the strength of the connections made by the system with words that are present in other petitions. In other words, the system is able to ‘understand’ the text in a more or less similar manner to humans.
After the strengths have been computed, the machine typically has the ability to assess the petitions, albeit with some degree of inaccuracy, as shown in Figure 6 below, and decide on the action. For example, if a complaint is addressing water leakage in one neighborhood of the city, the petition might contain words such as ‘water’, ‘pipe’, ‘road’, and others in this category but is unlikely to contain ‘thefts’, ‘wild animals’, ‘traffic lights’, and so on. When such instances appear, however, the system will forward the text to a human operator, observing his/her behavior and (re)adjusting the model. Moreover, human operators can help the machine with these adjustments for better future predictions.
The chart depicted in Figure 6 shows the results of testing six distinct prediction models. The Gradient Boosted Tree model proved to be the most precise, with a relative error of 7.84%, but also had a moderate efficiency, taking approximately 16 s to process the text and determine its course of action. In comparison, other models, such as the Decision Tree, were faster, taking only 2 s, but had a higher error rate of 12.32%.
The author would like to clarify that the intention was not to measure the error rate of the city hall experts, which may have been lower than any of the models tested in the study. However, the results did support the hypothesis that machine speed could be an advantage, and with adequate training, the accuracy of these models could also improve.

6. Discussion

The advancement of technology has enabled AI to make remarkable strides in managing critical aspects of ‘compare and comply’ functions. In their activities, public administration officers scrutinize large amounts of data, mostly legislation and internal norms, in order to avoid unforeseen legal complications. This is still a rather difficult problem to be solved by machines, since specific concepts can be formulated in different ways. However, the system is not trying to replace humans but to help them perform better; therefore, the mistakes of AI systems targeted by critics, known as ‘adversarial examples’, are to be seen not as bugs but as features [77]. Nevertheless, the role of automation is to make the tasks easier by allowing software to scan legislative documents, understand their meaning, compare them with the citizen’s demand, and determine which documents are to be referred to in the answer, resulting in significant time and effort savings.
The methodology employed, as described in the article, could potentially be adapted to the sentiment analysis problem associated with e-petitions, contingent upon access to a Romanian-language lexicon that includes positive and negative terms (teams of experts in this field are working right now on building this [78,79]). Identification of the most extreme positive and negative terms will enable the classification of ‘obvious’ entries, with the self-supervised approach subsequently handling the remaining entries by cross-referencing them against these extreme examples.
Apart from the e-petition issue referenced, the techniques detailed in the article could prove applicable to a diverse range of other related challenges. Specifically, it would entail classifying the simplest, most extreme text entries as positive or negative, utilizing the word lexicon, and subsequently using these outputs as labels for machine learning classifiers. The remaining text entries could then be processed through the classifier to obtain positive, negative, neutral, or uncertain classifications [80,81].
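A minimal sketch of this lexicon-seeded, self-supervised approach is given below; the two-word lexicons, the example texts, and the scikit-learn classifier are placeholder assumptions standing in for a real Romanian lexicon [78,79] and a production model.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

POSITIVE = {"multumesc", "excelent"}     # placeholder extreme positive terms
NEGATIVE = {"dezastru", "inacceptabil"}  # placeholder extreme negative terms

def seed_label(text):
    words = set(text.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        return 1
    if words & NEGATIVE and not words & POSITIVE:
        return 0
    return None  # not an 'obvious' entry; left for the trained classifier

texts = ["multumesc pentru interventia rapida", "dezastru pe strada noastra",
         "excelent raspuns", "inacceptabil cat dureaza", "cand se repara drumul"]

seeded = [(t, y) for t in texts if (y := seed_label(t)) is not None]  # obvious entries
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform([t for t, _ in seeded]),
                               [y for _, y in seeded])

remaining = [t for t in texts if seed_label(t) is None]
print(list(zip(remaining, clf.predict(vec.transform(remaining)))))
```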
The study results indicate that automation can lead to significant time and effort savings by enabling software to scan legislative documents, comprehend their significance, and contrast them with citizen demands. Although machines cannot entirely replace humans, they can substantially augment their abilities and efficiency in handling intricate tasks.

6.1. Limitation

The efficiency of the system was mentioned. Of course, this is a debatable issue since the machines are not a panacea. The system is far from perfect. Nonetheless, if there is a rare occurrence of a false positive or false negative, as seen in Figure 4, the system can request additional information from the citizen or escalate the issue to a human operator. It is crucial to involve humans in AI systems to ensure accountability and accuracy; in other words, caution is required.
Moreover, in a live environment, the system may potentially misbehave; biases might pop up, and that could jeopardize the output, resulting in court cases. In order to avoid this, it is important to identify and address potential biases to ensure fairness and ethical use of the system. Bias can arise from a variety of sources, such as imbalanced training data, algorithmic limitations, etc. Failing to address these biases can result in discrimination against certain groups of people or in inaccurate predictions and bad outputs, which can have serious consequences, including, as mentioned, legal action. To mitigate the risk of bias, it is important to establish best practices for data labeling by experts and preprocessing by administrators, together with ongoing monitoring and evaluation of the system’s performance. This can include techniques such as data augmentation, model interpretability, and fairness metrics, as well as involving diverse actors in the design and implementation of the system.

6.2. Future Work

In the case of petition analysis, the system should be trained on a large corpus of text data, which should be labeled with relevant metadata such as issue type, urgency, sentiment analysis, and many others. By analyzing the patterns in the data, the system can learn to identify common themes and topics and make connections between related words and phrases. This allows the system to accurately classify new petitions and prioritize them based on their level of urgency and importance.
Moreover, in an e-petition platform enhanced with AI, the GPT-4 API (Application Programming Interface) could be integrated as an intelligent chatbot to provide personalized and efficient citizen support. The newly released GPT-4, as a large language model (Generative Pre-trained Transformer), has the ability to understand and respond to natural language queries, making it an ideal candidate for handling user inquiries in real time.
As an idea for future work, the integration of GPT-4 in the petition system can be further enhanced by incorporating advanced machine learning techniques. This can improve the accuracy and relevance of GPT-4’s responses by enabling it to learn from user feedback and adapt to changing user needs.
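For illustration, the call below shows how a GPT-4-drafted reply could be requested through the openai Python library (pre-1.0 interface); the API key, system prompt, and petition text are placeholders, and any generated draft would still pass through the human and threshold checks described earlier.
```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_reply(petition_text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are an assistant of the Brasov city hall. "
                        "Draft a polite, factual reply to the citizen's petition."},
            {"role": "user", "content": petition_text},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("Strada mea nu are iluminat public de o saptamana."))
```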
Additionally, the author would like to mention that, at the present stage, the system developed alongside the present article has no graphical user interface, being mostly an algorithmic approach to an AI problem. However, the present paper does not aim to promote this particular system but rather the development of apps capable of helping public servants, institutions, and, in the end, the citizens.
The application of transfer learning and self-supervised learning techniques would prove especially advantageous in the implementation of such a system, which could then be utilized by other public administration entities, including museums and other institutions that serve citizens directly. With consistent input data, it would be feasible to modify the output layer to gain a deeper understanding of citizens’ needs without the need to start anew.
While the financial aspect may not be immediately evident in public management, the use of advanced technologies has the potential to generate tangible benefits, such as increased citizen trust and engagement.

6.3. Theoretical, Practical, and Policy Implications

Theoretical implications suggest that the model developed in this study could potentially be adapted to solve the sentiment analysis problem associated with e-petitions. Moreover, practical implications reveal that AI has the capability to assist public administration officers in managing large volumes of data, saving significant time and effort. By scanning legislative documents, comprehending their meaning, comparing them with citizen demands, and determining which documents to reference in the response, AI can alleviate the workload of public servants by swiftly and accurately processing vast amounts of text 24/7.
Overall, this study demonstrates that AI can effectively assist public administration officers in managing large amounts of data while also identifying potential biases and ensuring ethical use of the system. Furthermore, AI has the potential to generate tangible benefits, increasing citizen trust and engagement, and can be employed in other public administration entities as well.

7. Conclusions

The author conducted this experiment in order to explore the computations involved in the context of learning how to operate on text-based inputs. The findings suggest that these models can, theoretically, be implemented and empirically execute a range of actions depending on the model’s capacity and the noise in the dataset, seen here as blurry text sequences inside petitions. Furthermore, it was demonstrated that AI models could ease the workload of public servants by computing large amounts of text with high speed and accuracy, 24 h per day, seven days a week. While the experiment was centered on linear functions, with relatively few layers and indicators, seen here as vectors, the methodology can be extended to many other learning problems involving richer function classes. For instance, it can be applied to a network that performs non-linear feature computation in its initial layers.
Additionally, this experimental approach can be used to study larger-scale examples of contextual learning, such as language models, and determine whether their behaviors can be explained by interpretable learning algorithms. Although there is still much work to be done, the results provide initial evidence that what today is seen as an online but asynchronous way of dealing with citizens’ requests may, in the future, not be as difficult as it seems and can be put into practice using standard machine learning tools. Furthermore, the solutions provided by artificial intelligence tools will help in creating better communication with the public administration and finding better solutions to citizens’ problems. Implementing NLP techniques in public administration processes is just one of the first steps of the e-government 3.0 era.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to express sincere gratitude to the staff of the Brasov City Hall Computer Department for their invaluable assistance in providing the necessary information and insightful discussions on the software applications. Their support and guidance provided the author with a strong foundation for the development of this article.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Table A1. Petitions by state and category *.
ID | Item | Active | Resolved | Total
1 | Unauthorized display/trade | - | 34 | 34
2 | Road improvements | 244 | 1250 | 1494
3 | Animals in public domain | 2 | 58 | 60
4 | Damage to utility networks | 312 | 1653 | 1965
5 | Requests for information | 9 | 2520 | 2529
6 | Unauthorized construction/works | - | 160 | 160
7 | Waste disposal | 2 | 107 | 109
8 | Destruction of public domain | 6 | 58 | 64
9 | Fountain | - | 6 | 6
10 | Public lighting | 3 | 851 | 854
11 | Investments | 10 | 5 | 15
12 | Road markings | - | 18 | 18
13 | Illegal parking | 5 | 882 | 887
14 | Public/residential parking | 5 | 159 | 164
15 | Free passage permit | - | 16 | 16
16 | Environmental issues | - | 77 | 77
17 | Sanitation | 6 | 1443 | 1449
18 | Road signs | 2 | 970 | 972
19 | Electronic services/Web portal | 12 | 18 | 30
20 | Administrative Service Complaints | 9 | 4 | 13
21 | Emergency situations | - | 28 | 28
22 | Public transport | 43 | 142 | 185
23 | Taxi transport | - | 4 | 4
24 | Public disturbance | 2 | 356 | 358
25 | Abandoned vehicle | 1 | 318 | 319
26 | Zero plastic in green areas | - | 8 | 8
27 | Green areas/urban furniture | 12 | 1790 | 1802
Total | | 685 ** | 12,935 | 13,620
* Integrated Technical Dispatch–general activity report for the period 1 January 2022–31 December 2022; ** 685 requests were registered in late December. They were mostly tackling issues that stemmed from Brasov’s high-altitude location, which causes massive snowfalls during the winter.
Table A2. Petitions by origin *.
ID | Item | Total
1 | Email | 968
2 | Smartphone | 8074
3 | Instant message | 2
4 | Web platform | 1463
5 | Phone | 3113
Total | | 13,620
* Integrated Technical Dispatch; general activity report for the period 1 January 2022–31 December 2022.

References

  1. Vrabie, C. Elemente de E-Guvernare [Elements of E-Government]; Pro Universitaria: Bucharest, Romania, 2016. [Google Scholar]
  2. Porumbescu, G.; Vrabie, C.; Ahn, J.; Im, T. Factors Influencing the Success of Participatory E-Government Applications in Romania and South Korea. Korean J. Policy Stud. 2012, 27, 2233347. [Google Scholar] [CrossRef] [Green Version]
  3. European Commission. eGovernment and Digital Public Services; European Commission: Brussels, Belgium, 2022; Available online: https://digital-strategy.ec.europa.eu/en/policies/egovernment (accessed on 23 April 2023).
  4. Vlahovic, N.; Vracic, T. An Overview of E-Government 3.0 Implementation; IGI Global: Hershey, PA, USA, 2015. [Google Scholar]
  5. Jun, C.N.; Chung, C.J. Big data analysis of local government 3.0: Focusing on Gyeongsangbuk-do in Korea. Technol. Forecast. Soc. Chang. 2016, 110, 3–12. [Google Scholar] [CrossRef]
  6. Terzi, S.; Votis, K.; Tzovaras, D.; Stamelos, I.; Cooper, K. Blockchain 3.0 Smart Contracts in E-Government 3.0 Applications. arXiv 2019, arXiv:1910.06092. [Google Scholar]
  7. European Commission. Public Administration and Governance in the EU; European Commission: Brussels, Belgium, 2023; Available online: https://reform-support.ec.europa.eu/system/files/2023-01/DG%20REFORM%20Newsletter02_january2023.pdf (accessed on 23 April 2023).
  8. Twizeyimana, J.D.; Andersson, A. The public value of E-Government—A literature review. Gov. Inf. Q. 2019, 36, 167–178. [Google Scholar] [CrossRef]
  9. Vrabie, C. Digital Governance (in Romanian Municipalities). A Longitudinal Assessment of Municipal Web Sites in Romania. Eur. Integr. Realities Perspect. 2011, 906–926. [Google Scholar] [CrossRef]
  10. Invest Brasov. Brașov–Best Smart City Project Award, Invest Brasov. Available online: https://investbrasov.org/2022/04/24/cum-influenteaza-schimbarea-mediului-de-lucru-productivitatea%EF%BF%BC/ (accessed on 21 March 2023).
  11. SCIA. Campionii Industriei Smart City, Romanian Association for Smart Cities, 1 April 2022. Available online: https://scia.ro/campionii-industriei-smart-city-editia-6/ (accessed on 21 March 2023).
  12. Vrabie, C. Artificial Intelligence Promises to Public Organizations and Smart Cities. In Digital Transformation; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 3–14. [Google Scholar]
  13. Vrabie, C. Digital Governance (in Romanian Municipalities) and Its Relation with the IT Education–A Longitudinal Assessment of Municipal Web Sites in Romania. In Public Administration in Times of Crisis; NISPAcee Press: Warsaw, Poland, 2011; pp. 237–269. [Google Scholar]
  14. Hashem, I.; Usmani, R.; Almutairi, M.; Ibrahim, A.; Zakari, A.; Alotaibi, F.; Alhashmi, S.; Chiroma, H. Urban Computing for Sustainable Smart Cities: Recent Advances, Taxonomy, and Open Research Challenges. Sustainability 2023, 15, 3916. [Google Scholar] [CrossRef]
  15. Zhao, S.; Zhang, Y.; Iftikhar, H.; Ullah, A.; Mao, J.; Wang, T. Dynamic Influence of Digital and Technological Advancement on Sustainable Economic Growth in Belt and Road Initiative (BRI) Countries. Sustainability 2022, 14, 15782. [Google Scholar] [CrossRef]
  16. Bonnell, C. In Business We Trust. In People Trust Businesses More Than Governments, Nonprofits, Media: Survey; Associated Press: New York City, NY, USA, 2023; Available online: https://eu.usatoday.com/story/money/2023/01/16/trust-business-more-than-government-nonprofits-media-survey/11062453002/ (accessed on 23 April 2023).
  17. Vangelov, N. Ambient Advertising in Metaverse Smart Cities. SCRD J. 2023, 7, 43–55. [Google Scholar]
  18. Iancu, D.C.; Ungureanu, M. Depoliticizing the Civil Service: A critical review of the public administration reform in Romania. Res. Soc. Chang. 2010, 2, 63–106. [Google Scholar]
  19. Vrabie, C. Informing citizens, building trust and promoting discussion. Glob. J. Sociol. 2016, 6, 34–43. [Google Scholar] [CrossRef]
  20. Iancu, D.C. European compliance and politicization of public administration in Romania. Innov. Issues Approaches Soc. Sci. 2013, 6, 103–117. [Google Scholar]
  21. Hamrouni, B.; Bourouis, A.; Korichi, A.; Brahmi, M. Explainable Ontology-Based Intelligent Decision Support System for Business Model Design and Sustainability. Sustainability 2021, 13, 9819. [Google Scholar] [CrossRef]
  22. Chen, Y.-C.; Ahn, M.; Wang, Y.-F. Artificial Intelligence and Public Values: Value Impacts and Governance in the Public Sector. Sustainability 2023, 15, 4796. [Google Scholar] [CrossRef]
  23. Noordt, C.V.; Misuraca, G. Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Gov. Inf. Q. 2022, 39, 101714. [Google Scholar] [CrossRef]
  24. Reis, J.; Santo, P.E.; Melão, N. Artificial Intelligence in Government Services: A systematic literature review. Springer Nat. 2019, 1, 241–252. [Google Scholar]
  25. Thakhathi, V.G.; Langa, R.D. The role of smart cities to promote smart governance in municipalities. SCRD J. 2022, 6, 9–22. [Google Scholar]
  26. KPMG. Manage the Effects of Robotic Process Automation to Enable a Future-Proof Workforce; KPMG Advisory: Amstelveen, The Netherlands, 2019. [Google Scholar]
  27. Sánchez, J.; Rodríguez, J.; Espitia, H. Review of Artificial Intelligence Applied in Decision-Making Processes in Agricultural Public Policy. Processes 2020, 8, 1374. [Google Scholar] [CrossRef]
  28. Schachtner, C. Smart government in local adoption—Authorities in strategic change through AI. SCRD J. 2021, 5, 53–62. [Google Scholar]
  29. Etscheid, J. Artificial Intelligence in Public Administration. In Electronic Government. EGOV 2019. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  30. Kolkman, D. The usefulness of algorithmic models in policy making. Gov. Inf. Q. 2020, 37, 101488. [Google Scholar] [CrossRef]
  31. Ibtissem, M.; Mohsen, B.; Jaleleddine, B. Quantitative relationship between corruption and development of the Tunisian stock market. Public Munic. Financ. 2018, 7, 39–47. [Google Scholar]
  32. Munshi, A.; Mehra, A.; Choudhury, A. LexRank Algorithm: Application in Emails and Comparative Analysis. Int. J. New Technol. Res. (IJNTR) 2021, 7, 34–38. [Google Scholar] [CrossRef]
  33. Zalwert, M. LexRank Algorithm Explained: A Step-by-Step Tutorial with Examples, 5 May 2021. Available online: https://maciejzalwert.medium.com/lexrank-algorithm-explained-a-step-by-step-tutorial-with-examples-3d3aa0297c57 (accessed on 25 March 2023).
  34. Scholl, H.J. Manuel Pedro Rodríguez Bolívar, Regulation as both enabler of technology use and global competitive tool: The Gibraltar case. Gov. Inf. Q. 2019, 36, 601–613. [Google Scholar] [CrossRef]
  35. Vrabie, C.; Dumitrascu, E. Smart Cities: De la Idee la Implementare, Sau, Despre cum Tehnologia Poate da Strălucire Mediului Urban; Universul Academic: Bucharest, Romania, 2018. [Google Scholar]
  36. Timan, T.; Veenstra, A.F.V.; Bodea, G. Artificial Intelligence and Public Services; European Parliament: Strasbourg, France, 2021. [Google Scholar]
  37. Wu, H.; Wang, Z.; Qing, F.; Li, S. Reinforced Transformer with Cross-Lingual Distillation for Cross-Lingual Aspect Sentiment Classification. Electronics 2021, 10, 270. [Google Scholar] [CrossRef]
  38. Mehr, H. Artificial Intelligence for Citizen Services and Government; Harvard Ash Center: Cambridge, MA, USA, 2017; Available online: https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf (accessed on 18 March 2023).
  39. Reshi, A.; Rustam, F.; Aljedaani, W.; Shafi, S.; Alhossan, A.; Alrabiah, Z.; Ahmad, A.; Alsuwailem, H.; Almangour, T.; Alshammari, M.; et al. COVID-19 Vaccination-Related Sentiments Analysis: A Case Study Using Worldwide Twitter Dataset. Healthcare 2022, 10, 411. [Google Scholar] [CrossRef]
  40. Alabrah, A.; Alawadh, H.; Okon, O.; Meraj, T.; Rauf, H. Gulf Countries’ Citizens’ Acceptance of COVID-19 Vaccines—A Machine Learning Approach. Mathematics 2022, 10, 467. [Google Scholar] [CrossRef]
  41. Zschirnt, S. Justice for All in the Americas? A Quantitative Analysis of Admissibility Decisions in the Inter-American Human Rights System. Laws 2021, 10, 56. [Google Scholar] [CrossRef]
  42. Kleinberg, J.M. Authoritative Sources in a Hyperlinked Environment. J. ACM 1999, 46, 604–632. [Google Scholar] [CrossRef] [Green Version]
  43. Hreňo, J.; Bednár, P.; Furdík, K.; Sabol, T. Integration of Government Services using Semantic Technologies. J. Theor. Appl. Electron. Commer. Res. 2011, 6, 143–154. [Google Scholar] [CrossRef] [Green Version]
  44. Piaggesi, D. Hyper Connectivity as a Tool for the Development of the Majority. Int. J. Hyperconnect. Internet Things 2021, 5, 63–77. [Google Scholar] [CrossRef]
  45. Verma, S. Sentiment analysis of public services for smart society: Literature review and future research directions. Gov. Inf. Q. 2022, 39, 101708. [Google Scholar] [CrossRef]
  46. Chui, M.; Harrysson, M.; Manyika, J.; Roberts, R.; Chung, R.; Nel, P.; Heteren, A.V. Applying Artificial Intelligence for Social Good; McKinsey Global Institute: New York City, NY, USA, 2018; Available online: https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good (accessed on 9 April 2023).
  47. Madan, R.; Ashok, M. AI adoption and diffusion in public administration: A systematic literature review and future research agenda. Gov. Inf. Q. 2023, 40, 101774. [Google Scholar] [CrossRef]
  48. Ahn, M.J.; Chen, Y.-C. Digital transformation toward AI-augmented public administration: The perception of government employees and the willingness to use AI in government. Gov. Inf. Q. 2022, 39, 101664. [Google Scholar] [CrossRef]
  49. Kumari, S.; Agarwal, B.; Mittal, M. A Deep Neural Network Model for Cross-Domain Sentiment Analysis. Int. J. Inf. Syst. Model. Des. 2021, 12, 1–16. [Google Scholar] [CrossRef]
  50. Lu, Z.; Hu, X.; Xue, Y. Dual-Word Embedding Model Considering Syntactic Information for Cross-Domain Sentiment Classification. Mathematics 2022, 10, 4704. [Google Scholar] [CrossRef]
  51. Yu, H.; Lu, G.; Cai, Q.; Xue, Y. A KGE Based Knowledge Enhancing Method for Aspect-Level Sentiment Classification. Mathematics 2022, 10, 3908. [Google Scholar] [CrossRef]
  52. Eom, S.J.; Lee, J. Digital government transformation in turbulent times: Responses, challenges, and future direction. Gov. Inf. Q. 2022, 39, 101690. [Google Scholar] [CrossRef]
  53. Zankova, B. Smart societies, gender and the 2030 spotlight—Are we prepared. SCRD J. 2021, 5, 63–76. [Google Scholar]
  54. Chui, M.; Roberts, R.; Yee, L. Generative AI is Here: How Tools Like ChatGPT Could Change Your Business; McKinsey & Company: Atlanta, GA, USA, 2022; Available online: https://www.mckinsey.com/capabilities/quantumblack/our-insights/generative-ai-is-here-how-tools-like-chatgpt-could-change-your-business (accessed on 9 April 2023).
  55. Ng, A. AI is the New Electricity; O’Reilly Media: Sebastopol, CA, USA, 2018. [Google Scholar]
  56. Google. Google Colaboratory. Available online: https://colab.research.google.com/#scrollTo=-gE-Ez1qtyIA (accessed on 25 February 2023).
  57. Google Research. BERT. 2020. Available online: https://github.com/google-research/bert (accessed on 25 February 2023).
  58. Silipo, R.; Melcher, K. Text Encoding: A Review; Towards Data Science: Toronto, ON, Canada, 2019; Available online: https://towardsdatascience.com/text-encoding-a-review-7c929514cccf#:~:text=Index%2DBased%20Encoding,that%20maps%20words%20to%20indexes (accessed on 8 April 2023).
  59. Kameni, J.; Flambeau, F.; Tsopze, N.; Tchuente, M. Explainable Deep Neural Network for Skills Prediction from Resumes, December 2021. Available online: https://www.researchgate.net/publication/357375852_Explainable_Deep_Neural_Network_for_Skills_Prediction_from_Resumes?channel=doi&linkId=61cb04a1b8305f7c4b074a9b&showFulltext=true (accessed on 8 April 2023).
  60. Arik, S.Ö.; Pfister, T. TabNet: Attentive Interpretable Tabular Learning. arXiv 2019, arXiv:1908.07442. [Google Scholar]
  61. TensorFlow. Deep Playground. Available online: https://github.com/tensorflow/playground (accessed on 19 March 2023).
  62. Brynjolfsson, E.; McAfee, A. The business of artificial intelligence. Harv. Bus. Rev. 2017, 95, 53–62. [Google Scholar]
  63. Davenport, T.H.; Ronanki, R. Artificial intelligence for the real world. Harv. Bus. Rev. 2018, 96, 108–116. [Google Scholar]
  64. Schrage, M. The key to winning with AI: Improve your workflow—Not your algorithm. MIT Sloan Manag. Rev. 2018, 59, 1–9. [Google Scholar]
  65. Vrabie, C. Smart-EDU Hub. In Proceedings of the ‘Accelerating innovation’ Smart Cities International Conference (SCIC), 10th ed., Bucharest, Romania, 8–9 December 2022; Smart Cities and Regional Development (SCRD) Open Access: Bucharest, Romania, 2022. Available online: https://www.smart-edu-hub.eu/about-scic10/conference-program10 (accessed on 18 March 2023).
  66. OpenAI. Introducing ChatGPT; OpenAI: San Francisco, CA, USA, 2022; Available online: https://openai.com/blog/chatgpt (accessed on 18 March 2023).
  67. Sutskever, I. Fireside Chat with Ilya Sutskever and Jensen Huang: AI Today and Vision of the Future; Stanford University: San Francisco, CA, USA, 2023. [Google Scholar]
  68. Akyürek, E.; Schuurmans, D.; Andreas, J.; Ma, T.; Zhou, D. What learning algorithm is in-context learning? Investigations with linear models. arXiv 2022, arXiv:2211.15661. [Google Scholar]
  69. Flender, S. Deploying Your Machine Learning Model Is Just the Beginning; Towards Data Science: Toronto, ON, Canada, 2022; Available online: https://towardsdatascience.com/deploying-your-machine-learning-model-is-just-the-beginning-b4851e665b11 (accessed on 19 March 2023).
  70. Alcott, B. Jevons’ paradox. Ecol. Econ. 2005, 54, 9–21. [Google Scholar] [CrossRef]
  71. Sorrell, S. Jevons’ Paradox revisited: The evidence for backfire from improved energy efficiency. Energy Policy 2009, 37, 1456–1469. [Google Scholar] [CrossRef]
  72. Fich, L.; Viola, S.; Bentsen, N. Jevons Paradox: Sustainable Development Goals and Energy Rebound in Complex Economic Systems. Energies 2022, 15, 5821. [Google Scholar] [CrossRef]
  73. OpenAI. GPT-4 Is OpenAI’s Most Advanced System, Producing Safer and More Useful Responses; OpenAI: San Francisco, CA, USA, 2023; Available online: https://openai.com/product/gpt-4 (accessed on 23 April 2023).
  74. Barnhart, B. The Importance of Social Media Sentiment Analysis (and How to Conduct It); Sprout Social: Chicago, IL, USA, 2019; Available online: https://sproutsocial.com/insights/social-media-sentiment-analysis/ (accessed on 6 May 2022).
  75. Dabhade, V. Conducting Social Media Sentiment Analysis: A Working Example; Express Analytics: Irvine, CA, USA, 2021; Available online: https://www.expressanalytics.com/blog/social-media-sentiment-analysis/ (accessed on 6 May 2022).
  76. LeaveBoard. Zile Lucrătoare 2022. Available online: https://leaveboard.com/ro/zile-lucratoare-2022/ (accessed on 22 March 2023).
  77. Ilyas, A.; Santurkar, S.; Tsipras, D.; Engstrom, L.; Tran, B.; Madry, A. Adversarial Examples Are Not Bugs, They Are Features. In Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
  78. Neagu, D.; Rus, A.; Grec, M.; Boroianu, M.; Bogdan, N.; Gal, A. Towards Sentiment Analysis for Romanian Twitter Content. Algorithms 2022, 15, 357. [Google Scholar] [CrossRef]
  79. Cioban, Ș. Cross-Domain Sentiment Analysis of the Natural Romanian Language. In Digital Economy. Emerging Technologies and Business Innovation; Springer Link: Berlin/Heidelberg, Germany, 2021; pp. 172–180. [Google Scholar]
  80. Keras. ResNet and ResNetV2. Available online: https://keras.io/api/applications/resnet/#resnet50-function (accessed on 8 April 2023).
  81. Sazzed, S.; Jayarathna, S. SSentiA: A Self-supervised Sentiment Analyzer for classification from unlabeled data. Mach. Learn. Appl. 2021, 4, 100026. [Google Scholar] [CrossRef]
Figure 1. Analysis and visualization of keywords (in Romanian language) as they are found in the subject lines of petitions, first classification task.
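To make the first classification step concrete, the following is a minimal sketch of how keywords can be extracted and counted from petition subject lines before being visualized as in Figure 1; the sample subjects, the stop-word list, and all variable names are hypothetical and only stand in for the anonymized corpus.

```python
# Minimal sketch: counting keywords in petition subject lines.
# The subjects and the stop-word list are illustrative placeholders,
# not items from the study's dataset.
import re
from collections import Counter

subjects = [
    "Sesizare privind transportul public",
    "Cerere parcare de resedinta",
    "Reclamatie zgomot vecini",
]

stop_words = {"privind", "de", "si", "pentru", "la", "in"}

tokens = []
for line in subjects:
    for word in re.findall(r"\w+", line.lower()):
        if word not in stop_words:
            tokens.append(word)

# Most frequent keywords, analogous to the nodes shown in Figure 1.
print(Counter(tokens).most_common(10))
```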
Figure 2. Generic model for predicting the importance (represented as a vector) of the topic addressed, based on how active the citizens/users of the petitioning system are.
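As a rough illustration of the idea behind Figure 2, the sketch below weighs each petition by how active its sender is and aggregates the result per topic; the activity signals, weights, and numbers are invented for the example and do not reproduce the model actually trained.

```python
import numpy as np

# Hypothetical activity scores per petitioner (e.g., number of prior petitions,
# activity on the official social media page), one row per citizen.
activity = np.array([
    [2, 1],   # citizen A: 2 prior petitions, some social media activity
    [0, 0],   # citizen B: first petition, no known activity
    [3, 2],   # citizen C: frequent petitioner, very active online
])

# One-hot topic of each citizen's petition (columns: parking, utilities, transport).
topics = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [1, 0, 0],
])

# Illustrative weights for the two activity signals.
w = np.array([0.6, 0.4])

# Importance vector per topic: activity-weighted count of petitions
# (+1 so petitions from inactive users still contribute).
importance = topics.T @ (activity @ w + 1.0)
print(importance)
```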
Figure 3. Training model (simplified); visual representation with Google TensorFlow (Source: github.com [61] – accessed on 19 March 2023).
Figure 4. Model pipeline; visual representation with Google TensorFlow (Source: github.com [61]).
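For readers who want to experiment, the following minimal Keras sketch shows the kind of small feed-forward classifier that Figures 3 and 4 depict; the layer sizes, the number of input indicators, and the number of output classes are assumptions for illustration, not the exact architecture used in the study.

```python
import tensorflow as tf

# Sketch of a small feed-forward classifier over tabular petition indicators
# (Table 1-style features). Layer sizes and class count are assumptions.
num_features = 12      # e.g., indicators I1..I12
num_classes = 6        # e.g., the petition types listed under item I5

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_features,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(X_train, y_train, validation_split=0.2, epochs=20)  # hypothetical data
model.summary()
```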
Figure 5. Analysis and visualization of keywords as they are found in the main corpus of 130 petitions. To give readers a sense of the complexity, this 130-text sample (≈10% of the training set) is presented; using the full dataset for the visual representation would have been impractical, as the excessive number of connections would have rendered the figure incomprehensible: (a) the complexity of the sample as seen by the machine; (b) an example of one-word connections (in this case ‘transportul’, Romanian for ‘transportation’, was chosen) with context in other petitions.
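A minimal sketch of the relation drawn in Figure 5b, i.e., locating a single keyword and the words that co-occur with it across petitions, could look as follows; the three petition texts are placeholders, not items from the real corpus.

```python
# Sketch: finding which petitions mention a given keyword and which other
# words co-occur with it, roughly the relation drawn in Figure 5b.
import re
from collections import Counter

petitions = [
    "Transportul public este aglomerat dimineata",
    "Solicit extinderea parcarii de resedinta",
    "Transportul elevilor spre scoala este dificil",
]

keyword = "transportul"
co_occurring = Counter()

for i, text in enumerate(petitions):
    words = re.findall(r"\w+", text.lower())
    if keyword in words:
        print(f"petition {i} mentions '{keyword}'")
        co_occurring.update(w for w in words if w != keyword)

print(co_occurring.most_common(5))
```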
Figure 6. Prediction models used experimentally for this article.
Table 1. Sample of the dataset used for training.

ID | Item | Value | Observations/Details
I1 | Gender * | 0/1/2 | 0—not known/1—man/2—woman
I2 | Age group ** | 0 to 6 | 0—not known/6—over 70
I3 | If it is on behalf of a company/firm | 0/1/2 | 0—ns/1—no/2—yes
I41 | Geographical 1 *** | 0 to 88 | divided into 8 major subgroups, each subdivided into another 8 subgroups (0 for undisclosed)
I42 | Geographical 2 | 0/1/2 | 0—ns/1—the sender lives in a block of apartments/2—in a house (with land/garden)
I5 | Type of petition | 0/1/2/3/4/5 | 0—ns/1—demand/2—complaint/3—referral/4—audience/5—proposal
I6 | Attachment | 0/1 | no/yes
I7 | Subject of petition | 0 to 9 | based on the words written in the Subject field (different from I3); 0 for ns
I81 | Active **** | 0/1/2 | 0—first/1—second/2—multiple
I82 | Active on official social media page | 0/1/2 | 0—first/1—second/2—multiple
I91 | Content 1 | 0/1 | if it refers to a neighbor/s (as a specific person/s)
I92 |  | 0/1 | if it refers to the neighborhood
I101 | Content 2 | 0/1/2 | 0—no/1—if it regards parking (in connection with I81)/2—if it regards parking (in connection with I82)
I102 |  | 0/1 | if it regards public utilities (in connection with I82)
I11 | Content 3 | 0/1 | 0—no/1—the content refers to the sender’s own facilities (in connection with I32)
I12 | […] ***** | […] | […]

* extracted from the first name (in Romanian, the vast majority of first names that end with ‘a’ belong to women); ** if directly (mentioned in plain text) or indirectly (mentioning he/she is a student, a retired person, etc.) disclosed by the sender; *** based on the address; **** if the person submitted more than one petition; ***** as mentioned earlier, there are several additional indicators that follow.
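To illustrate how a raw petition record could be mapped onto a few of the indicators above, here is a minimal sketch that reuses the heuristics described in the table (e.g., the first-name rule for I1); the field names and the sample record are hypothetical.

```python
# Sketch of encoding a petition record into a few Table 1-style indicators.
# Field names and the sample record are hypothetical; only the heuristics
# described in the table (e.g., first names ending in 'a' -> woman) are reused.
def encode(petition: dict) -> dict:
    # I1 (gender): 0 - not known, 1 - man, 2 - woman.
    first_name = petition.get("first_name", "")
    if not first_name:
        gender = 0
    else:
        gender = 2 if first_name.lower().endswith("a") else 1

    # I6 (attachment): 0 - no, 1 - yes.
    attachment = 1 if petition.get("attachments") else 0

    # I81 (active): 0 - first petition, 1 - second, 2 - multiple.
    prior = petition.get("prior_petitions", 0)
    active = 0 if prior == 0 else (1 if prior == 1 else 2)

    return {"I1": gender, "I6": attachment, "I81": active}

sample = {"first_name": "Ioana", "attachments": ["poza.jpg"], "prior_petitions": 3}
print(encode(sample))   # {'I1': 2, 'I6': 1, 'I81': 2}
```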
Table 2. Correlation matrix; sample results based on Table 1.

 | I1 | I2 | I3 | I4 * | I5 | I6 | I7 | I8 * | I9 * | I10 * | I11 | I12
I1 | 1.0000 | −0.0955 | 0.0354 | 0.0838 | −0.0127 | 0.1109 | 0.0129 | −0.0492 | 0.4208 | 0.3429 | 0.4265 | […]
I2 | −0.0955 | 1.0000 | 0.0697 | −0.3778 | −0.2118 | −0.0464 | −0.0506 | 0.0533 | −0.0357 | −0.1504 | −0.0788 | […]
I3 | 0.0354 | 0.0697 | 1.0000 | −0.0231 | −0.0205 | −0.0147 | −0.0167 | −0.0876 | 0.0235 | 0.0896 | 0.0111 | […]
I4 * | 0.0838 | −0.3778 | −0.0231 | 1.0000 | 0.1147 | 0.0146 | 0.0097 | 0.0130 | 0.0466 | 0.0844 | −0.0051 | […]
I5 | −0.0127 | −0.2118 | −0.0205 | 0.1147 | 1.0000 | −0.0443 | 0.0345 | −0.0245 | −0.0287 | 0.0512 | 0.0208 | […]
I6 | 0.1109 | −0.0464 | −0.0147 | 0.0146 | −0.0443 | 1.0000 | −0.0062 | −0.0025 | 0.1103 | 0.1226 | 0.1061 | […]
I7 | 0.0129 | −0.0506 | −0.0167 | 0.0097 | 0.0345 | −0.0062 | 1.0000 | −0.0803 | 0.0919 | −0.0316 | 0.0499 | […]
I8 * | −0.0492 | 0.0533 | −0.0876 | 0.0130 | −0.0245 | −0.0025 | −0.0803 | 1.0000 | −0.0220 | −0.0073 | −0.0710 | […]
I9 * | 0.4208 | −0.0357 | 0.0235 | 0.0466 | −0.0287 | 0.1103 | 0.0919 | −0.0220 | 1.0000 | 0.2737 | 0.4082 | […]
I10 * | 0.3429 | −0.1504 | 0.0896 | 0.0844 | 0.0512 | 0.1226 | −0.0316 | −0.0073 | 0.2737 | 1.0000 | 0.2322 | […]
I11 | 0.4265 | −0.0788 | 0.0111 | −0.0051 | 0.0208 | 0.1061 | 0.0499 | −0.0710 | 0.4082 | 0.2322 | 1.0000 | […]
I12 | […] | […] | […] | […] | […] | […] | […] | […] | […] | […] | […] | 1.0000

* obtained after adjusting the h(x) function (Figure 1) with the values from associated vectors.
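A correlation matrix such as Table 2 can be reproduced in a few lines once the indicators are encoded; the sketch below uses random placeholder data, so the coefficients it prints will not match the published values.

```python
import numpy as np
import pandas as pd

# Sketch: computing a Pearson correlation matrix over encoded indicators,
# as in Table 2. The data below is random placeholder data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "I1": rng.integers(0, 3, size=200),
    "I2": rng.integers(0, 7, size=200),
    "I5": rng.integers(0, 6, size=200),
    "I6": rng.integers(0, 2, size=200),
})

print(df.corr(method="pearson").round(4))
```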
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
