Article

Cooperatives and the Use of Artificial Intelligence: A Critical View

by Maria Elisabete Ramos 1, Ana Azevedo 2,*, Deolinda Meira 2 and Mariana Curado Malta 2,3
1 CeBER, Faculty of Economics, University of Coimbra, 3004-512 Coimbra, Portugal
2 CEOS—Polytechnic of Porto, 4200-465 Porto, Portugal
3 ALGORITMI Research Center/LASI, 4800-058 Guimarães, Portugal
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(1), 329; https://doi.org/10.3390/su15010329
Submission received: 18 November 2022 / Revised: 12 December 2022 / Accepted: 20 December 2022 / Published: 25 December 2022

Abstract

Digital Transformation (DT) has become an important issue for organisations, and DT is recognised as fuelling Digital Innovation. Technologies and practices such as distributed ledger technologies, open source, analytics, big data, and artificial intelligence (AI) enhance DT. Among these technologies, AI provides tools to support decision-making and even to make decisions automatically. Cooperatives are organisations with a mutualistic scope, characterised by participatory cooperative governance that follows from the principle of democratic control by the members. In a context where DT is here to stay and the dematerialisation of processes can bring significant advantages to any organisation, this article presents a critical reflection on the dangers of using AI technologies in cooperatives. We base this reflection on the Portuguese Cooperative Code, which is not very different from the cooperative laws of other countries worldwide, as they are all based on the Statement of Cooperative Identity defined by the International Cooperative Alliance. We understand that the entry of AI technologies into cooperatives cannot be stopped. Therefore, we present a framework for using AI technologies in cooperatives without damaging the principles and values of this type of organisation.

1. Introduction

Cooperatives are organisations of an atypical entrepreneurial nature. These atypical features are reflected in the “Statement of Cooperative Identity” defined by the International Cooperative Alliance (ICA) in Manchester in 1995, which is based on a set of seven principles (Cooperative Principles—these principles are as follows: Voluntary and Open Membership; Democratic Member Control; Member Economic Participation; Autonomy and Independence; Education, Training, and Information; Cooperation among Cooperatives; and Concern for Community), on a set of values that shape those principles (Cooperative Values—cooperatives are based on the values of self-help, self-responsibility, democracy, equality, equity, and solidarity. In the tradition of their founders, cooperative members believe in the ethical values of honesty, openness, social responsibility, and caring for others.), and on a definition of a cooperative (the ICA has established that “A cooperative is an autonomous association of persons united voluntarily to meet their common economic, social, and cultural needs and aspirations through a jointly-owned and democratically-controlled enterprise.”) [1]. This cooperative identity defined by the ICA, with its emphasis on cooperative principles, provides a common orientation for cooperative laws worldwide, so no major differences in concepts and practical solutions are found across countries [2].
One of the big challenges caused by the COVID-19 pandemic was the need for many organisations to have their staff working from home, especially during lockdowns. The organisations most advanced in terms of Digital Transformation (DT) were the ones that adapted the quickest and gained leverage as a result. Many social economy organisations underwent a forced DT (cf. [3,4]). However, this fact has given organisations the impetus to reflect on opportunities for innovation. On the other hand, society has understood the advantages of digitalisation, and the use of digital tools in the lives of ordinary citizens has greatly increased. We now live in a very different world compared to the one existing before COVID-19. The dematerialisation of processes and activities has brought great benefits to organisations. It is clear that all organisations, including those of the social economy, need to keep up with this trend. The European Union (EU) has focused on Digital Innovation (DI) and the DT of the Social Economy (SE) in the last few years. The EU developed a study that established the state of the art of DT in SE organisations of nine European countries (cf. [5]) and even supported a workshop entitled “Blockchain, digital social innovation and social economy. The future is here!” on 7 February 2019. Some of the technologies and practices that might enhance DT in organisations and leverage SE organisations in the context where they operate are artificial intelligence (AI), big data, open source, and blockchain, among others [5]. These technologies can be used on their own or integrated into digital social platforms. However, some of these practices may endanger the intrinsic nature of Cooperatives concerning democratic governance, autonomy, and independence. Like any other organisation in the market, Cooperatives are exposed to business pressure for efficiency and, consequently, to pressure to introduce AI tools in their decision-making processes. Given the risks inherent in AI, cooperatives should reject AI tools incompatible with the cooperative principles they are bound to observe in their organisation and operation.
Cooperatives currently face many challenges, which may force them to reinvent themselves from a legal point of view. In this context, we foresee a problem: how can cooperative governance keep up with the trend of using AI? This paper aims to draw attention to some practices used in AI, namely at the level of the algorithm (which makes the “intelligent” decisions), that may go against the cooperative principles of democratic member control and of autonomy and independence, and against the mission of serving communities in response to social and societal challenges.
This article presents a case study, since our analysis and discussion rely on the “Portuguese Cooperative Code”. Nevertheless, cooperative legal frameworks are close to one another throughout the world. Thus, we consider our framework helpful and applicable to countries other than Portugal.
Our article is a novelty in the literature, since no studies exist on the implications of using AI in Cooperatives. Some studies can be found on the implications of AI for society and for firms in the public and private sectors [6,7,8,9,10,11,12,13,14,15,16,17]. Additionally, some studies approach the implications of AI in the governance domain [9,18,19] and the governance of AI [20,21,22,23]. Organisations from the SE sector, such as Cooperatives, constitute, due to their specific nature, an important area for reflection on the impacts of AI.
This paper unfolds in the following way: Section 2 presents aspects of governance in cooperatives, namely their main features, the cooperatives’ governance structures, the central role of members, and transparency in the cooperatives’ governance. Section 3 first introduces the concepts of digital transformation and digital innovation and then presents the main concepts of AI, focusing on the decision-making process, the definition of AI, the effects of AI on decision-making, and the European Union framework for AI. Section 4 presents a critical view of the use of AI in cooperatives, ending with the presentation of a framework for the use of AI in cooperatives. Finally, Section 5 concludes the paper.

2. Governance in Cooperatives

2.1. Main Features

A cooperative is defined in Article 2.1 of the Portuguese Cooperative Code, approved by Law 119/2015, August 31 (PCC), as an “autonomous association of persons, united voluntarily, of variable composition and capital, which, through cooperation and mutual assistance on the part of its members and in accordance with cooperative principles, aims not at profit but at satisfying the economic, social, or cultural needs and aspirations of their members”.
It follows from the definition that the cooperative does not have an autonomous purpose vis-à-vis its members, but rather is an instrument to satisfy the individual needs of each and every member, who will use it as cooperators to work, consume, sell, and/or provide services. For this reason, it is said that cooperatives have a mutualistic scope, because of which, the performance of the cooperative’s governance organs will necessarily and primarily be oriented towards the promotion of the cooperators’ interests, that is, to the satisfaction of their economic, social, and cultural needs.
Compliance with cooperative principles is mandatory for the Portuguese legislator. Cooperative principles are enshrined in the Portuguese Constitution (CRP) text. Article 61.2 of the CRP states that “Everyone is accorded the right to freely form cooperatives, subject to compliance with cooperative principles.” Article 82.4.a) of the CRP states that the cooperative sub-sector covers “means of production that cooperatives possess and manage in accordance with cooperative principles.” Hence, a disregard for cooperative principles in business operations is a cause for dissolution (Article 112.1.h) of the PCC). These principles are embodied in Article 3 of the PCC [24]:
(i)
voluntary and open membership;
(ii)
democratic member control;
(iii)
economic participation by members;
(iv)
autonomy and independence;
(v)
education, training, and information;
(vi)
cooperation among cooperatives; and
(vii)
concern for the community.
Cooperative governance is characterised as:
(i)
participatory governance, due to the principle of democratic control by members;
(ii)
oriented towards its members, following the mutualist aim of the cooperative;
(iii)
autonomous and independent, under the principle of autonomy and independence; and
(iv)
transparent, due to the members’ right to information enshrined in the PCC and the control and supervision of the Board of Directors by the General Assembly and Supervisory Board.
Participatory governance stems from the fact that cooperatives are organisations of a markedly personalistic nature, given the importance granted to cooperators within the organisation. Cooperatives as organisations—under the cooperative principle of voluntary and open membership (Article 3 of the PCC)—are open to new members, provided that the personal circumstances of the aspiring member are appropriate to the cooperative’s social object. This means that cooperatives are set up as “companies of persons” with a broad social basis. This personalistic focus implies active participation in cooperative governance by members, meaning that under the cooperative principle of education, training, and information (Article 3 of the PCC), cooperatives promote “education and training for their members, elected representatives, managers, and employees so that they can contribute effectively to the development of their cooperatives”.
In terms of cooperative governance, education and training should [25]:
(i)
provide the members of the cooperative with knowledge about the cooperative’s principles and values, namely to induce them to actively participate in their cooperative, to deliberate properly at meetings, and to elect their bodies consciously and monitor their performance;
(ii)
teach the leaders and managers of the cooperative to exercise the power they have been appropriately given and preserve the trust placed in them by the other members to obtain knowledge of the cooperative and show a level of competence in keeping with the responsibilities of their office;
(iii)
provide workers with the technical expertise needed for proper performance; and
(iv)
foster a sense of solidarity and cooperative responsibility in the cooperative’s community.
The principle that cooperative governance must be autonomous and independent (Article 3 of the PCC) means that relations between cooperatives and the State must not affect the cooperatives’ governance. In the Portuguese legal system, as stated in Article 85 of the CRP, the State must provide incentives and support for the activities of cooperatives. “Incentives” means rules that promote cooperative activity, whereas “support” means administrative measures aimed at the same purpose. Nonetheless, State-supported cooperative activity must not be understood as State-led cooperative activity. Therefore, the State’s duties cannot jeopardise the right of cooperatives to pursue their activities freely, and the granting of stimulus and support cannot interfere with their governance. In addition, external capital sources cannot affect the autonomy of cooperatives or democratic control by their members [26].
It should be noted that cooperative governance is not restricted to the cooperative’s internal relations. Indeed, the very fact that a cooperative pursues goals in both the business and the social areas, in such a way that one complements the other, cannot but be reflected in the governance of the cooperative. Thus, the paradigm of cooperative governance is in line with the fundamental principles of “corporate social responsibility” (CSR) and is based on adopting best practices regarding the organisation, equal opportunities, social inclusion, and sustainable development.
In the context of cooperative governance, CSR is not voluntary. In other words, given the cooperative legal framework, namely the fact that a cooperative’s bodies must comply with the principle of concern for the community (Article 3 of the PCC)—which provides that “cooperatives work for the sustainable development of their communities through policies approved by their members”—it should be argued that the governance bodies, although focused on meeting members’ needs, have a legal duty to work towards the sustainable development of their communities according to criteria the members themselves have approved.
In short, the cooperative bodies that are responsible for cooperative governance have a duty to integrate the fundamental values of CSR into their activities [27].

2.2. The Cooperative Governance Structures

The structure of the governing bodies of cooperatives can be characterised as hierarchical and tripartite (Figure 1). Pursuant to Article 27.1 of the PCC, the bodies of a cooperative are:
(i)
the General Assembly;
(ii)
the Board of Directors; and
(iii)
the Supervisory Board.
The General Assembly is considered to be the supreme body of a cooperative. It is composed of all the members, and its resolutions are binding on all bodies of the cooperative and all its members (Article 33.1 of the PCC).
The term “supreme body” of the cooperative assumes a threefold meaning [28,29]:
(i)
the most important and decisive issues in the life of the cooperative fall within the remit of the general assembly (Article 38 of the PCC);
(ii)
the general assembly elects the members of the corporate bodies from among the collective of cooperators (Article 29(1) of the PCC); and
(iii)
the resolutions adopted by the general assembly, according to the law and the bylaws, are binding on all the other bodies of the cooperative and all its members (Article 33(1) of the PCC).
According to Article 28 of the PCC, the management and supervisory board of a cooperative may be structured in one of the following ways: (a) a Board of Directors and a Supervisory Board; (b) a Board of Directors with an Audit Committee and an auditor; or (c) an Executive Board of Directors, a General and Supervisory Board, and an auditor.
Each cooperative must choose the management and supervision model it wishes to adopt, and this choice must be reflected in the cooperative’s statutes (Article 16.1.d) of the PCC). In cooperatives with 20 or fewer members, it is possible to have a sole director (Articles 28.2 and 45 of the PCC) and a single supervisor, as provided in the statutes.
The Board of Directors is primarily an executive body (Articles 47 and 62 of the PCC), while the Supervisory Board is a control and oversight body (Articles 53, 61, and 64 of the PCC) [30].

2.3. The Central Role of Members of the Cooperatives

Cooperatives are jointly owned by their members, who also democratically control the enterprise. In cooperative governance, the central importance of cooperator members has significant consequences concerning the following aspects: equal treatment of members regardless of their financial participation; equal voting rights for all members (“one member, one vote”); the adoption of decisions by a majority of votes; and the election of the representatives of a cooperative by its members.
Among these consequences, we should highlight the democratic decision-making process, which is considered one of the cooperatives’ most significant peculiarities—reflected in their governance. This aspect is based on the cooperative principle of democratic member control as stated in Article 3 of the PCC, which establishes that “cooperatives are democratic organisations controlled by their members, who actively participate in setting their policies and in making decisions. The men and women who serve as elected representatives are accountable to the membership. In primary cooperatives, members have equal voting rights (“one member, one vote”), and cooperatives at other levels are also organised democratically.” From this principle stems the requirement of active participation on the part of members in establishing the cooperative’s policies and in the cooperative decision-making process by taking part in General Assemblies (Articles 21.1.b) and 22.2.a) of the PCC).
Moreover, for cooperatives, direct and active involvement on the part of members in the cooperative activities is a sine qua non requirement (Article 22.2.c) of the PCC). Democratic control by members is also based on the rule of equal voting rights [31].
The centrality of members is also projected in the composition of the cooperative bodies. Whatever the supervision model, the supervisory body is, as a rule, composed of cooperative members (Article 29(1) of the PCC). Legislators designed this mechanism to ensure that members of the cooperative’s governance organs would focus on promoting the interests of members. Through the direct representation of members in administrative and supervisory bodies, members gain experience in the dual role of beneficiary and manager and remain permanently aware of the importance of ensuring that the interests of the cooperators do not deviate from the primary purpose of the cooperative [27].

2.4. Transparency in the Cooperatives’ Governance

The value of transparency in governance is central to cooperatives and is inseparable from active participation in democratic cooperative control. Transparency is ensured first by recognising members’ rights to information, which provide them with a working knowledge of how business is conducted and of the cooperative’s social status and, consequently, with control over its management [32]. The principle of transparency requires that it be known what information and rationale were taken into consideration for a decision and how the decision was made. This right to information has both an active and a passive side.
The passive component comprises the multiple obligations of the cooperative bodies, including the Board of Directors, to disclose facts and to make documents relating to the life of the cooperative available. Of particular note is the duty of the Board of Directors to provide information by making the management report, the year’s accounting documents, the business plan, and the budget for the following year available to members at the headquarters of the cooperative, accompanied by a professional opinion of the Supervisory Board (Article 47.a) of the PCC and Article 263.1 of the CSC, applicable by reference to Article 9 of the PCC).
The active component of the right to information is enshrined in Article 21.1.d) of the PCC, which states that members have the right to request information from the cooperative’s competent bodies and to examine its records and accounts. It follows from this rule that the active information actors are the members and the passive actors are the “competent bodies of the cooperative.”
As for the subject of the right to information, members may obtain information without any limitations on the content of the information requested, except for situations in which the provision of information violates the secrecy rules imposed by law (Article 21.3 of the PCC).
To conclude Section 2, we can state that, taken together, the control by members, transparency in governance, active participation of the members in governance, equal voting rights, and the direct representation of members in administrative and supervisory bodies define the Cooperative as a democratic organisation (Figure 2).

3. Framing Artificial Intelligence in the Decision-Making Process

3.1. Digital Transformation and Digital Innovation

Digital Transformation (DT) is a term coined in the last few years; it is considered crucial for the competitiveness of organisations. This importance is visible in several governments of the European Union. For instance, in the Portuguese Government, there is a Minister of “Economy and Digital Transition”; in the Spanish Government, a Minister “for Economic Affairs and Digital Transformation”; in the French Government, a Secretary of State “for the Digital Transition”; and in the German Federal Government, a Minister “of the Digital”, among others.
DT is about integrating digital technologies into an organisation’s procedures to speed up processes, make the organisation more efficient and perhaps more intelligent, and enable digital innovation (DI). According to Yoo et al. [33], DI is “the carrying out of new combinations of digital and physical components to produce novel products”. DT is the ability to use digital technologies and integrate them into the various business areas of the organisation, enhancing interoperability with business partners, other organisations, customers, or even government agencies. A DT process is not merely the introduction of technology into an organisation. DT is a complex process that implies changing ways of doing things, changing old procedures and adding new ones, and training human resources to deal with these new procedures and the technologies introduced. For this reason, a DT process is not easy to implement in organisations: it implies some disruption of the status quo, taking people out of their comfort zone so that they learn how to do things differently and with different tools.
European organisations broadly use technologies to support their management activities and to provide services. However, DT asks organisations to improve the interoperability of software applications, to use big data, and to integrate Internet of Things (IoT) devices and AI techniques into their software systems. Moreover, these systems should rely on open-source software, and organisations should use cloud computing and Distributed Ledger Technology to enhance DT. All these new technologies and practices can be integrated into digital platforms that transform how organisations interact with their stakeholders. The most common situation today is the use of technologies to support services. DT, however, allows organisations to climb the innovation ladder, so that technologies first permeate the service and can ultimately “be” the service [5].
DI allows organisations to open up to new digital products or services enabled by technologies, creating new lines of business and new fields of work in the organisations. In the social economy context, these innovative technologies allow communities, for example, to organise their activities around digital platforms. Digital platforms profit from global reach, dematerialisation, customisation, and personalisation, which are characteristics that enhance activities on the Internet [34]. There are examples of these digital platforms in several parts of the world (see https://platform.coop/, accessed on 14 November 2022); Scholz [35] analysed the activities developed by “platform cooperatives”, a particular type of digital platform, and created a framework for categorising them.
Open source technologies, the Internet of Things (IoT), Distributed Ledger Technology (including blockchain), Big Data, Cloud Computing, and AI practices and technologies have great potential for digital innovation [5]. In this paper, we focus on the use of AI by cooperatives in the decision-making process.

3.2. The Decision-Making Process

The decision-making process can be divided into four phases (Figure 3). Starting from the initial problem or opportunity, we have the following phases:
(i)
the Intelligence phase involves a thorough examination of the environment, producing reports, queries, and comparisons;
(ii)
the Design phase involves creativity by developing and finding possible alternatives and solutions;
(iii)
the Selection phase involves the comparison and selection of one of the alternatives obtained in the previous phase;
(iv)
the Implementation phase involves putting the selected solution into action and adapting it to a real-life situation.
There is the possibility of returning to any of the previous phases whenever necessary.
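To make the cycle concrete, the following minimal Python sketch walks through the four phases for a deliberately simplified, hypothetical problem (choosing a supplier quote within a budget); the data, function names, and thresholds are illustrative only and do not come from any real decision support tool.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        satisfactory: bool
        value: str

    def gather_intelligence(problem):
        # Intelligence phase: examine the environment (here, a fixed set of quotes)
        return {"quotes": {"A": 100, "B": 90, "C": 120}, "budget": 95}

    def design_alternatives(findings):
        # Design phase: develop the alternatives that fit the budget
        return [name for name, price in findings["quotes"].items()
                if price <= findings["budget"]]

    def select_alternative(alternatives):
        # Selection phase: compare the alternatives and pick one (here, simply the first feasible option)
        return alternatives[0] if alternatives else None

    def implement(choice):
        # Implementation phase: put the selected solution into action
        return Outcome(satisfactory=choice is not None, value=choice or "none")

    result = implement(select_alternative(design_alternatives(gather_intelligence("choose a supplier"))))
    print(result)  # Outcome(satisfactory=True, value='B')

In practice, each phase is far richer than this toy walk-through, and the process may loop back to an earlier phase, as noted above.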
To support the decision-making process, organisations increasingly use so-called Decision Support Systems (DSS). It is essential to emphasise that DSS may provide support in all the phases of the decision-making process. It is also fundamental to note that, as their name implies, DSS do not make decisions; they support decisions made by managers, who have a fundamental role in all the phases presented in Figure 3.
Decision Support Systems can be found all over organisations and are widely used in diverse situations. Their roots date back to the late 1960s, when the first scientific works in this area were developed [37]. DSS are now recognised as having a positive impact on the decision-making process at both the organisational and the personal level.
DSS can also be introduced as the area of the Information Systems discipline that focuses on supporting and improving management decision-making [38]. DSS can also be presented as a computer-based solution supporting complex decision-making and solving complex semi-structured or unstructured problems [39,40].
There are several types of DSS, but we can consider four components, or subsystems, that are common to all [36]:
  • the data management subsystem allows access to a multiplicity of data sources, types and formats;
  • the model management subsystem allows access to a multiplicity of capabilities with some suggestions or guidelines available;
  • the user interface subsystem allows the users to access and control the DSS;
  • the knowledge-based management subsystem allows access to various AI tools that provide intelligence to the other three components and mechanisms to solve problems directly.
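Purely as an illustration of how these four subsystems fit together, the following Python sketch composes them into a toy DSS; the class names, data, and advice rule are hypothetical and merely mirror the roles listed above.

    class DataManagement:
        # Data management subsystem: access to several (here, in-memory) data sources
        def __init__(self, sources):
            self.sources = sources
        def query(self, name):
            return self.sources.get(name, [])

    class ModelManagement:
        # Model management subsystem: runs an analytical model over the data
        def run(self, model, data):
            return model(data)

    class KnowledgeBase:
        # Knowledge-based management subsystem: adds "intelligence" to the result
        def advise(self, result):
            return "investigate further" if result < 0 else "looks healthy"

    class UserInterface:
        # User interface subsystem: how the user accesses and controls the DSS
        def show(self, result, advice):
            print(f"result={result}, advice={advice}")

    class DSS:
        def __init__(self, data, models, knowledge, ui):
            self.data, self.models, self.knowledge, self.ui = data, models, knowledge, ui
        def support(self, source, model):
            result = self.models.run(model, self.data.query(source))
            self.ui.show(result, self.knowledge.advise(result))

    dss = DSS(DataManagement({"monthly_margin": [120, -40, 60]}),
              ModelManagement(), KnowledgeBase(), UserInterface())
    dss.support("monthly_margin", sum)  # prints: result=140, advice=looks healthy

Note that, in line with the definition above, the sketch only presents results and advice to the user; the decision itself remains with the human manager.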
In recent decades, a new approach to decision support, called Automated Decision Systems (ADS), has emerged in the DSS field [36]. ADS are rules-based systems that solve a specific and repetitive management problem, such as setting prices or deciding on a loan request. This approach is becoming very popular in the retail, banking, and transport industries for automatic decision-making without human intervention, and it extensively uses AI techniques and algorithms. It is worth emphasising that ADS somehow contradict the concept of DSS, which is based on the premise that humans are the ones who make the decisions, supported by the insights provided. Nevertheless, automation is already changing the way people work and live. As a society, we must learn how to deal with these changes.
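The loan-request example mentioned above can be sketched as a minimal rules-based ADS in Python; the rules and thresholds below are invented for illustration and are not drawn from any real system.

    def decide_loan(income: float, debt: float, amount: float) -> str:
        # Fixed, pre-coded policy; the decision is produced with no human in the loop
        if income <= 0:
            return "reject"                  # rule 1: no verifiable income
        if (debt + amount) / income > 0.40:
            return "reject"                  # rule 2: resulting debt burden too high
        if amount > 10 * income:
            return "refer to human analyst"  # rule 3: unusually large request
        return "approve"                     # all rules passed

    print(decide_loan(income=2500, debt=300, amount=600))  # approve
    print(decide_loan(income=2500, debt=900, amount=600))  # reject

The decision is produced entirely by the encoded rules, without human intervention, which is precisely the property discussed in the remainder of this paper.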

3.3. Artificial Intelligence

It is common sense that intelligence comes from learning from one’s own errors and successes, leading to better decisions, and therefore more success, when facing new situations. The contemporary concept of Artificial Intelligence (AI) comes from applying this assumption to computers, the artificial world.
AI has its roots in the 20th century with Alan Turing, who proposed the so-called “Turing test” [41]. This test states that if one can interact with a computer (machine) and not be able to distinguish it from a human being, then the machine can be considered intelligent. Alan Turing was looking for a machine that resembles, as much as possible, the human way of thinking and behaving. Since then, the concept has undergone a long evolution with several upheavals, mapped by the Turing Award winners nominated since 1966 (the list of Turing Award winners can be found here: https://amturing.acm.org/byyear.cfm, accessed on 14 November 2022).
According to Russell and Norvig (p. 52 [42]), “AI is concerned mainly with rational action. An ideal intelligent agent takes the best possible action in a situation.” This is possible with specific algorithms based on techniques such as Machine Learning (ML) and its most recent evolutions, such as Deep Learning (DL).
We can say that “an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output” [43].
ML is based on the concept of learning by examples. ML algorithms use historical data (the examples) as input and apply several techniques to build, or train, a model as output. Training amounts to formulating hypotheses and selecting the one that will optimally fit future examples, considering several performance metrics, which depend on the types of techniques and models [42,44]. Those models are then used to predict the future by providing the best possible decision or decisions. Several types of ML models can be trained, such as classification or association rules, trees, linear or regression models, and perceptrons, among others. Most DL models are based on multilayer perceptrons, a form of algebraic circuit, with parameters that are adjusted during the training process. “Deep” comes from the fact that there are many layers with many connections among them [42,44].
It is crucial to evaluate the success of ML algorithms and determine which are the best models. The success of ML algorithms depends on the volume and quality of the data used as input. Typically, high volumes of data produce better models; this is particularly relevant for DL. Data quality is assured by a process known as pre-processing of the data [42,44,45]. From a technological point of view, the success of ML algorithms is evaluated with several performance measures, such as accuracy, recall, precision, or mean-squared error, among others. Like the models, these measures are also outputs of the algorithms and allow several different models to be compared so that the most successful ones can be chosen. Typically, DL algorithms are the ones that achieve the best performance and can solve very difficult problems, outperforming humans in many cases.
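As a concrete illustration of the train-and-evaluate cycle described in the two preceding paragraphs, the short Python sketch below assumes the scikit-learn library and uses purely synthetic data; it trains a simple decision tree and reports accuracy, precision, and recall.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # Synthetic "historical examples" used as the input of the learning algorithm
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # The trained model is the output of the algorithm
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # Performance measures used to compare candidate models
    y_pred = model.predict(X_test)
    print("accuracy :", accuracy_score(y_test, y_pred))
    print("precision:", precision_score(y_test, y_pred))
    print("recall   :", recall_score(y_test, y_pred))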
Nevertheless, DL models work as black boxes since the steps that are used to provide a decision are unknown to the users. Additionally, some ML models work as black boxes, as opposed to models such as rules or trees, known as white or transparent boxes. This is a relevant issue when evaluating the success of a model, in addition to the performance measures. Thus, besides performance, trust and transparency should also be considered to evaluate the success of the ML algorithms and their AI applications [42].
To achieve trust, two basic principles should be considered: “Interpretability” and “Explainability”. Interpretability means one “can inspect the actual model and understand why it got a particular answer for a given input, and how the answer would change when the input changes”. Explainability means “the model itself can be a hard-to-understand black box, but an explanation module can summarise what the model does” (p. 729 [42]).
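Continuing the sketch above (and again assuming scikit-learn), a white-box model such as the decision tree trained earlier offers one concrete form of interpretability: its learned rules can be printed and read directly.

    from sklearn.tree import export_text

    # The tree's learned if/then splits are printed in plain text, so a reader can
    # see why a particular input receives a particular answer and how the answer
    # would change when the input changes.
    print(export_text(model, feature_names=[f"x{i}" for i in range(8)]))

A black-box model, by contrast, would need a separate explanation module to summarise what it does, as described in the definition of explainability above.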

3.4. The Effects of Artificial Intelligence on Decision-Making

AI is a very powerful tool that significantly impacts the decision-making process in organisations, presenting several positive but also negative aspects. On the positive side, AI can help save and improve lives, help people with disabilities, replace tedious and dangerous tasks, democratise access to advanced technologies, and make businesses more productive. Negative aspects include leading to a lack of security and privacy, the potential to produce lethal weapons, and unexpected errors or side effects, among others.
All those involved in developing and implementing AI tools must demonstrate that AI systems are fair, trustworthy, and transparent, fostering the positive aspects and preventing the negative ones. Ultimately, it is impossible to attain all these aspects simultaneously, which raises some ethical issues. Government institutions, non-profit organisations, and companies have already defined some ethical principles. As presented in Russell and Norvig [42], AI should: ensure safety, establish accountability, ensure fairness, uphold human rights and values, respect privacy, reflect diversity and inclusion, promote collaboration, avoid concentration of power, provide transparency, acknowledge legal or policy implications, limit harmful uses of AI, and contemplate implications for employment.
Based on the history of AI presented in [42], Table 1 summarises the main AI technologies and their pros and cons.

3.5. The European Union and the Standardisation of the Concept of Artificial Intelligence

As presented above, a first approach to AI designates systems designed to think or act like humans for a specific purpose. Recently, the proposal of the European Union (EU) for a regulation on artificial intelligence defines “Artificial intelligence system” (AI system), as “a computer program developed with one or several of the techniques and approaches listed in Annex I (“Annex I” does not refer to this article; it refers to the annex I of the document in https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206), (accessed on 14 November 2022) capable, in view of a given set of objectives defined by humans, of creating outputs, such as content, predictions, recommendations or decisions, which influence the environments with which it interacts” (proposal for a regulation of the European Parliament and of the Council establishing harmonised rules on artificial intelligence (Artificial Intelligence Regulation) and amending certain Union legislative acts {SEC(2021) 167 final} - {SWD(2021) 84 final} - {SWD(2021) 85 final}).

4. Discussion

The application of AI in the decision-making process of Cooperatives raises the question of who makes the decision and how the decision is made. In this section, we critically analyse these issues based on the concepts and ideas presented above. At the end of the section, we present a framework for the use of DSS in Cooperatives, contributing to the reflection on the introduction of AI in this type of organisation.

4.1. A Critical View on the Use of Artificial Intelligence in Cooperatives

The disruptive nature of AI-based DSS lies in the circumstance that they do not work according to deductive logic but rather draw inferences from a set of information (p. 19 [46]). In particular, ML algorithms learn from the data provided or from previous decisions made by themselves. One of the requirements for this to be possible is the availability of big data. Big data renders the data opaque and, moreover, enables algorithms to relate data that are themselves unrelated to each other (p. 25 [46]). Uncontrolled inferences are created [47]. For example, in the insurance industry, an English newspaper reported situations of algorithmic discrimination in which an insurance company charged a higher premium to policyholders named Mohamed than to policyholders named Smith [48]. In fact, in the realm of big data, it is already known that closed, outdated, irrelevant, false, or falsified data can be drawn upon for decision-making and used by AI. Thus, the conditions are created for AI to replicate or amplify past discriminations that historical progress has tried to eliminate (p. 33 [48,49]).
AI can be used to aid the functioning of the body responsible for the management and representation of the cooperative; think, for example, of the potential of AI in the credit business [50] or in the field of high-frequency trading [51]. Thus, the decision to introduce AI tools in Cooperatives may, on the one hand, constitute a competitive advantage in the market and, on the other, represent a form of attack on cooperative values and principles. Therefore, AI, whether it supports the decisions of the management body (i.e., the use of DSS) or makes decisions itself (i.e., the use of ADS), is relevant for the purposes of the duties of care of cooperative directors [52,53].
The duties of care that bind the directors require that the management be informed about the capabilities and risks of using AI [35], so that the use of DSS tools can be safeguarded. These duties of diligence and care can be fulfilled if the management body includes members with technological skills, creating, for example, technology committees [54]. As there is no legal rule requiring a technology committee or other structure at the board of directors or executive board level specifically dedicated to managing the challenges of AI, the directors’ duties of care and diligence require them to seek accurate information on the potential risks of the AI used.
In certain circumstances, AI constitutes a powerful tool for competitive differentiation due to the technological advances it incorporates and the efficiency gains it provides. Even so, such gains do not always correspond to a duty for management body members to adopt AI tools. It is not surprising that, under certain circumstances, a duty not to adopt AI arises, considering, for example, the risk of decisions that violate cooperative values and principles, such as discriminatory decisions. Article 46.1.b) of the PCC constitutes a sufficient legal-cooperative ground to support the legal duty of not adopting AI. As a consequence, the decision not to adopt AI is lawful and not liable to trigger the civil liability of administrators for, e.g., the loss of business opportunities [55].
Indeed, digital literacy issues are crucial. Suppose the cooperative is constitutionally and legally prevented from recruiting non-cooperators or non-investor members to the management and representative bodies [6]. In that case, it will be necessary to provide the organisation with people with adequate skills who can produce the appropriate information for decision-making on the introduction of AI tools.
Currently, the PCC prevents an algorithm from being admitted as a cooperator or investor member, even if it were attributed the status of an “e-person”. Legal persons can indeed be admitted as cooperators or investor members, but it seems to us that we cannot assimilate an algorithm to the status of a legal person. For one thing, legal persons have collective legal personality at the current stage of the law, and algorithms do not. However, the central problem is not, in fact, the non-attribution of electronic legal personality to algorithms. The problem is more profound and has to do with a difference in nature. While the legal person is still an instrument at the service of humans for their purposes and is managed by humans, the algorithm is in a technical position to take decisions autonomously.
The expression “AlgoBoards” refers to the composition of the board of directors by algorithms (p. 40 [54]). Furthermore, this step is not only a matter of the future; it is already present in companies [56]. Consider the example of ALICIA T., the name given to the AI appointed to the leadership team of the Finnish software company Tieto (in 2017, in the course of the “Future Investment Initiative”, Saudi Arabia declared Sophia, a humanoid robot, a Saudi citizen; cf. https://www.washingtonpost.com/news/innovations/wp/2017/10/29/saudi-arabia-which-denies-women-equal-rights-makes-a-robot-a-citizen/ (accessed on 6 November 2022)). Additionally, in the USA, article 141 of the Delaware General Corporation Law, entitled “Board of directors; powers; number, qualifications, terms and quorum; committees; classes of directors; nonstock corporations; reliance upon books; action without meeting; removal”, admits the presence of AI in the management body when it prescribes that: “(a) The business and affairs of every corporation organised under this chapter shall be managed by or under the direction of a board of directors, except as may be otherwise provided in this chapter or in its certificate of incorporation […].”
In this technological future, some anticipate that in an environment dominated by technology, humans are unprepared to monitor the algorithms that learn on their own. Therefore, humans may not accept being administrators [54]. It is added that AI is more effective than humans in performing the governing body functions because algorithms better process information without conflicts of interest [54].
The governing body should be aware that algorithms are not necessarily impartial, as so-called “conflict coding” has been identified—the algorithm provider will design the algorithm so that the product is accepted by the governing body, and not necessarily in the interest of the cooperative and the cooperators. Finally, despite the theory (see Russell and Norvig [42]) according to which AI should, among other characteristics, provide transparency, algorithms do not guarantee transparency [54], especially those of the DL type.
Despite their better performance, DL algorithms should never be used in cooperatives, since they do not provide, for now, the rationale for their decisions. ADS that respect the “Interpretability” characteristic are the type of DSS that should be used in Cooperatives, since they are not black boxes. It should also be added that humans are better than AI at complex interactions with humans [54]. Algorithms do not have the capacity to reconcile divergent views [54]. Algorithms do not feel empathy. They also cannot have a conscience or integrate ethics into their decisions. These are all elements that enter into the decisions of cooperatives, since cooperative members believe in the ethical values of honesty, openness, social responsibility, and caring for others.
Nevertheless, the future is still human, particularly in cooperatives. Portuguese Cooperative Law prevents algorithms from fully or partially replacing human members in the bodies of cooperatives or companies and requires that the members of those bodies be cooperators or investor members. In either case, the members of the bodies must be natural or legal persons. Algorithms may never be part of cooperative bodies.
AI technology has several limitations that should not be ignored when deciding to incorporate it into the cooperative organisation (p. 40 [54]), namely:
  • data dependency—distortion of past data, learning from “bad examples”, insufficient data, the unpredictability of human behaviours;
  • the indispensability of human judgement;
  • conflicts with human ethical standards;
  • the incompleteness of the “legal environment” precludes a yes/no answer.
For these reasons, the decision on the adoption and incorporation of AI in the cooperative (even if only as an auxiliary decision-making tool, i.e., a DSS) requires the management body to perform the appropriate due diligence in order to comply with the duty of information, according to the standard of the judicious and orderly manager, which is incumbent on the members of the body responsible for the management and representation of the cooperative [57]. Moreover, the risk-based approach favoured by the EU Proposal for a Regulation on AI can be instrumental. This proposal acknowledges that “In addition to the many beneficial uses of artificial intelligence, such technology can be misused and grant new and powerful tools for manipulative, exploitative and social control practices. Such practices are harmful and should be prohibited as they are contrary to the EU values such as human dignity, freedom, equality, democracy, and the rule of law. They are also contrary to fundamental EU rights, including the right to non-discrimination, personal data and privacy protection, and the child’s rights” (Proposal for a Regulation of the European Parliament and of the Council establishing harmonised rules on artificial intelligence (Artificial Intelligence Regulation) and amending certain Union legislative acts (Brussels, 21.4.2021 COM (2021) 206 final 2021/0106(COD), recital 15).
Thus, “the use of certain AI systems designed to distort human behaviour, which are likely to cause physical or psychological harm, should be prohibited. Such AI systems use subliminal components that are not detectable by humans or exploit vulnerabilities of children and adults associated with their age and physical or mental disabilities. These systems intend to substantially distort a person’s behaviour in a way that causes or is likely to cause harm to that or another person” (Proposal for a Regulation of the European Parliament and of the Council establishing harmonized rules on artificial intelligence (Artificial Intelligence Regulation) and amending certain Union legislative acts (Brussels, 21.4.2021 COM (2021) 206 final 2021/0106(COD), recital 16).
The depth of the duty of information required from the management and representative body of the cooperative will depend on the technology adopted and the risks it poses to the cooperative values and principles. It seems crucial to us that the management and representative body of the cooperative takes care to know, by informing itself, whether the AI used is in line with the cooperative values and principles.

4.2. A Framework for the Use of Artificial Intelligence in Cooperatives

From our perspective, a machine helping to make decisions, with recommendations or the ability to make forecasts, might not be an issue in democratic organisations such as cooperatives since, ultimately, decisions are taken by the cooperative members through their participation in the cooperative bodies. Members nevertheless need to be aware of the issues that a decision taken with the help of a DSS might entail and of the danger that can come from it. The issue for us is the use of ADS, since this type of system makes decisions automatically by itself: without any human intervention, machines have the power to decide and to act on the decision. Most of the algorithms used in ADS rely on methods that cannot explain the rationale for the decision, which poses a severe problem of transparency and, ultimately, of accountability for the decisions made.
Table 2 presents a critical view based on our analysis; it is a framework for the use of AI for decision-making in Cooperatives.

5. Conclusions

A cooperative is a collectively owned enterprise democratically managed by the members. The democratic structure of cooperatives is manifested in the prominence of the general assembly, which is qualified as “the supreme organ of the cooperative”. Cooperative governance reflects its mutualistic nature by ensuring that members democratically control the cooperative and can actively participate in policy formulation and critical decision making based on the rule of “one member, one vote”. The democratic nature of cooperative governance is also based on the fact that the corporate bodies’ members must be cooperative members, which is a fundamental right of the members.
The democratic structure of cooperatives can be threatened by the intrinsic nature of some AI technologies.
The current Portuguese Cooperative Law prevents algorithms from being appointed members of cooperative bodies. The decision to incorporate AI tools in cooperative decision-making processes should be informed by the advantages and risks that such technologies present. Moreover, the risk-based approach favoured by the Proposed Regulation on AI can be advantageous to this end. Furthermore, the management body should consider whether the cooperative’s structure is appropriately equipped to receive, incorporate, and monitor the performance of AI tools. In certain situations, compliance with the duty of care will require a decision not to incorporate AI tools.
This paper proposes a framework for the use of AI by Cooperatives. However, many questions remain. Several AI systems jeopardise the democratic structure of cooperatives, since there is no transparency in their decision-making, and big data can be biased, which implies that decisions and actions taken from these data also contain bias.
AI algorithms are still not capable of having consciousness. Conscience is humans’ capacity to distinguish good from bad—if we are conscientious, we know which values we want to promote. Values are based on an ethical framework for good, and an ethical framework is based on fundamental principles common to all societies. Cooperatives are based on a system of values and must comply with “corporate social responsibility”. Values such as solidarity, equity, and caring for the community and the planet, among others, are intrinsic to Cooperatives. We wonder whether an algorithm, a system without consciousness, will ever be able to understand this human dimension, which is at the core of the functioning of organisations of the social and solidarity economy in general and of Cooperatives in particular. The possibility of implementing algorithms with consciousness is, as far as we understand, far away, since neuroscientists are still trying to understand how the human brain works.

Author Contributions

Conceptualisation, M.E.R., A.A., M.C.M. and D.M.; methodology, M.E.R., M.C.M., A.A. and D.M.; formal analysis, M.E.R., A.A., D.M. and M.C.M.; investigation, M.E.R., A.A., D.M. and M.C.M.; writing—original draft preparation, review and editing, M.E.R., A.A., D.M. and M.C.M.; funding acquisition, A.A. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed by Portuguese national funds through FCT—Fundação para a Ciência e Tecnologia, under the project UIDB/05422/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work is supported by Portuguese national funds through FCT—Fundação para a Ciência e Tecnologia, under the project UIDB/05422/2020. This work is supported by Portuguese national funds through FCT—Fundação para a Ciência e Tecnologia, under the project UID/SEC/00319/201.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fici, A. Cooperative identity and the law. Eur. Bus. Law Rev. 2013, 24, 37–64. [Google Scholar] [CrossRef]
  2. Hiez, D.; Fajardo-García, G.; Fici, A.; Henrÿ, H.; Hiez, D.; Meira, D.A.; Muenker, H.-H.; Snaith, I. Introduction. In Principles of European Cooperative Law: Principles, Commentaries and National Reports; Intersentia: Cambridge, UK, 2017; pp. 1–16. [Google Scholar]
  3. Curado Malta, M.; Azevedo, A.I.; Bernardino, S.; Azevedo, J.M. Digital Transformation in the Social Economy Organisations in Portugal: A preliminary study. In Proceedings of the 33rd International Conference of CIRIEC: New Global Dynamics in the Post-Covid Era: Challenges for the Public, Social and Cooperative Economy, Valencia, Spain, 13–15 June 2022. [Google Scholar]
  4. Meira, D.; Azevedo, A.; Castro, C.; Tomé, B.; Rodrigues, A.C.; Bernardino, S.; Martinho, A.L.; Curado Malta, M.; Sousa Pinto, A.; Coutinho, B.; et al. Portuguese social solidarity cooperatives between recovery and resilience in the context of COVID-19: Preliminary results of the COOPVID Project. CIRIEC-España Rev. Econ. Pública Soc. Coop. 2022, 104, 233–266. [Google Scholar] [CrossRef]
  5. Gagliardi, D.; Psarra, F.; Wintjes, R.; Trendafili, K.; Mendoza, J.P.; Turkeli, S.; Giotitsas, C.; Pazaitis, A.; Niglia, F. New Technologies and Digitisation: Opportunities and Challenges for the Social Economy and Social Enterprises. 2020. Available online: https://www.socialenterprisebsr.net/wp-content/uploads/2020/10/New-technologies-and-digitisation-opportunities-and-challenges-for-the-SE_ENG.pdf (accessed on 14 November 2022).
  6. Makridakis, S. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures 2017, 90, 46–60. [Google Scholar] [CrossRef]
  7. Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
  8. Sharma, M.; Luthra, S.; Joshi, S.; Kumar, A. Implementing challenges of artificial intelligence: Evidence from public manufacturing sector of an emerging economy. Gov. Inf. Q. 2022, 39, 101624. [Google Scholar] [CrossRef]
  9. Charles, V.; Rana, N.P.; Carter, L. Artificial Intelligence for data-driven decision-making and governance in public affairs. Gov. Inf. Q. 2022, 39, 101742. [Google Scholar] [CrossRef]
  10. Pietronudo, M.C.; Croidieu, G.; Schiavone, F. A solution looking for problems? A systematic literature review of the rationalizing influence of artificial intelligence on decision-making in innovation management. Technol. Forecast. Soc. Change 2022, 182, 121828. [Google Scholar] [CrossRef]
  11. Boustani, N.M. Artificial intelligence impact on banks clients and employees in an Asian developing country. J. Asia Bus. Stud. 2022, 16, 267–278. [Google Scholar] [CrossRef]
  12. McBride, R.; Dastan, A.; Mehrabinia, P. How AI affects the future relationship between corporate governance and financial markets: A note on impact capitalism. Manag. Financ. 2022, 48, 1240–1249. [Google Scholar] [CrossRef]
  13. Frankish, K.; Ramsey, W.M. The Cambridge Handbook of Artificial Intelligence; DiMatteo, L.A., Poncibò, C., Cannarsa, M., Eds.; Cambridge University Press: Cambridge, UK, 2022; ISBN 9781009072168. [Google Scholar]
  14. Noponen, N. Impact of Artificial Intelligence on Management. Electron. J. Bus. Ethics Organ. Stud. 2019, 24, 43–50. [Google Scholar]
  15. Wood, A.J.; Graham, M.; Lehdonvirta, V.; Hjorth, I. Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy. Work. Employ. Soc. 2019, 33, 56–75. [Google Scholar] [CrossRef]
  16. Köstler, L.; Ossewaarde, R. The making of AI society: AI futures frames in German political and media discourses. AI Soc. 2022, 37, 249–263. [Google Scholar] [CrossRef]
  17. Alshahrani, A.; Dennehy, D.; Mäntymäki, M. An attention-based view of AI assimilation in public sector organizations: The case of Saudi Arabia. Gov. Inf. Q. 2022, 39, 101617. [Google Scholar] [CrossRef]
  18. Ashok, M.; Madan, R.; Joha, A.; Sivarajah, U. Ethical framework for Artificial Intelligence and Digital technologies. Int. J. Inf. Manag. 2022, 62, 102433. [Google Scholar] [CrossRef]
  19. Buhmann, A.; Fieseler, C. Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Bus. Ethics Q. 2022, 1–34. [Google Scholar] [CrossRef]
  20. Ulnicane, I.; Knight, W.; Leach, T.; Stahl, B.C.; Wanjiku, W.-G. Governance of Artificial Intelligence. In The Global Politics of Artificial Intelligence; Chapman and Hall/CRC: Boca Raton, FL, USA, 2022; pp. 29–56. [Google Scholar]
  21. Chhillar, D.; Aguilera, R.V. An Eye for Artificial Intelligence: Insights Into the Governance of Artificial Intelligence and Vision for Future Research. Bus. Soc. 2022, 61, 1197–1241. [Google Scholar] [CrossRef]
  22. Schneider, J.; Abraham, R.; Meske, C.; Vom Brocke, J. Artificial Intelligence Governance For Businesses. Inf. Syst. Manag. 2022, 1–21. [Google Scholar] [CrossRef]
  23. Sigfrids, A.; Nieminen, M.; Leikas, J.; Pikkuaho, P. How Should Public Administrations Foster the Ethical Development and Use of Artificial Intelligence? A Review of Proposals for Developing Governance of AI. Front. Hum. Dyn. 2022, 4, 20. [Google Scholar] [CrossRef]
  24. Namorado, R. Os Princípios Cooperativos; Fora do Texto: Coimbra, Portugal, 1995. [Google Scholar]
  25. Meira, D. Projeções, conexões e instrumentos do princípio cooperativo da educação, formação e informação no ordenamento português. Boletín Asoc. Int. Derecho Coop. 2020, 57, 71–94. [Google Scholar] [CrossRef]
  26. Meira, D.; Ramos, M.E. Projeções do princípio da autonomia e da independência na legislação cooperativa portuguesa. Boletín Asoc. Int. Derecho Coop. 2019, 55, 135–170. [Google Scholar] [CrossRef]
  27. Meira, D. Cooperative Governance and Sustainability: An Analysis According to New Trends in European Cooperative Law. In Perspectives on Cooperative Law; Tadjudje, W., Douvitsa, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2022; pp. 223–230. [Google Scholar]
  28. Münkner, H.H. Chances of Co-Operatives in the Future: Contribution to the International Co-Operative Alliance Centennial 1895–1995; Papers and Reports; Marburg Consult für Selbsthilfeförderung: Marburg, Germany, 1995; ISBN 9783927489295. [Google Scholar]
  29. Henrÿ, H. Guidelines for Cooperative Legislation; International Labour Organization: Geneva, Switzerland, 2012; ISBN 9221172104. [Google Scholar]
  30. Meira, D. Portugal. In Principles of European Cooperative Law: Principles, Commentaries and National Reports; Fajardo-García, G., Fici, A., Henrÿ, H., Hiez, D., Meira, D.A., Münkner, H.-H., Snaith, I., Eds.; Intersentia: Cambridge, UK, 2017; pp. 409–516. [Google Scholar]
  31. Meira, D. A relevância do cooperador na governação das cooperativas. Coop. Econ. Soc. 2013, 35, 9–35. [Google Scholar]
  32. Fajardo, G.; Fici, A.; Henrÿ, H.; Hiez, D.; Meira, D.; Münkner, H.-H.; Snaith, I. Principles of European Cooperative Law; Intersentia: Cambridge, UK, 2017; pp. 409–516. [Google Scholar]
  33. Yoo, Y.; Henfridsson, O.; Lyytinen, K. Research commentary—The new organizing logic of digital innovation: An agenda for information systems research. Inf. Syst. Res. 2010, 21, 724–735. [Google Scholar] [CrossRef]
  34. Laudon, K.C.; Traver, C.G. E-Commerce 2020–2021: Business, Technology and Society, eBook; Pearson Higher Education: London, UK, 2020. [Google Scholar]
  35. Scholz, T. Platform Cooperativism. Challenging the Corporate Sharing Economy; Rosa Luxemburg Stiftung: New York, NY, USA, 2016. [Google Scholar]
  36. Sharda, R.; Delen, D.; Turban, E. Business Intelligence and Analytics: Systems for Decision Support; Pearson Higher Education: London, UK, 2014; ISBN 9781292009209. [Google Scholar]
  37. Power, D.J. A Brief History of Decision Support Systems—Version 4.0. Available online: http://dssresources.com/history/dsshistory.html (accessed on 1 November 2022).
  38. Arnott, D.; Pervan, G. Eight key issues for the decision support systems discipline. Decis. Support Syst. 2008, 44, 657–672. [Google Scholar] [CrossRef]
  39. Nemati, H.R.; Steiger, D.M.; Iyer, L.S.; Herschel, R.T. Knowledge warehouse: An architectural integration of knowledge management, decision support, artificial intelligence and data warehousing. Decis. Support Syst. 2002, 33, 143–161. [Google Scholar] [CrossRef]
  40. Shim, J.P.; Warkentin, M.; Courtney, J.F.; Power, D.J.; Sharda, R.; Carlsson, C. Past, present, and future of decision support technology. Decis. Support Syst. 2002, 33, 111–126. [Google Scholar] [CrossRef]
  41. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, LIX, 433–460. [Google Scholar] [CrossRef]
  42. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: London, UK, 2021; ISBN 9781292401133. [Google Scholar]
  43. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; MIT Press: Cambridge, MA, USA, 2009; ISBN 9780262033848. [Google Scholar]
  44. Witten, I.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques; Elsevier: Amsterdam, The Netherlands, 2011; ISBN 9780123748560. [Google Scholar]
  45. Sharda, R.; Delen, D.; Turban, E. Business Intelligence, Analytics, and Data Science: A Managerial Perspective, 4th ed.; Pearson Education: Upper Saddle River, NJ, USA, 2016. [Google Scholar]
  46. Scopino, G. Key Concepts: Algorithms, Artificial Intelligence, and More. In Algo Bots and the Law: Technology, Automation, and the Regulation of Futures and Other Derivatives; Cambridge University Press: Cambridge, UK, 2020; pp. 13–47. [Google Scholar]
  47. Eder, N. Privacy, Non-Discrimination and Equal Treatment: Developing a Fundamental Rights Response to Behavioural Profiling. In Algorithmic Governance and Governance of Algorithms; Ebers, M., Cantero Gamito, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 23–48. [Google Scholar]
  48. Junqueira, T. Tratamento de Dados Pessoais e Discriminação Algorítmica; Thomson Reuters Revista dos Tribunais: São Paulo, Brazil, 2020; ISBN 978-6550654351. [Google Scholar]
  49. Dignum, V. Ethical and Social Issues in Artificial Intelligence. In An Introductory Guide to Artificial Intelligence for Legal Professionals; González-Espejo, M.J., Pavón, J., Eds.; Wolters Kluwer: Alphen aan den Rijn, The Netherlands, 2020; ISBN 9789403509433. [Google Scholar]
  50. Maia, P. Intelligent Compliance. In Artificial Intelligence in the Economic Sector: Prevention and Responsibility; Antunes, M.J., de Sousa, S., Eds.; Instituto Jurídico: Coimbra, Portugal, 2021; pp. 3–50. [Google Scholar]
  51. Martins, A. Algo trading. In Artificial Intelligence in the Economic Sector: Prevention and Responsibility; Antunes, M.J., de Sousa, S., Eds.; Instituto Jurídico: Coimbra, Portugal, 2021; pp. 51–84. [Google Scholar]
  52. Costa, R. Artigo 64.º—Deveres fundamentais. In Código das Sociedades Comerciais em comentário; de Abreu, J.M., Ed.; Almedina: Coimbra, Portugal, 2017; Volume 1, pp. 757–800. [Google Scholar]
  53. Costa, R. Artigo 46.º—Deveres dos titulares do órgão de administração. In Código Cooperativo anotado; Meira, D., Ramos, M.E., Eds.; Almedina: Coimbra, Portugal, 2018; pp. 257–274. [Google Scholar]
  54. Enriques, L.; Zetzsche, D.A. Corporate technologies and the tech nirvana fallacy. Hast. LJ 2020, 72, 55. [Google Scholar] [CrossRef]
  55. Ramos, M.E. O Seguro de Responsabilidade Civil dos Administradores: Entre a Exposição ao Risco e a Delimitação da Cobertura; Almedina: Coimbra, Portugal, 2010. [Google Scholar]
  56. Montagnani, M.L.; Passador, M.L. Il consiglio di amministrazione nell'era dell'intelligenza artificiale: Tra corporate reporting, composizione e responsabilità. Riv. Soc. 2021, 66, 121–151. [Google Scholar]
  57. Ramos, M.E. Direito das Sociedades; Almedina: Coimbra, Portugal, 2022; ISBN 9789894000785. [Google Scholar]
Figure 1. The hierarchical and tripartite structure of the governing bodies of the Cooperative.
Figure 2. The Democratic Governance of the Cooperatives.
Figure 3. The four phases of the decision-making process, as presented by Simon (Source: [36]).
Table 1. Pros and Cons of AI technologies.

1943–1956 (the inception of AI)
Technologies and tools:
  • Artificial neurons set up elementary neural networks with logic connectors
  • Roots for machine learning, genetic algorithms, and reinforcement learning
Pros:
  • Proof of some mathematical theorems, creating the possibility of machines doing things until then reserved for humans
Cons:
  • Doubts about the possibility of a machine being “intelligent”, due to the technological limitations of the time

1952–1969 (early enthusiasm, great expectations)
Technologies and tools:
  • General-purpose problem-solving mechanisms
  • Foundations of reinforcement learning
  • The Lisp programming language
  • Systems based on learning and reasoning
  • Microworlds
  • Adalines and Perceptrons
Pros:
  • Demonstration of complex mathematical theorems
  • First applications to real-life situations (deriving plans of action)
Cons:
  • Heavy computational time needed
  • Applications only to simple problems

1966–1973 (a dose of reality)
Technologies and tools:
  • Continued development of general-purpose search mechanisms applied to more complex systems
Pros:
  • Reflection on better ways of implementing AI
Cons:
  • Failure to scale up to larger and more complex problems

1969–1986 (expert systems)
Technologies and tools:
  • Knowledge-intensive systems applied to specific domains
  • Heuristics
  • The PROLOG programming language
  • Robots
Pros:
  • Applications to real-world problems in chemistry, medicine, and language understanding, among other areas
Cons:
  • Failures in more complex domains
  • Systems could not learn from experience

1986–present (the return of neural networks)
Technologies and tools:
  • The reinvention of back-propagation learning algorithms (connectionist models)
Pros:
  • Learning from examples
Cons:
  • Superintelligent AI systems that may evolve in unpredictable ways
  • A wide range of risks, such as lethal autonomous weapons, biased decision-making, and impact on employment
  • Many ethical consequences, such as privacy and security issues, societal bias, lack of transparency, and robot rights

1987–present (probabilistic reasoning and ML)
Technologies and tools:
  • Probability, machine learning, experimental results
  • Shared benchmark dataset repositories
  • Hidden Markov Models
Pros:
  • Expanded deployment of practical robots
  • Better theoretical understanding of the core problems of AI

2001–present (big data)
Technologies and tools:
  • Enormous datasets (big data)
Pros:
  • Increased learning accuracy of the algorithms due to the large datasets available for training
  • Successful applications and advances in language and image processing

2011–present (deep learning)
Technologies and tools:
  • Convolutional neural networks
  • Autoencoders
  • Stochastic deep networks
  • Recurrent neural networks
Pros:
  • Systems exceeded human performance in several areas, such as vision tasks, translation, and medical diagnosis
  • Resurgence of interest in AI

Sources: [42,44,45].
Table 2. A framework for the use of AI for decision-making in Cooperatives.

Type of system: DSS
Use? Yes
Comments: The cooperative should create technical committees to advise on how the system works and on how to choose the appropriate DSS. These committees should support the governing bodies in the decisions taken with these systems, keep the system fit for purpose, and identify and control possible biases in the data.

Type of system: ADS, black boxes (e.g., DL algorithms)
Use? No
Comments: The steps that lead to a decision cannot be traced. This raises a problem of transparency, an essential value in cooperatives (and in any democratic organisation).

Type of system: ADS, white boxes (e.g., tree or rule models)
Use? Yes, with conditions
Comments: In general, the use of ADS systems is inadvisable. Nevertheless, white-box systems may be considered if they respect the principle of interpretability, under the following conditions: each automated decision is accompanied by the rules that led to it; as with DSS systems, a technical committee permanently monitors the decisions taken automatically; the governing bodies are informed of these decisions through reports that ensure full transparency; and possible biases in the data are identified and controlled.
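To make the “white box” row of Table 2 concrete, the sketch below (in Python, using scikit-learn purely for illustration; the dataset, feature names, thresholds, and the decision at stake are hypothetical, not taken from any cooperative) shows an interpretable ADS that returns each automated decision together with the rule path that produced it, the kind of trail the proposed technical committee could include in its reports to the governing bodies.

```python
# A minimal, hypothetical sketch of a "white-box" ADS in the sense of Table 2:
# every automated decision is returned together with the explicit rule path
# that produced it, so it can be reported to the technical committee and the
# governing bodies. The data, feature names, and thresholds are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["years_of_membership", "participation_rate"]

# Toy records of past applications and the decisions the board actually took
# (0 = refer to the board, 1 = approve automatically).
X_train = np.array([[1, 0.2], [2, 0.9], [5, 0.8], [7, 0.1], [4, 0.6], [0, 0.3]])
y_train = np.array([0, 1, 1, 0, 1, 0])

# A shallow tree keeps the model interpretable (a "white box").
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

def decide_with_rules(sample):
    """Return the automated decision and the rules that led to it."""
    tree = model.tree_
    path = model.decision_path(sample.reshape(1, -1))
    rules = []
    for node_id in path.indices:
        if tree.children_left[node_id] == tree.children_right[node_id]:
            continue  # leaf node: no test is applied here
        feature = tree.feature[node_id]
        threshold = tree.threshold[node_id]
        op = "<=" if sample[feature] <= threshold else ">"
        rules.append(f"{feature_names[feature]} {op} {threshold:.2f}")
    decision = int(model.predict(sample.reshape(1, -1))[0])
    return decision, rules

decision, rules = decide_with_rules(np.array([3.0, 0.7]))
print("Decision:", "approve" if decision == 1 else "refer to the board")
print("Rules applied:", "; ".join(rules))  # goes into the transparency report
```

A deep-learning model would not expose such a rule trail, which is precisely why the framework classifies it as a black box and advises against its use for automated decision-making in cooperatives.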