**1. Introduction**

Distributed computing is a field of computer science that studies distributed systems (DS), whose components are located on different networked computers, spread across different geographical locations, and communicate by exchanging messages in order to achieve a common goal [1]. In this setting, and in the absence of a global clock, independent components may fail at any time, so the system must tolerate the failure of individual computers. Each computer has only a limited and incomplete view of the system. Various hardware and software architectures are used for DS. In peer-to-peer (P2P) architectures, there are no special machines that provide services or manage network resources; instead, the responsibilities are evenly distributed among all machines, called peers, which can act as both clients and servers. In the context of DS, the Byzantine Generals Problem (BGP) is a distributed computational problem that was formalized by Leslie Lamport, Robert Shostak and Marshall Pease in 1982 [2]. The BGP is a metaphor for questioning the reliability of transmissions and the integrity of the interlocutors: a set of elements working together must manage possible failures among them, and these failures consist in the presentation of erroneous or inconsistent information.

The management of these faulty components is also called fault tolerance, which leads us to distinguish between synchronous and asynchronous systems. In a synchronous environment, consensus cannot be reached if a third or more of the processes are maliciously faulty [2], and Fischer, Lynch and Paterson (FLP) [3] showed that in an asynchronous system, even a single faulty process makes it impossible to guarantee that consensus will be reached (the protocol does not always terminate). The FLP result states that consensus will not always be reached, not that it can never be reached. This study concerns asynchronous systems, where all processors operate at unpredictable speeds [4]. Theoretically, consensus cannot always be achieved in an asynchronous setting; despite this result, it is possible to obtain satisfactory results in practice, for instance with the non-perfect Paxos algorithm of Lamport [5] (in the context of crash failures and no Byzantine faults). Lamport, Shostak and Pease showed in [2], via the Byzantine Generals Problem, that if *f* is the number of faulty processes, at least 3*f* + 1 processes in total are required for consensus to be obtained (e.g., tolerating a single Byzantine fault requires at least four processes). Under these conditions, the Proof of Work (PoW) technique ensures consensus reliably, but its major problem is its enormous energy consumption.

In this paper, we propose a consensus protocol for an asynchronous environment. It minimizes energy consumption while ensuring synchronization by applying several rounds of consensus, where in each round the nodes perform a Proof-Wait, i.e., produce a PoW and then wait.

The remainder of this paper is organised as follows: Section 2 discusses the consensus problem. Section 3 provides an overview of consensus protocols. Section 4 presents our proposed protocol. Section 5 demonstrates the validity of our protocol. Section 6 describes the Compute and Wait implementation. Section 7 discusses the hardness of the proposed cryptosystem. Finally, Section 8 concludes the paper.

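As a first intuition, and before the formal presentation of the protocol in Section 4, the following minimal Python sketch illustrates the Proof-Wait idea: in each round, a node solves a standard hash-based PoW puzzle and then enters a waiting phase during which no hashing is performed. The difficulty, waiting time, number of rounds, and function names used here are illustrative assumptions only and do not correspond to the parameters or implementation of the actual protocol described in Sections 4 and 6.

```python
import hashlib
import time

# Illustrative parameters (assumptions, not the protocol's actual tuning).
DIFFICULTY = 16      # required number of leading zero bits in the hash
WAIT_SECONDS = 2.0   # length of the waiting phase after each proof
ROUNDS = 3           # number of consensus rounds

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Find a nonce such that SHA-256(data || nonce) has `difficulty`
    leading zero bits; this is the classic PoW hash puzzle."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def proof_wait_rounds(data: bytes) -> list:
    """One node's view of a Proof-Wait schedule: in each round, produce a
    proof, then wait before moving to the next round; in a real protocol,
    the waiting phase is when the proofs of other nodes would be collected."""
    proofs = []
    for r in range(ROUNDS):
        proofs.append(proof_of_work(data + bytes([r]), DIFFICULTY))
        time.sleep(WAIT_SECONDS)  # waiting phase: no hashing is performed
    return proofs

if __name__ == "__main__":
    print(proof_wait_rounds(b"example block"))
```

The intuition is that, during the waiting phase, a node performs no hashing, so the computational effort spent per round is bounded rather than continuous.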