#### 2.2.2. Energy Cost

The energy cost of the system depends mainly on processor usage. In the deployed system, the most computationally expensive stage is performed in the cloud, so three working states can be defined for the sensors: the "idle" mode, in which the sensor waits for work orders; the "capture" mode, in which the sensor accesses the camera and saves the image to local storage; and the "networking" mode, which optimizes the image with the defined sub-regions and sends it to the smart management layer.

Table 1 summarizes the power consumption of the five different versions of Raspberry Pi. The Zero W version was chosen because it provides wireless connectivity (not available on the Zero) and because of its very low power consumption (0.6 Wh in idle mode, 1 Wh in capture mode, and 1.19 Wh in networking mode). In this way, a small 10 W solar panel could be enough to provide the energy required by the sensor.
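As a rough sanity check, the figures above can be turned into a back-of-envelope daily energy budget. This is only an illustrative sketch: the Wh figures from the text are treated as average hourly energy draw per mode, and the duty-cycle split and peak sun-hours below are assumptions, not measured values.

```python
# Back-of-envelope solar sizing sketch; all duty-cycle and sun-hour
# values are illustrative assumptions, not measurements.

IDLE_WH, CAPTURE_WH, NETWORK_WH = 0.6, 1.0, 1.19  # per-mode draw from the text

# Assumed split of a 24 h day between the three working states.
hours = {"idle": 16, "capture": 4, "networking": 4}

daily_need_wh = (hours["idle"] * IDLE_WH
                 + hours["capture"] * CAPTURE_WH
                 + hours["networking"] * NETWORK_WH)

# Assumed 4 peak sun-hours per day for the 10 W panel suggested in the text.
panel_daily_wh = 10 * 4

print(f"daily need: {daily_need_wh:.2f} Wh, panel supply: {panel_daily_wh} Wh")
```

Even under this pessimistic split, the estimated daily demand (about 18 Wh) stays well below what a 10 W panel can deliver, which is consistent with the claim in the text.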


**Table 1.** Power consumption comparison in mAh of different versions of Raspberry Pi.

#### 2.2.3. Maintenance, Installation, and Operability

The use of a general-purpose processor, such as the Broadcom BCM2835, facilitates rapid prototyping, as well as the integration of existing software modules. In particular, this approach makes the integration of the functionality offered to the smart management layer straightforward.

On the other hand, maintenance costs and the impact of adding new functionality are minimized by using a cloud-based approach in which each sensor is configured through specific parameters. A unique identifier and a server address are specified for each sensor. From the server, the sensor receives a JSON message with the parameters to be used in each analysis experiment. With this per-sensor configuration package, it is possible to adjust the capture configuration of each sensor in the network based on its position, the weather conditions, or the lighting level at each time of day. For example, a sensor that is better positioned to identify license plates can take lower-resolution captures (saving processing costs) than a sensor located further away from the traffic. The same sensor may even need to take higher-resolution captures in adverse weather situations, such as rain or fog.

The JSON message always has the same format:

```
{
    "begTime": "2020-06-10T09:00:00",
    "endTime": "2020-06-10T11:00:00",
    "resolution": "1024x720",
    "mode": "manual",
    "exposure_time": 1000,
    "freq_capture": 1000,
    "iso": 320,
    "rectangle_p1": [
        280,
        262
    ],
    "rectangle_p2": [
        1024,
        574
    ]
}
```
The fields begTime and endTime indicate the date and time of the start and the end of the capture session. The field resolution indicates the capture resolution of the sensor, with values supported by the hardware up to a maximum of 3280 × 2464 pixels. If the field mode is set to manual, it is possible to indicate the shutter speed or exposure time, which defines the amount of light that enters the camera sensor. The parameter exposure\_time defines the fraction of a second (in the form 1/exposure seconds) that the light is allowed to pass through. The field freq\_capture indicates the number of milliseconds that will pass between each capture. The field iso defines the sensitivity of the sensor to light (low values for captures with a good light level). Finally, the fields that begin with the keyword rectangle allow us to define capture sub-regions within an image. The upper-left and lower-right corners define the valid capture rectangle within the image. The rest of the pixels are removed from the image, facilitating the transmission of the image through the network and avoiding storage and processing costs in regions where plate numbers will never appear (see Figure 5).
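The cropping step described above can be sketched as follows. This is only an illustrative snippet, not the deployed sensor code: the image is represented as a plain row-major pixel matrix, and only the resolution and rectangle fields of the configuration message are used.

```python
import json

# Illustrative sketch: apply the rectangle_p1 / rectangle_p2 parameters
# from the configuration message to a frame held as a pixel matrix.
config = json.loads("""
{
    "resolution": "1024x720",
    "rectangle_p1": [280, 262],
    "rectangle_p2": [1024, 574]
}
""")

width, height = map(int, config["resolution"].split("x"))
x1, y1 = config["rectangle_p1"]   # upper-left corner
x2, y2 = config["rectangle_p2"]   # lower-right corner

# Dummy frame: `height` rows of `width` pixels.
frame = [[0] * width for _ in range(height)]

# Keep only the valid capture rectangle; all other pixels are discarded
# before the frame is stored and transmitted.
cropped = [row[x1:x2] for row in frame[y1:y2]]

print(len(cropped[0]), len(cropped))  # prints "744 312"
```

With the example parameters, only a 744 × 312 sub-region of the original 1024 × 720 frame survives, which is what reduces the storage and transmission cost per frame.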

**Figure 5.** Example of definition of clipping parameters in capture sub-regions in three sensors of the deployed system, with comparative analysis of storage size for each frame. (To protect personal data, the first three digits of the license plate have been blurred).

By using the parameters that define sub-regions in the captured images, their size can be drastically reduced. Any 3G connection is then more than enough to cover the bandwidth requirements of each processing sensor, without any loss of image quality. Even under more adverse transmission conditions (such as Enhanced Data rates for GSM Evolution (EDGE) or General Packet Radio Service (GPRS) coverage, with maximum speeds between 114 and 384 Kbps), the frame could be stored using a higher level of JPG compression without significant loss of image quality (up to a level of 65 would be acceptable), and therefore without putting at risk the identification of the license plate (see Figure 6).
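The bandwidth claim can be illustrated with a simple transmission-time estimate. The 114 and 384 Kbps figures are the GPRS/EDGE bounds mentioned in the text; the 60 KB frame size is an assumed, illustrative value for a cropped and compressed frame, not a measurement from the deployed system.

```python
# Rough per-frame transmission time over slow cellular links.
# frame_bytes is an assumption for illustration only.
frame_bytes = 60 * 1024  # assumed size of a cropped, JPG-compressed frame

for name, kbps in (("GPRS", 114), ("EDGE", 384)):
    seconds = frame_bytes * 8 / (kbps * 1000)
    print(f"{name} ({kbps} Kbps): {seconds:.1f} s per frame")
```

Under these assumptions, a frame takes on the order of one to a few seconds to send even on the slowest link, which is compatible with the capture frequencies configured in the JSON message.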

**Figure 6.** Different compression levels of the JPG standard. With values below 60, with significant loss of high frequency information, the image quality significantly compromises the success rate of the license plate detection algorithms. (To protect personal data, the first three digits of the license plate have been blurred).
