Article

Design and Development of a High-Accuracy IoT System for Real-Time Load and Space Monitoring in Shipping Containers

1 Department of Electronic Engineering, Telecommunications and Computers (DEETC), Instituto Superior de Engenharia de Lisboa (ISEL), 1959-007 Lisbon, Portugal
2 Technologies and Engineering School (EET), Instituto Politécnico da Lusofonia (IPLuso), 1700-098 Lisbon, Portugal
3 UNINOVA-CTS, NOVA University of Lisbon, Campus de Caparica, 2829-516 Monte de Caparica, Portugal
* Authors to whom correspondence should be addressed.
Designs 2025, 9(2), 43; https://doi.org/10.3390/designs9020043
Submission received: 6 March 2025 / Revised: 19 March 2025 / Accepted: 25 March 2025 / Published: 1 April 2025

Abstract

In a scenario of notably high fuel costs and policies that increasingly restrict the fossil fuels powering most heavy goods transport, optimizing the space in the vehicles that carry these goods, such as trucks and shipping containers, becomes an indisputable and urgent need. This urgency stems from the need to minimize transport costs, which continue to grow. This work studies and implements an Internet of Things (IoT)-based solution to this problem. The developed system comprises a computer and a millimeter-wave (mmWave) sensor. The computer processes the data captured by the sensor with code written in Python and displays, through a web page hosted in a cloud/server, the volume occupied by the load, as well as the percentage of occupied and free space, given the total volume provided by the user. The validation tests consisted of checking the results in 2D and 3D, all carried out in a controlled environment focused on the detection of static objects. For the 3D analysis, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was used to obtain the points from which the volume of the detected object is extracted. Several objects with different dimensions were used, and the error ranged from 0.6% to 7.61%. These results confirm the reliability and efficacy of the presented solution. It was therefore concluded that this new solution has significant potential to enter the market and compete with other existing technologies.

1. Introduction

In recent years, technology has been developing ever faster, and electronic devices increasingly need to be in constant communication with each other. This trend became known as the Internet of Things (IoT), which has grown steadily in importance and is now applied across virtually every domain of technology.
The optimization of space in freight transport vehicles, such as trucks and shipping containers, is urgent, as these operations are becoming increasingly costly. Therefore, maximizing load efficiency is essential, as each bit of utilized space can generate significant economic benefits. To achieve this goal using an IoT-based solution, it is necessary to use components that allow the detection and location of objects, such as sensors.
Since partnerships and external support are highly advantageous for the execution of an experiment, this work benefits from the support of the company Sensefinity [1], which specializes in IoT solutions aimed at transforming logistics and transport operations through the connectivity of smart devices and real-time data analysis. Sensefinity’s mission is to empower companies to operate more efficiently and sustainably through IoT.
IoT is the interconnection of physical objects (such as devices, vehicles, and buildings) integrated with electronics, software, and sensors. This allows everyday objects connected to the Internet to become interconnected devices, which, when networked, enable the collection and exchange of data [2].
In recent years, IoT has emerged as one of the most innovative technologies in our daily lives and is considered a mechanism that revolutionizes how technology can be applied. The exponential growth of IoT usage has transformed the way we live and work, leading to developments such as industrial automation and smart cities. The implementation of this technology is now visible in most sectors, and it is expected to continue growing steadily [3]. Currently, the number of IoT devices connected across various applications is 17.08 billion, with projections estimating that this number will reach 29.42 billion by 2030. This will result in IoT devices representing about 75% of all Internet-connected devices [4].
Industrial IoT (IIoT) refers to smart devices used by businesses to create operational efficiency. These industrial smart devices, ranging from sensors to equipment, provide business owners with detailed, real-time data that can then be used to improve business processes. With the advancement of IoT and IIoT, more sensors and devices are being used to enhance business activities, such as improving logistics and distribution, optimizing machine performance, increasing efficiency and productivity, reducing asset costs, and delivering many other benefits [5,6].
According to a study conducted by the consulting firm Grand View Research, the global IIoT market is expected to reach USD 1693.30 billion, expanding at a rate of 23.2% from 2024 to 2030 [7].
The Industry IoT Consortium (IIC), a group of over 200 companies focused on advancing the adoption of IIoT, identifies one of the most prevalent uses of this technology as the monitoring of cargo, goods, and transportation [8].
The experiment proposed here relates to the IIoT application mentioned above. Industrial sensors, which can collect and send data for processing, can be used to optimize cargo transport by enhancing current solutions for load arrangement in containers. This optimization reduces the number of trips needed, as maximizing the placement of cargo for transport makes the most of each container’s space. This, in turn, helps lower costs, reduce fuel consumption, and decrease the gas emissions produced on each trip.
This study aims to address the following research questions: can IoT-based systems provide a more efficient approach to cargo optimization compared to existing solutions that use other approaches? Do mmWave-based sensors perform better than other sensors available on the market? What are the main challenges in implementing IoT-based cargo optimization solutions in real-world logistics operations?
By investigating these questions, this research seeks to contribute to the development of more efficient and sustainable logistics solutions through IoT and mmWave sensor technologies.
The validation tests were conducted to verify the accuracy and reliability of the results obtained in both 2D and 3D. All the tests were carried out in a controlled environment, prepared so that only static objects were detected, thereby minimizing external interference and ensuring the reproducibility of the experiments. In the two-dimensional (2D) analysis, the projections of the detections onto a reference plane were evaluated, allowing an initial check of the spatial consistency of the identified objects. This process includes analyzing the distribution of the points, defining the contours of the objects, and comparing them with the real dimensions known beforehand. For the three-dimensional (3D) analysis, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was used. This algorithm grouped the points of interest and removed outliers so that the segmentation corresponding to the detected objects became sharper. Through this processing, it was possible to isolate the representative points of each object and thus extract information concerning its volume.
The application of DBSCAN proved particularly effective in separating the real objects from the noise inherent to the data acquisition, so that only valid detections were considered for the volumetric estimates. This three-dimensional analysis provided more refined and trustworthy measurements and allowed a more thorough characterization of the objects in a controlled environment.
This article begins by presenting background theory about the detection and localization of objects and the current state of mmWave usage in Section 2. Section 3 outlines the materials and methods of our experiment, and in Section 4 the results we achieved are presented. Section 5 is the conclusion and includes some remarks about future work.

2. Background and State of the Art

This section presents the background theory that forms the foundation of this manuscript, with a particular focus on millimeter-wave (mmWave) technology. First, we look at the working principle of detection and localization of objects and Section 2.2 then presents the current state-of-the-art of mmWave applications.

2.1. Background of Object Detection Technologies

This section presents various sensing devices that can be used for object detection. The discussion will cover several possibilities, including ultrasonic sensors, infrared sensors, laser (LiDAR) sensors, and millimeter-wave (mmWave) sensors.
Ultrasonic sensors determine the distance to objects by measuring the time it takes for an ultrasonic pulse to travel to the object and return, converting the reflected sound wave (at ultrasonic frequencies) into an electrical signal. While some sensors of this nature use separate ultrasonic emitters and receivers, it is also possible to combine both functions into a single device, using one ultrasonic element to alternate between sending and receiving signals in a continuous cycle. These types of sensors can measure distances to objects up to 4.5 m, making them a versatile tool for accurately measuring both short and long distances without the need for contact with the object [9]. Moreover, their cost-effectiveness compared to other object detection and distance measurement technologies makes them especially suitable for scenarios with budget limitations. These sensors, as the name suggests, operate at frequencies above the range audible to humans, known as ultrasonic frequencies, which are above 20 kHz. This type of technology is capable of safely detecting transparent or shiny objects, as well as objects whose color may change. Ultrasonic sensors offer several advantages, including the following:
  • They do not rely on visible light like other types of technology, allowing them to be used in both indoor and outdoor environments without concerns about environmental conditions affecting performance levels;
  • They have self-diagnostic capabilities, providing users with quick access to the collected information [9].
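To make the time-of-flight principle behind these sensors concrete, the short sketch below converts a measured echo delay into a distance; the 343 m/s speed of sound (air at roughly 20 °C) and the 10 ms example delay are illustrative assumptions, not values taken from this work.
```python
# Ultrasonic ranging: distance from the echo round-trip time (time of flight).
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius (assumed)

def ultrasonic_distance(echo_time_s: float) -> float:
    """Return the sensor-to-object distance in meters.

    The pulse travels to the object and back, so the one-way distance
    is half of the total path covered during echo_time_s.
    """
    return SPEED_OF_SOUND * echo_time_s / 2.0

# Example: an echo received 10 ms after emission corresponds to about 1.7 m.
print(f"{ultrasonic_distance(0.010):.2f} m")
```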
Infrared (IR) sensors detect objects by measuring changes in the infrared radiation emitted by the heat of those objects. This type of technology is sensitive to radiation in the infrared wavelength range of 780 nm to 50 µm. These sensors are mass-produced and low-cost since they need to meet relatively low requirements [10].
The detected infrared energy is converted into an electrical signal by a detector, which can then be interpreted or measured [11,12]. Despite these advantages, especially in the fields of security and protection, this type of technology presents a set of limitations and challenges that can affect its effectiveness and accuracy. Environmental conditions are one such challenge, since temperature changes may trigger false alarms and high humidity can absorb infrared radiation. Physical obstructions are another concern: if there is a barrier between the sensor and the object, the sensor may not be able to detect its presence, which necessitates careful and appropriate positioning. There are also limitations regarding distance and sensitivity, along with susceptibility to interference from other sources of infrared radiation. It is also important to note that there is a waiting time before a state change is recognized by the sensor, which can be inconvenient.
Nevertheless, the importance of IR sensors lies in their versatility, reliability, and precision, which make them the ideal choice for various security applications, from basic motion detection to more sophisticated security camera systems [12]. These types of sensors have been used in research conducted by the AutoIF Laboratory in Japan, which reported an object measurement accuracy of 98.3% [13].
Due to the development of advanced embedded technology, onboard sensors like LiDAR and cameras have gradually become standard configurations for object tracking [14].
A laser sensor is an electrical device that uses a focused light beam to detect an object’s presence, absence, or distance. Like a laser pointer, it emits light that is easily visible, even in sunlight, making configuration and troubleshooting easier.
In this type of sensor, light travels from the sensor to the object and is then reflected back to the sensor. The sensor then calculates how long the light took to travel from the sensor to the object and back, and uses this time to determine the distance to the object [14]. This operation is illustrated in Figure 1.
This type of sensor offers several advantages, including its effectiveness in conditions where airborne particles hinder visibility, making it superior to other sensor types. The light it emits is not affected by other light sources; therefore, laser sensors can even be used in sunlight. Due to the small size of the light spot produced by the sensor, it is highly effective in detecting small objects. These sensors can also be used when precise positioning is required, and they offer a very long range [14].
It is also important to note that by using a light beam that travels more directly than LEDs, laser sensors produce almost no scattered light and, consequently, generate fewer false detections, even when installed in narrow spaces [15].
However, LiDAR is too expensive for home use, and researchers have shown throughout the development process that haze can be a significant issue for LiDAR sensors. In addition, depth cameras have only a limited tracking range and accuracy while requiring a clear view and the right lighting conditions. Additionally, a significant drawback of camera systems is their intrusive nature, which raises privacy concerns [16].
Millimeter wave is a special class of radar technology that uses short wavelength electromagnetic waves. Radar systems transmit electromagnetic wave signals that objects in their path then reflect. By capturing the reflected signal, a radar system can determine the range, velocity, and angle of the objects.
The wavelength in the mmWave range is considered short in the electromagnetic spectrum, and this is one of the advantages of this technology, as the size of system components, such as the antennas required to process mmWave signals, is reduced. Another advantage of short wavelengths is the high accuracy. An mmWave system operating at 76–81 GHz (with a corresponding wavelength of about 4 mm) will be able to detect movements that are as small as a fraction of a millimeter.
The exploration of mmWave technology in sensing systems relies on a detailed understanding of how millimeter waves interact with the environment and the specific electronic components used to process them. In practical terms, Texas Instruments (TI) [17] has developed a range of devices and platforms that enable the efficient application and exploration of millimeter waves. Among these, the family of millimeter wave radars, such as the Industrial Wave Radar (IWR) and Automotive Wave Radar (AWR) series, stands out. A complete mmWave radar system includes transmit (TX) and receive (RX) radio frequency (RF) components; analog components such as clocking; and digital components such as analog-to-digital converters (ADCs), microcontrollers (MCUs), and digital signal processors (DSPs). Traditionally, these systems were implemented with discrete components, which increased power consumption and overall system cost [17].
The millimeter wave architecture is shown in Figure 2. In the next paragraphs, a description of its operation in relation to physical principles is provided, starting with the RF signal generator (1) and then addressing beamforming and steering (2), active sensing (3), and digital signal processing (4):
1. RF Signal Generator: mmWave technology employs signal generators that operate at extremely high frequencies. TI leverages integrated Phase-Locked Loop (PLL) synthesizers to generate precise RF signals within the 76 GHz to 81 GHz range. These signals serve as the foundation for transmission and reception in both communication and sensing applications [18].
2. Beamforming and Steering: Using phased-array antenna systems, millimeter waves can be manipulated to form directional beams, enabling beamforming, which focuses the signal on a specific target, and beamsteering, which changes the beam’s direction without physically moving the antenna by adjusting the phase of the signal at each array element [18].
3. Active Sensing: Frequency Modulated Continuous Wave (FMCW) radar continuously emits a frequency-modulated signal. The difference between the frequencies of the emitted and reflected signals, together with the Doppler frequency shift, is used to calculate an object’s distance and velocity [18].
4. Digital Signal Processing: The reflected signals that are received carry environmental information that needs to be processed. This involves the use of Fast Fourier Transform (FFT) to convert time-domain signals into the frequency domain for spectral analysis, and Adaptive Filters to reduce noise and highlight relevant information [18].
The operating mode of the mmWave radar is as follows: the synthesizer generates a chirp signal, the transmitter transmits the chirp signal, the transmitted chirp is reflected off the objects in front of the radar, and the reflected signal is received by the receiving antenna. The RX and TX signals are combined to produce the resulting Intermediate Frequency (IF) signal. Subsequently, through the processing of this IF signal, all key parameters can be estimated. When multiple objects are in front of the radar, the transmitted chirp signal generates several reflected signals from the different objects, and consequently, the IF signal will have several tones corresponding to each reflection. As previously mentioned, this type of technology features an ADC, whose purpose is to digitize the IF signal. Then, an FFT is performed on the digitized IF signal, resulting in a frequency spectrum with separate peaks for the different tones, each peak denoting the presence of an object at a specific distance [19]. This entire process is illustrated in Figure 2.
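As a minimal numerical sketch of the processing chain just described, the snippet below synthesizes an IF signal containing one tone per reflecting object, applies the range FFT, and converts the peak frequencies back to distances. The chirp slope, sampling rate, and object positions are illustrative assumptions, not the configuration used in this work.
```python
import numpy as np

# Illustrative (assumed) chirp and sampling parameters.
C = 3.0e8          # speed of light (m/s)
SLOPE = 3.0e13     # chirp slope S in Hz/s (i.e., 30 MHz/us)
FS = 2.0e6         # ADC sampling rate of the IF signal (Hz)
N = 1024           # IF samples per chirp

t = np.arange(N) / FS
true_ranges = [1.0, 2.5]                               # two objects in front of the radar (m)
beat_freqs = [2 * R * SLOPE / C for R in true_ranges]  # one IF tone per object

# The mixed (IF) signal is the sum of one tone per reflecting object.
if_signal = sum(np.cos(2 * np.pi * f * t) for f in beat_freqs)

# Range FFT: each spectral peak denotes an object at a specific distance.
spectrum = np.abs(np.fft.rfft(if_signal))
freqs = np.fft.rfftfreq(N, d=1 / FS)
ranges = C * freqs / (2 * SLOPE)                       # beat frequency -> distance

peak_bins = np.argsort(spectrum[1:])[-2:] + 1          # two strongest bins (DC excluded)
print(np.sort(ranges[peak_bins]))                      # approximately [1.0, 2.5]
```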
Figure 2. Architecture of the mmWave radar sensor (adapted from [19]).

2.2. State of the Art: mmWave Technology

Millimeter wave sensors are now widely used in various civilian applications, including obstacle detection, motion recognition, localization, and search, due to low-cost chip technology and increased reliability. As a result, these improvements in radar technology and digital signal processing lead to good accuracy, speed, and better resolution than traditional radars. In addition to the benefits listed above, mmWave radar has superior penetration capabilities compared to other types of sensors under various weather conditions, such as rain, fog, and snow. Millimeter wave radar sensors can be seamlessly integrated with other types of sensors when required. They are more commonly utilized in a wide range of applications compared to alternatives like ultrasonic sensors, infrared sensors, and laser (LiDAR) sensors. As expected, the advantages mentioned above have increased the use and popularity of mmWave radars. The use of mmWave radar combined with machine learning algorithms has grown in popularity in recent years. The applications of machine learning include object detection, classification, clustering, and tracking, utilizing radar data [19]. Consequently, mmWave radar sensors offer significant advantages over other types of sensors, positioning them as an ideal solution for a wide range of applications. The future of mmWave radar technology appears bright, with substantial potential for growth and innovation. However, despite their many benefits, these sensors face certain limitations, such as performance degradation caused by interference from nearby radars. While various techniques have been developed to address this issue, it remains a challenging area and a focus of ongoing research [19].
Shastri et al. [20] provide a comprehensive analysis of the potential advancements and current challenges in millimeter wave (mmWave) technology for both device-based localization and device-free sensing applications. These features are related to the very short wavelength, broad bandwidth, and high directivity of mmWave, which are expected to allow precise localization along with high resolution in environmental sensing. The review covers two areas of emphasis: device-based localization, in which mmWave is used to detect the position of an active device through techniques such as Angle-of-Arrival (AoA), Time-of-Flight (ToF), or hybrid methods, with their strengths and limitations in different situations; and device-free sensing, in which mmWave signals impinging on an environment are used to detect and monitor objects, motion, and activities without requiring a device on the target. This includes gesture recognition, human activity recognition and monitoring, and environmental mapping. They emphasize the need to combine advanced signal processing techniques, machine learning, and network optimization to overcome challenges such as multipath interference, hardware limitations, and computational overloads. Very interesting applications can be found for mmWave sensing in healthcare, the automotive sector, smart cities, or security. In summary, the article underlines the transformative potential of mmWave technologies and encourages innovative designs for systems, algorithms, and hardware to optimally exploit that potential in real-world emergent scenarios.
The study by Wang et al. [21] provides insight into the use of millimeter-wave (mmWave) radar technology for identifying human activities. These authors emphasize the potential of mmWave radar to capture fine-grained motion data without invading privacy, as it does not rely on optical means and is effective under different environmental conditions. The work describes an mmWave-based system that collects motion data, extracts features related to human activity, and classifies activities using machine learning algorithms. The authors briefly describe the signal processing pipeline, which includes preprocessing, feature extraction, and model training. The system was subjected to a series of experiments performed in real-life scenarios to validate its performance in recognizing common human activities. The results indicate the potential of mmWave radar in fields such as homes, healthcare, and security. The work further discusses the challenges and prospects of improving the classification accuracy for complex activities, system latency, and robustness in dynamic environments.
In the paper published by Amar, Alaee-Kerahroodi, and Mysore [22], the authors explore the phenomenon of interference in FMCW radars operating at millimeter-wave (mmWave) frequencies. Such issues are particularly prominent when several radars are used in close proximity, such as in industrial facilities or smart indoor spaces. The authors analyze FMCW-to-FMCW interference, primarily through experiments, and its effects on radar performance, including degraded detection accuracy and the introduction of false targets. They then propose a framework to understand and model how real-life conditions affect interference patterns. The study also includes experimental validation with indoor measurements using mmWave radars, practically illustrating radar-to-radar interference, and identifies critical parameters for the severity of interference, such as frequency separation, radar orientation, and environmental reflections. The paper concludes by highlighting strategies for mitigating interference, such as time-domain filtering, frequency planning, or adaptive waveforms, to ensure the reliable operation of mmWave radars under densely deployed conditions. These results will serve as a basis for improving radar coexistence in new applications such as autonomous navigation, smart homes, and industrial automation.
The work presented in [23] reviews mmWave radar, which has been extensively explored for multi-object tracking due to its high-resolution sensing capabilities. It provides a comprehensive overview of the advancements in object tracking using mmWave sensors, highlighting key methodologies such as clustering algorithms, machine learning-based detection, and motion prediction models. The review underscores the significance of data preprocessing and algorithmic optimization in achieving accurate object recognition and classification. The research in our study aligns with these findings by utilizing mmWave radar for object detection and volume estimation within shipping containers. While the reviewed work primarily focuses on tracking dynamic objects in motion, our approach adapts similar clustering techniques, particularly DBSCAN, to segment static objects for volumetric analysis. Additionally, our system leverages mmWave technology’s advantages in high precision and environmental robustness, as discussed in the review. Future enhancements in our work could integrate object tracking methodologies from this study to monitor cargo movement in real time, further optimizing space utilization in transport logistics.
The study of [24] demonstrates the effectiveness of mmWave sensors in detecting human presence, motion patterns, and fall events. By employing sophisticated signal processing techniques and real-time tracking, the paper highlights how mmWave technology can enhance safety and automation in various environments, particularly in healthcare and smart home applications. While our work focuses on cargo load monitoring rather than human tracking, it shares foundational principles with this study, especially in sensor data interpretation and real-time analysis. Both studies emphasize the importance of mmWave radar in detecting objects within defined spaces with high accuracy. The application of machine learning techniques in human activity classification, as mentioned in the reviewed work, could inspire further improvements in our system, such as differentiating between various cargo types based on shape and density. Additionally, real-time data processing methodologies from this study could be leveraged to improve our system’s responsiveness in tracking dynamic changes in container loading.
The paper [25] explores how mmWave sensors can be used to detect moving objects in dynamic environments in real time. The study presents innovative methods to enhance radar-based obstacle recognition, employing advanced filtering techniques to reduce noise and improve detection reliability. These methods are particularly valuable in autonomous driving and robotic navigation, where accurate motion sensing is critical. Our work shares similarities with this study regarding object detection, though with a different focus. While the reviewed paper prioritizes identifying moving obstacles, our research adapts mmWave radar technology to measure static objects within shipping containers. The insights provided in the study, particularly regarding signal processing techniques to minimize environmental noise, could significantly enhance our approach by refining data filtering and improving the accuracy of volume estimation. Additionally, incorporating motion detection capabilities into our system could facilitate the monitoring of cargo displacement during transport, contributing to enhanced logistics efficiency.

3. Materials and Methods

In this section we focus on the mmWave sensor and how it fits into the overall theory and the experiment. First, we outline the underlying FMCW theory and the mmWave IWR6843AOP [26,27,28], and in Section 3.2 we present the technical characteristics of the mmWave sensor we used and the practical operation of the IWR6843 for volume measurement.

3.1. Underlying FMCW Theory

The measurement of volumes in spaces using millimeter-wave (mmWave) sensors, such as the IWR6843AOP (Texas Instruments, Dallas, TX, USA), is based on the principles of object detection, distance calculation, angle estimation, and advanced signal processing. This technique combines the theory of electromagnetic wave propagation with computational algorithms to deliver precise and reliable measurements. mmWave radar operates in FMCW mode, where the frequency of the transmitted signal varies linearly over time. The difference between the frequency of the transmitted signal and the received signal (reflected by an object) is proportional to the time of flight (ToF). This delay enables the calculation of the distance R using the following equation:
$$R = \frac{c \cdot f}{2 \cdot S}$$
where
  • c is the speed of light ($3 \times 10^{8}$ m/s);
  • f is the frequency difference (beat frequency);
  • S is the frequency slope of the chirp.
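As a quick numerical check with illustrative values (not those of the sensor configuration used in this work), a chirp slope of $S = 30\,\mathrm{MHz/\mu s}$ and a measured beat frequency of $f = 200\,\mathrm{kHz}$ correspond to a target at
$$R = \frac{3 \times 10^{8}\,\mathrm{m/s} \times 200\,\mathrm{kHz}}{2 \times 30\,\mathrm{MHz/\mu s}} = 1\,\mathrm{m}.$$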
In systems like the IWR6843AOP, which use multiple antennas (an antenna array) to measure the AoA of reflected signals, the determination of the AoA is typically based on the principles of phased-array radar or direction finding. The reflected signal from a target arrives at the different antennas at slightly different times, depending on the direction from which it comes, so each antenna in the array receives a phase-shifted version of the same signal. The key to calculating the AoA is measuring the phase difference between the signals received at the different antennas of the array, which is caused by the varying path lengths the signal travels to reach each antenna. The phase difference Δϕ between two antennas is related to the angle of arrival θ of the signal (ϕ denotes the elevation angle in systems with vertical antennas). The relationship can be expressed as
$$\Delta\phi = \frac{2 \pi \cdot d \cdot \sin(\theta)}{\lambda}$$
where
  • d is the distance between antennas in the array;
  • λ is the wavelength of the signal;
  • θ is the AoA.
By measuring the phase difference between the antennas, the angle θ can be calculated. Once the phase difference is known, the angle θ can be determined by rearranging the formula:
$$\theta = \sin^{-1}\left(\frac{\Delta\phi \cdot \lambda}{2 \pi \cdot d}\right)$$
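A small sketch of this computation follows, assuming the common half-wavelength antenna spacing (d = λ/2) at 60 GHz; for a measured phase difference of π/2 rad, the arrival angle is 30°. The numbers are illustrative, not values from the sensor used here.
```python
import numpy as np

def angle_of_arrival(delta_phi: float, wavelength: float, d: float) -> float:
    """AoA in degrees from the phase difference between two adjacent antennas.

    delta_phi  : measured phase difference in radians
    wavelength : signal wavelength in meters
    d          : antenna spacing in meters
    """
    return np.degrees(np.arcsin(delta_phi * wavelength / (2 * np.pi * d)))

# Example with an assumed half-wavelength spacing at 60 GHz (lambda ~= 5 mm).
wavelength = 3e8 / 60e9
d = wavelength / 2
print(angle_of_arrival(np.pi / 2, wavelength, d))   # -> 30.0 degrees
```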
The spatial resolution in AoA measurement refers to the system’s ability to distinguish between two signals coming from different directions. It depends on several factors:
1. Antenna array configuration: The number of antennas and their spacing impact the angular resolution. A larger array with antennas spaced further apart allows for finer angular resolution but may also introduce other complexities such as grating lobes.
2. Wavelength: The resolution is better for signals with shorter wavelengths (higher frequencies) since the signal will exhibit smaller phase differences for a given angular separation.
3. Signal processing: Advanced signal processing techniques, such as beamforming and phase unwrapping, can improve resolution by enhancing the ability to detect small phase differences more accurately.
In summary, the IWR6843 computes the AoA by analyzing the phase differences between signals received at multiple antennas. The spatial resolution depends on the array’s design, the signal’s wavelength, and the processing techniques used.
The combination of distance and angle data makes it possible to determine the 3D position of objects. Using a Cartesian reference system,
$$x = R \cdot \cos(\theta) \cdot \sin(\phi), \qquad y = R \cdot \sin(\theta) \cdot \sin(\phi), \qquad z = R \cdot \cos(\phi)$$
To calculate the volume of a space, the radar (mmWave) maps the three-dimensional contour of objects and surfaces. Once the contour has been determined, volume V can be estimated by integrating the area occupied in all dimensions:
$$V = \int_{x_{\min}}^{x_{\max}} \int_{y_{\min}}^{y_{\max}} \int_{z_{\min}}^{z_{\max}} \operatorname{area}(x, y, z)\, dx\, dy\, dz$$
where
  • xmin = −Rx/2 and xmax = Rx/2 (where the origin is in the center of the volume and the length is Rx);
  • ymin = −Ry/2 and ymax = Ry/2 (where the origin is in the center of the volume and the length is Ry);
  • zmin = 0 and zmax = Rz (with the ground as reference, z = 0).
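The sketch below illustrates these two steps: each detection (R, θ, ϕ) is converted to Cartesian coordinates following the convention of the equations above, and the occupied volume is then approximated from the resulting point cloud with an axis-aligned bounding box, a simplification of the integral above. The detection values are illustrative placeholders.
```python
import numpy as np

def to_cartesian(r, theta, phi):
    """Convert range/angle detections to Cartesian coordinates, following the
    convention above (theta = azimuth, phi = angle such that z = R * cos(phi))."""
    x = r * np.cos(theta) * np.sin(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(phi)
    return np.stack([x, y, z], axis=-1)

def bounding_box_volume(points):
    """Axis-aligned bounding-box volume of a point cloud (m^3), a coarse
    stand-in for the integral over the occupied region."""
    extents = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extents))

# Example with a few illustrative detections (r in meters, angles in radians).
r = np.array([1.00, 1.05, 1.10, 1.20])
theta = np.radians([-10.0, -5.0, 5.0, 10.0])
phi = np.radians([80.0, 85.0, 90.0, 95.0])
cloud = to_cartesian(r, theta, phi)
print(bounding_box_volume(cloud))
```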
Clustering is one of the main ways to process mmWave radar data, since it associates points reflected from different objects. The two most typical algorithms are DBSCAN and K-Means [29]. In DBSCAN, points are grouped based on spatial density: the algorithm identifies dense regions and separates noise or outliers. Its main parameters are the maximum search radius for near neighbors and the minimum number of points required to form a cluster. It works best for the detection of moving and stationary objects in unstructured environments, where the reflection points will not usually conform to any defined distribution. K-Means minimizes the variance within each cluster through iterations of assigning points and updating centroid positions. It is applicable when objects are well separated and follow a more regular structure, such as in multi-object tracking applications. After clustering, the geometric shapes representing the objects in the environment are refined with Convex Hull and Alpha Shapes. The Convex Hull produces the smallest convex polygon containing all input points, while Alpha Shapes provide non-convex shapes that more properly discriminate between irregular objects.
Figure 3 shows the flowchart of the actions to be carried out by the system. It consists of five functionalities, which are described below. The sensor connection and sensor configuration states correspond to the initialization states of the measurement system. Once these states have been completed, the machine indicates that the system is in Reading and Processing Data mode. Whenever an object is detected, its volume and respective volumetrics are given as output and sent via WiFi and IoT. If no volume is detected, the system remains in object detection mode. Algorithm 1, for reading and processing data with the IWR6843, is presented below, and a detailed explanation is provided in Section 3.2.
Algorithm 1. For IWR6843
Function DBSCAN(Points, ε, MinPts)
    Clusters ← [], Volumes ← []
    For each unvisited point P do
           Mark P as visited, Neighbors ← {Q | Distance(P, Q) ≤ ε}
           If size(Neighbors) < MinPts then continue
           Cluster ← Expand(P, Neighbors, Points, ε, MinPts)
           Add Cluster to Clusters
   For each Cluster in Clusters do
             Volumes ← Volumes + ComputeVolume(ConvexHull(Cluster))
    Return Clusters, Volumes
End Function

Function Expand(P, Neighbors, Points, ε, MinPts)
    Cluster ← {P}
    For each Q in Neighbors do
           If Q is not visited then
                   Mark Q as visited, NewNeighbors ← {R | Distance(Q, R) ≤ ε}
                   If size(NewNeighbors) ≥ MinPts then Neighbors ← Neighbors ∪ NewNeighbors
        Add Q to Cluster
    Return Cluster
End Function

Function ComputeVolume(Hull)
    Return |Σ Determinant(Origin, A, B, C)/6 for each (A, B, C) in Hull|
End Function
The next paragraphs explain a few variables that are used in the algorithm.
Points (set of data points): This represents the input dataset, a collection of points in a multidimensional space. In a 3D space, each point is represented as Pi = (xi,yi,zi).
ε (neighborhood radius): This defines the maximum distance between two points for them to be considered neighbors and controls the density sensitivity of the algorithm. A small ε may result in many small clusters, while a large ε may merge distinct clusters.
MinPts (minimum points threshold): This is the minimum number of points required to form a dense region (i.e., a cluster). If a point has fewer than MinPts neighbors within ε, it is considered noise or an outlier.
P (current point being processed): This represents the point currently being evaluated in the dataset. It is used to check if it belongs to a cluster or is noise.
neighbors (set of nearby points): This contains all points within the ε radius of a given point P. It is used to determine whether P has enough density to form a cluster.
hull (Convex Hull of a cluster): The smallest convex shape that encloses all points in a cluster. It is used to approximate the volume of the cluster in 3D space.
These variables work together as follows: DBSCAN iterates over each point P in Points and finds its neighbors within the ε radius. If Neighbors contains at least MinPts points, a new cluster is formed, and the cluster grows by adding further neighboring points. Once clusters are detected, the Convex Hull (hull) is computed for each cluster, and the volume of the hull is calculated to estimate the cluster’s size in 3D space.
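For reference, a runnable Python equivalent of Algorithm 1 is sketched below, using the scikit-learn DBSCAN implementation and the SciPy convex hull instead of the hand-written neighborhood expansion; the ε and MinPts values, as well as the synthetic example data, are placeholders to be tuned for the actual point-cloud density, not the parameters used in this work.
```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

def cluster_volumes(points, eps=0.05, min_pts=5):
    """Group a 3D point cloud into objects and estimate each object's volume.

    points  : (N, 3) array of x, y, z detections in meters
    eps     : neighborhood radius (epsilon) in meters -- placeholder value
    min_pts : minimum points per dense region (MinPts) -- placeholder value
    Returns a dict mapping cluster label -> convex-hull volume in m^3.
    Points labeled -1 by DBSCAN are treated as noise and ignored.
    """
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    volumes = {}
    for label in set(labels) - {-1}:
        cluster = points[labels == label]
        if len(cluster) >= 4:  # a 3D hull needs at least 4 non-coplanar points
            volumes[label] = ConvexHull(cluster).volume
    return volumes

# Example: a synthetic box-shaped cluster plus a few scattered noise points.
rng = np.random.default_rng(0)
box = rng.uniform([0, 0, 0], [0.3, 0.2, 0.1], size=(200, 3))   # ~0.006 m^3 box
noise = rng.uniform(-1, 1, size=(10, 3))
print(cluster_volumes(np.vstack([box, noise]), eps=0.08, min_pts=10))
```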

3.2. Practical Operation of the IWR6843

One of the sensors that utilizes this technology is the mmWave IWR6843AOP [28]. This sensor is an Antenna-on-Package (AOP) device that represents an evolution within TI’s single-chip radar device family. This device integrates the following elements:
  • A high-performance TI C674x DSP subsystem for advanced radar signal processing, magnitude, detection, and many other applications.
  • A Built-In Self-Test (BIST) processor subsystem, responsible for radio configuration, control, and calibration.
  • A user-programmable ARM Cortex-R4F for object detection and interface control;
  • Three antennas for TX and four antennas for RX.
  • A PLL, transmitter, receiver, and ADC.
  • A single cable for power and data.
  • Coverage from 60 to 64 GHz with a continuous bandwidth of 4 GHz.
  • A hardware accelerator for FFT, filtering, and Constant False Alarm Rate (CFAR) processing.
  • Up to six ADC channels, up to two Serial Peripheral Interface (SPI) ports, up to two UART ports, Inter-Integrated Circuit (I2C), and a Low Voltage Differential Signaling (LVDS) interface.
  • Azimuth Field Of View (FOV) +/− 60 degrees, with an angular resolution of 29 degrees.
  • Elevation FOV +/− 60 degrees, with an angular resolution of 29 degrees.
This device enables a vast variety of applications with simple changes to the programming model. Additionally, it offers the possibility of dynamic reconfiguration, meaning in real time and while in operation, to implement a multimodal sensor.
The compact design of the IWR6843AOP is one of its great advantages, facilitating its integration into various projects. In terms of operating conditions, this device ensures reliable performance and operation within a temperature range of −40 °C to 105 °C [28,30]. Figure 4 shows this sensor.

3.3. Laboratory Tests

In this section we present the components of the experiment and the communication mechanisms between them. Figure 5 shows the system architecture tested in the laboratory.
After analyzing all possible object detection options, a decision was made to use the mmWave IWR6843AOP radar, since precision is crucial for this project. Given that the load of the containers needs to be measured accurately for optimization calculations and represented in a three-dimensional graph, this sensor stands out for its high precision in the order of millimeters. For the laboratory tests, we used a computer with an Intel Core i3 processor and 32 GB of Random Access Memory (RAM) and the mmWave IWR6843AOP radar, connected to a USB port (UART). We developed a website hosted in the cloud for the user interface (IoT dashboard and IoT mobile app). All data acquired by the radar sensor are sent via WiFi and represented on the developed website. The detailed block diagram of the functionalities described below can be found in Appendix A, Figure A5, which is divided into three main blocks:
  • RF/Analog—This subsystem handles all radio frequency functions and analog signal processing. It is composed of the following:
    • An oscillator that generates high-frequency signals used by the transmitters and receivers for modulation and demodulation of radar signals;
    • Three transmitters, which include power amplifiers and modulation components;
    • Four receivers, which include Low Noise Amplifiers (LNA), mixers, and filters;
    • Filters and amplifiers improve the signal quality before conversion to a digital signal;
    • ADCs to convert the received analog signals into digital signals for subsequent digital processing.
  • Radio Processor—This subsystem is responsible for receiving and processing the signals from the previous subsystem. It is programmed by TI and consists of the following:
    • A Digital Front-End typically serves as the interface between the analog unit and the digital processing modules. Its primary functions include gain control, sampling rate conversion, adaptive filtering, and phase correction [31].
    • An ADC buffer is used to temporarily store the analog data before it is converted into digital format by the ADCs.
    • A Ramp Generator—this component is essential in continuous wave frequency-modulated radar systems, such as this sensor. It generates the necessary frequency ramp to modulate the radar signals during transmission and reception, allowing for the measurement of distances and speeds of objects in the environment.
    • A Radio (BIST) Processor for RF calibration and performing self-tests programmed by TI for self-diagnosis. It also includes RAM and FLASH memories.
  • DSP Subsystem
The third and fourth subsystems are user-programmable and consist of the functionalities and features of this sensor presented in the previous section of the document.
Figure 6 depicts the sensor’s field of view, with θ representing an adjustable angle in degrees, configured in the sensor’s settings to align with the intended application.
This sensor was configured in Python 3.0 with the help of a configuration command file, obtained from software provided by TI in [32,33]; the procedure is demonstrated in Appendix A of this manuscript. After the file was downloaded and tested, it was necessary to adapt the configuration to the characteristics of our project. These configurations will be discussed in more detail later in this section.
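A minimal sketch of this configuration step is shown below, assuming the usual two-UART setup of TI evaluation boards (a command port at 115200 baud, separate from the higher-rate data port) and the pyserial package; the port name and configuration file name are placeholders, not the values used in this work.
```python
import time
import serial  # pyserial

CFG_PORT = "/dev/ttyUSB0"            # placeholder: command/configuration UART port
CFG_FILE = "container_profile.cfg"   # placeholder: configuration command file

def send_configuration(port_name: str, cfg_path: str) -> None:
    """Stream a mmWave configuration file to the sensor, one command per line."""
    with serial.Serial(port_name, 115200, timeout=1) as cfg_port, open(cfg_path) as cfg:
        for line in cfg:
            line = line.strip()
            if not line or line.startswith("%"):   # skip blanks and comment lines
                continue
            cfg_port.write((line + "\n").encode())
            time.sleep(0.02)                       # give the firmware time to respond
            print(cfg_port.read(cfg_port.in_waiting or 1).decode(errors="ignore"))

send_configuration(CFG_PORT, CFG_FILE)
```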
To format the data obtained by the sensor and transmit them to the other components of the system, the sensor uses structural elements known as type–length–value (TLV) elements. This information is then compressed and sent to the computer via UART. The UART output is sent as a packet containing a packet header and the TLVs. Each TLV payload includes a data type and its corresponding value, which defines and describes the specific information being conveyed. The packet size depends on the number of detected objects and varies from packet to packet. To maintain consistency between packets, each one is padded so that its length is a multiple of 32 bytes. The UART information is sent from the Evaluation Module (EVM) board to the computer’s USB port and is subsequently analyzed to process and display the results.
In Figure 7, we can observe the structure of the UART message datagram. The fixed length of the frame header is 40 bytes, and its fields are presented in Table 1.
The frame header is sent at the beginning of each packet. The Magic Word is used to identify the start of each packet. The number of TLVs can be observed in the Num TLVs field. As we can see in Figure 7, the TLV header is composed of the type and length fields.
As depicted in Table 2, both the type and length have a size of 4 bytes. The type indicates the type of the payload message, and the length indicates the size of that payload in bytes. Table 3 presents the various TLVs used by this sensor.
After analyzing the types of TLVs captured by the sensor in some tests conducted, we concluded that, with the configuration used and in a more isolated testing environment, the main TLVs sent by the sensor are as follows:
  • 1—Detected Points, which contain the point cloud data. These data include the location (x, y, z) and the signal return intensity (Doppler), as we can see in Table 4.
  • 2—Range Profile, which contains the range profile data, showing the intensity of the reflected signal as a function of distance.
  • 3—Side Info for Detected Points, where the payload consists of 4 bytes for each detected object in the point cloud. This provides the Signal-to-Noise Ratio (SNR) and noise value for each object, as presented in Table 5. These values are measured in multiples of 0.1 dB.
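A sketch of how such a packet can be decoded in Python is given below. The 40-byte header layout and magic word assumed here follow the TI demo output format (magic word, version, total packet length, platform, frame number, CPU cycle count, number of detected objects, number of TLVs, sub-frame number) and should be checked against Table 1; only TLV type 1 (detected points, four 32-bit floats per point) is extracted.
```python
import struct

MAGIC_WORD = b"\x02\x01\x04\x03\x06\x05\x08\x07"  # assumed demo magic word; check Table 1
HEADER_FMT = "<8sIIIIIIII"                         # assumed 40-byte frame header layout
TLV_HEADER_FMT = "<II"                             # TLV type (4 bytes) + length (4 bytes)

def parse_frame(frame: bytes):
    """Extract the (x, y, z, doppler) tuples from one UART frame (TLV type 1 only)."""
    header = struct.unpack_from(HEADER_FMT, frame, 0)
    magic, _version, _total_len, _platform, frame_num, _cycles, _num_obj, num_tlvs, _sub = header
    if magic != MAGIC_WORD:
        raise ValueError("frame does not start with the magic word")

    points, offset = [], struct.calcsize(HEADER_FMT)      # 40-byte frame header
    for _ in range(num_tlvs):
        tlv_type, tlv_len = struct.unpack_from(TLV_HEADER_FMT, frame, offset)
        offset += struct.calcsize(TLV_HEADER_FMT)          # 8-byte TLV header
        if tlv_type == 1:                                   # detected points: 16 bytes per object
            for i in range(tlv_len // 16):
                points.append(struct.unpack_from("<4f", frame, offset + 16 * i))
        offset += tlv_len
    return frame_num, points
```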
Figure 8 shows a screenshot of the software used to calibrate the sensor. As can be seen, this calibration is carried out in real time, so there is no need to calibrate the sensor beforehand.

4. Results

In this section, some initial results and the conclusions obtained are presented, as well as the optimizations made to the sensor configuration and the processing of the received data to improve the results. Finally, the results are presented, incorporating these improvements.

4.1. Initial Tests and Optimizations

To validate the configuration provided by the mmWave Demo Visualizer, initial and straightforward tests were performed, including the detection of one or two stationary objects.
For these tests, an isolated environment was chosen to minimize interference from the surroundings that could affect the sensor’s performance and the reliability of the results. The sensor was placed on a tripod, as shown in Figure 9, to provide greater stabilization.
Considerations regarding the tests in Figure 10 and Figure 11 include the following:
  • The distances between the sensor and the objects were measured and matched those shown in the graphical representation of Figure 10b and Figure 11b;
  • The third detected point is an outlier that corresponds to the wall, for which the distance was also measured.
With the conclusion of these tests, we began to consider how implementation could be achieved to display the real dimensions of the objects in the graph, allowing us to determine the volume occupied by the load, as this is precisely the focus of this experiment (Design and Development of a High-Accuracy IoT System for Real-Time Load and Space Monitoring in Shipping Containers).
To accomplish this task, we sought an approach where an object could be represented by multiple points, allowing for a more accurate depiction of its actual dimensions. After researching the topic, it was determined that this goal could be achieved through sensor configuration.
In Section 3.4 of the document “MMWAVE SDK User Guide” provided in [31], we can find the format of the sensor configuration document, including the description, usage, and parameters of each command available for configuration.
When analyzing the configuration commands, we observed that the last parameter of the cfarCfg command, peak grouping, was enabled, which caused the sensor to group all the points of an object into one. This resulted in only one point being displayed in the graphical representation, which could be useful for some applications. However, in the case of this experiment, the better option is to disable this parameter, for the reasons mentioned earlier.
CFAR is a technique used in radar systems to detect targets in environments where noise is variable. The purpose of the cfarCfg command is to define the parameters that control how the radar processes and filters the received signals to detect objects.
To accurately represent objects with their actual dimensions, we transitioned from a two-dimensional to a three-dimensional graphical representation.
As we can see in Figure 12 and Figure 13, as intended, the objects are represented by multiple points, making their representation easier. The distances were measured and matched the points in the graph for the y and z axis values. However, on the x-axis, there was a deviation of approximately five centimeters compared to the actual measurement.
Given that multiple points represent each object, we can begin to use them to represent its dimensions. To do this, the DBSCAN algorithm was first applied to the data received from the sensor.
DBSCAN is a clustering algorithm widely used in data analysis that is based on the density of data points and is robust against noise. It identifies clusters as regions of high point density separated by low-density areas and is particularly effective at handling noise and outliers by classifying points that do not belong to any cluster as “noise”.
Unlike traditional clustering algorithms, DBSCAN does not require a predefined number of clusters. This algorithm uses two main parameters:
  • Epsilon (ε)—which refers to the maximum distance between two points for them to be considered neighbors;
  • MinPts—the minimum number of points within a radius of ε that defines a point with high density.
Thus, the DBSCAN clustering method was selected due to its robustness in identifying clusters in noisy environments, a crucial characteristic for this experimental setup. Given that the test environment was not ideal, the presence of external objects or noise points could have compromised measurement accuracy.
This algorithm was used in this project to group the points that are considered part of the same object, leaving out any other points that may appear, such as noise. The algorithm assigns each object’s points a different label depending on which object they belong to.
After applying this algorithm, each label was processed so that, for each object, the minimum and maximum values of each axis were extracted, thus providing the necessary information to construct the object’s visualization on the graph. In Figure 14 and Figure 15, it is possible to infer the results of these algorithms.
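A short sketch of this post-processing step is shown below: after DBSCAN assigns a label to every point, the per-axis minimum and maximum of each labeled group give the box drawn on the 3D graph. The function and variable names are illustrative, not the ones used in our implementation.
```python
import numpy as np

def per_object_extents(points: np.ndarray, labels: np.ndarray):
    """For each DBSCAN label (noise label -1 excluded), return the per-axis
    (min, max) values used to draw the object's box on the 3D graph."""
    extents = {}
    for label in np.unique(labels):
        if label == -1:          # points classified as noise are ignored
            continue
        cluster = points[labels == label]
        extents[label] = (cluster.min(axis=0), cluster.max(axis=0))
    return extents
```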
After completing all the initial tests, the most effective and appropriate configuration for this project was developed. The configuration in question is presented in Appendix A of this document, and the table of parameter names is presented in Table 6.
This configuration mostly contains the configuration commands taken from the Demo Visualizer provided by TI [31], which is included in Appendix B. To adapt the configuration to our application, the following commands were modified or added:
  • aoaFovCfg—Command for the datapath to filter detected points outside the specified limits in the elevation plane or azimuth range. It consists of five parameters: subFrameIdx, minAzimuthDeg, maxAzimuthDeg, minElevationDeg, and maxElevationDeg. The minAzimuthDeg and maxAzimuthDeg parameters indicate the minimum and maximum azimuth angles, in degrees, specifying the start of the field of view, while the minElevationDeg and maxElevationDeg parameters indicate the minimum and maximum elevation angles, in degrees, specifying the start of the field of view. These parameters were changed from −90, 90, −90, 90 to −60, 60, −60, 60 to increase the precision in the area of focus by the radar sensor.
  • cfarCfg—Command with the CFAR configuration message for the datapath, consisting of nine parameters: subFrameIdx, procDirection, mode, noiseWin, guardLen, divShift, cyclic/Wrapped mode, Threshold, and peak grouping. This command is introduced twice to define two processing directions. The processing directions are defined in the procDirection parameter, where the value 0 corresponds to CFAR detection in the range direction, and the value 1 corresponds to CFAR detection in the Doppler direction. The peak grouping parameter was changed from 1 to 0 in both commands to disable peak grouping. Additionally, the Threshold parameter was changed from 15 to 18 in the command where procDirection is 1, and in the command where procDirection is 0, the Threshold parameter was changed from 15 to 14.
  • staticBoundaryBox—Defines the area where the load is expected to remain static for a long period. It consists of six parameters: X-min, X-max, Y-min, Y-max, Z-min, and Z-max. The X-min and X-max parameters were set to −1 and 1, defining the minimum and maximum horizontal distances from the origin. The Y-min and Y-max parameters were set to 0 and 1.5, defining the minimum and maximum vertical distances from the origin, and the Z-min and Z-max parameters were set to −0.35 and 1, defining the minimum and maximum height relative to the origin.
  • sensorPosition—This is used to specify the orientation and position of the radar sensor and consists of three parameters: sensorHeight, azimTilt, and elevTilt. The sensorHeight parameter defines the height of the radar sensor above the ground plane, which was set to 0.34; the azimTilt parameter defines the azimuth tilt of the radar sensor, which was set to 0; and the elevTilt parameter defines the elevation tilt of the radar sensor, which was also set to 0.
  • cfarFovCfg—Command for the datapath to filter detected points outside the specified limits in the range direction or Doppler direction. It consists of four parameters: subFrameIdx, procDirection, min, and max. This command is introduced twice to allow the definition of two processing directions. The processing directions are defined in the procDirection parameter, where 0 corresponds to filtering points in the range direction and 1 corresponds to filtering points in the Doppler direction. The min parameter corresponds to the minimum limit for range or Doppler, below which detected points are filtered, and the max parameter corresponds to the maximum limit for range or Doppler, above which detected points are filtered. In the command where procDirection is 0, the min and max parameters were set to 0 and 1.5, respectively. In the command where procDirection is 1, the min and max parameters were set to −1 and 1, respectively.

4.2. Final Results

Following the tests described in Section 4.1, the testing environment was shifted to a larger area, providing more space to perform tests with larger boxes.
In the tests and results that will be presented, the calculated volume of the load is displayed in the command line output so that the accuracy of the results can be confirmed. The values are expressed in meters (m).
The tests were conducted using boxes of different sizes, using either a single box or a set of several boxes. These tests are shown in Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20, with Figure 16 corresponding to Test 1 and Figure 20 to Test 5. The values measured on each of the three axes, the error per axis, the variation in the minimum and maximum values per axis, the calculation of the final volume, and the error percentage are shown in Table 7 and Table 8. In Table 8, in the columns of the x, y, and z range axes, the values with a “−” mean that they are offset from the reference value, in this case to the left.
The values indicate that the error is not evenly distributed among the three axes; in some cases, the z-axis has a larger error, which could impact the cargo arrangement inside the container. As denoted in Test 3, for example, the error in z (1.58 cm) is more than twice the error in y, which may affect the proper stacking of boxes. In Test 5, the overall error is very low (0.6%), demonstrating that the system can achieve high precision in certain scenarios.
The DBSCAN clustering mechanism was chosen for this study due to its robustness in detecting and grouping spatially dense data points while effectively filtering out noise, and the presented results support this choice. Unlike other clustering algorithms, such as K-Means, DBSCAN does not require prior knowledge of the number of clusters, making it highly suitable for dynamically detecting objects of varying shapes and sizes within a shipping container [35,36]. DBSCAN was particularly useful in this research because it allows for the identification of clusters based on density, which aligns well with the nature of object distribution in cargo containers. By setting an appropriate epsilon (ε) value—the maximum distance between two points to be considered part of the same cluster—and a minimum number of points (MinPts), the algorithm successfully grouped detected objects while excluding irrelevant data points caused by sensor noise. This method significantly improved the accuracy of volume estimation by segmenting objects more precisely, thereby enhancing the reliability of the IoT-based monitoring system.
Observations about the tests presented in this section:
  • During the tests, the measured width, length, and height of the various objects showed some deviation from their actual size, but the deviations across dimensions tended to compensate for one another, so the calculated volume remained close to the real volume. This error does not significantly impact the project because, for this use case, only the final volume is considered.
  • Based on the results obtained, and if errors up to 10% are tolerable, it is concluded that this new solution has significant potential to enter the market and compete with other existing technologies.
  • During the tests, only packets containing TLV type 1 were processed, as they contained all the information needed for the focus of this project. Considering the structure of the datagram and the size of its fields, presented in Section 3.2 of this document, the amount of data processed per frame is, in bytes, 40 + 8 + (n × 16), where 40 bytes is the frame header, 8 bytes is the TLV header, n is the number of detected objects, and 16 bytes is the TLV type 1 payload per detected point (a parsing sketch is given after this list).
  • Throughout this work, it was not possible to mitigate some identified sources of error, such as the use of objects that were not in the best condition for measurements. In particular, tape on the objects could interfere with the accuracy of the measurements because of its effect on signal reflection. Additionally, the sensor placement could be optimized, as its current position did not always provide an ideal view of the objects. Placing it higher, so that the objects are observed from above, could help reduce incorrect measurements, especially in scenarios where boxes were hidden behind visible ones. It was also observed that the test environment was not ideal, as the presence of additional objects and insufficient isolation could cause interference with the sensor, compromising the accuracy of the results.
  • It was also observed that, for very small objects, the sensor's accuracy was not optimal, at least with the sensor configuration used throughout this project.
  • The time between placing the objects in space and obtaining their measurement results is approximately 5 s.
  • The effectiveness of the system was evaluated at various distances, and the experiments indicated that it becomes more reliable from 0.6 m onwards. Based on this observation, all tests were conducted starting from this distance. However, the determination of the minimum detectable object size was not specifically addressed in this study. This limitation could be explored in future work by analyzing the relationship between object dimensions and measurement accuracy under different experimental conditions.
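To make the byte-count expression above concrete, the following minimal Python sketch computes the processed data size per frame and unpacks a TLV type 1 payload with the struct module, following the field layout of Tables 1, 2 and 4. Little-endian byte order and the helper names are assumptions made for illustration and are not taken from the project's actual code.

import struct

FRAME_HEADER_BYTES = 40   # Table 1: magic word (8) + eight uint32 fields (8 x 4)
TLV_HEADER_BYTES = 8      # Table 2: type (4) + length (4)
POINT_BYTES = 16          # Table 4: x, y, z, Doppler as four 4-byte floats

def processed_bytes(num_detected_objects):
    """Size in bytes of the data handled per frame: 40 + 8 + n * 16."""
    return FRAME_HEADER_BYTES + TLV_HEADER_BYTES + num_detected_objects * POINT_BYTES

def unpack_type1_points(payload, num_detected_objects):
    """Unpack a TLV type 1 payload into (x, y, z, doppler) tuples, assuming
    little-endian floats."""
    return [struct.unpack_from('<4f', payload, i * POINT_BYTES)
            for i in range(num_detected_objects)]

# Example: a frame with 25 detected points corresponds to 448 processed bytes
print(processed_bytes(25))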

5. Conclusions and Future Work

This study focuses on IoT-based solutions for the optimization of space in shipping containers, a topic made more relevant than ever by the growing complexity and efficiency demands of global logistics. Maritime transportation and shipping containers are the mainstays of international trade, with more than 90% of world trade carried by sea, yet they still suffer from inefficient space utilization and a lack of accurate cargo tracking. IoT technologies enable real-time monitoring and optimization of container space use, thereby improving operational performance and reducing transport costs. Smart sensors, real-time data analysis, and artificial intelligence can be combined to create solutions that maximize the use of space inside containers, enhance risk management, and increase security. This study is relevant because, even though most existing solutions remain under corporate patent protection, it reviews the possibilities for adapting emerging technologies in an academic context and hence their implications for industry. It also offers a broad perspective on the feasibility of IoT integration for greener and more efficient logistics, in line with global optimization and digital transformation trends. Thus, this study fills a gap in scientific knowledge and can provide valuable information to companies seeking to adopt IoT in their operations or to become more efficient, thereby increasing their competitiveness in the global transportation market.
The objective of this work was to study and implement an IoT-based solution for one of the most frequent problems today: maximizing the efficiency of cargo loading in containers, since every bit of available space can translate into significant economic gains. This issue arises from the sharp growth of fuel costs and from sustainability policies that tend to limit fossil fuel consumption, given the environmental impact and scarcity of this resource. With this goal, various sensor options available on the market were analyzed and, after weighing the strengths, weaknesses, and characteristics of each, the mmWave IWR6843AOP sensor from TI was chosen.
The developed system consists of a computer and a sensor mounted on a tripod for stability. The data captured by the sensor are transmitted over a USB (UART) link to the computer, which processes them with Python code and presents the occupied volume, as well as the percentages of occupied and free space, based on the container volume provided by the user, through a visualization interface developed in HTML and CSS.
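As an illustration of the occupancy figures produced by the system, the following minimal Python sketch computes the occupied and free percentages from a measured load volume and a user-supplied container volume; the function name and the example container volume are illustrative only.

def occupancy(load_volume_m3, container_volume_m3):
    """Return the occupied and free space as percentages of the container volume."""
    occupied = 100.0 * load_volume_m3 / container_volume_m3
    return occupied, 100.0 - occupied

# Example: the Test 5 load (0.1359 m^3) inside a hypothetical 1 m^3 test volume
occupied_pct, free_pct = occupancy(0.1359, 1.0)
print(f"Occupied: {occupied_pct:.1f}%  Free: {free_pct:.1f}%")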
Initially, the main concern was understanding how the data were transmitted and how to process them to meet the experiment's objectives. Graphical representations of the received data were then developed to improve sensor data visualization, accompanied by the execution of various tests. Over the course of these tests, the sensor configuration was adjusted and new mechanisms, such as DBSCAN, were introduced to improve the results and calculate the volume of the objects.
Throughout the experiment, important results were achieved that contribute to the expansion of IoT and confirm the reliability and effectiveness of the proposed solution. If errors of up to 10% are tolerable, and considering the errors in the results obtained, this solution proves to be a feasible system for this objective. Therefore, it can be concluded that this new solution has significant potential to enter the market and compete with other existing technologies.
During these tests, only TLV type 1 packets were processed, as they contained all the information needed for the focus of this project. Accordingly, the amount of data processed per frame is, in bytes, 40 + 8 + (n × 16), where 40 bytes is the frame header, 8 bytes is the TLV header, n is the number of detected objects, and 16 bytes is the TLV type 1 payload per detected point.
As for future work, it is considered important to migrate the processing from the computer to a Raspberry Pi development board, which offers advantages such as low energy consumption and small size. This migration would be straightforward because the data-processing code was developed in Python, which is natively supported on the Raspberry Pi. Additionally, after validating the results presented in this manuscript, it could be very beneficial to develop an implementation that differentiates between the various boxes or objects in the environment, in order to count the boxes and obtain a more accurate volume. The solution proves to be fully feasible for the experiment's use case, given the accuracy demonstrated in the obtained results and the fact that it met the requirements of this specific application.

Author Contributions

Conceptualization, L.M.P., T.A. and M.V.; methodology, T.A. and M.V.; software, T.A. and M.V.; validation, L.M.P. and V.F.; formal analysis, L.M.P. and V.F.; investigation, T.A. and M.V.; resources, L.M.P.; data curation, T.A. and M.V.; writing—original draft preparation, T.A. and M.V.; writing—review and editing, L.M.P., T.A., M.V. and V.F.; visualization, L.M.P., T.A., M.V. and V.F.; supervision, L.M.P. and V.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to express their gratitude to Sensefinity for providing the sensor used during the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

This appendix is a guide for transferring the sensor configuration file [31].
Figure A1. Choosing the sensor in use.
Figure A2. Identification of the ports to be used.
Figure A3. Sending the configuration directly to the sensor—for testing purposes only, directly in the visualizer.
Figure A4. Confirmation of the configuration sent to the sensor.
Figure A5. Functional block diagram of the mmWave IWR6843AOP (adapted from [30]).
Figure A6. Transfer of the configuration file to the computer.
Figure A7. Configuration file successfully transferred.

Appendix B

This appendix contains the configuration file used on the sensor.
Commands for sensor configuration
sensorStop
flushCfg
sensorPosition 0.34 0 0
dfeDataOutputMode 1
channelCfg 15 7 0
adcCfg 2 1
adcbufCfg -1 0 1 1 1
profileCfg 0 60 359 7 57.14 0 0 70 1 256 5209 0 0 158
chirpCfg 0 0 0 0 0 1
chirpCfg 1 1 0 0 0 2
chirpCfg 2 2 0 0 0 4
frameCfg 0 2 16 0 100 1 0
lowPower 0 0
guiMonitor -1 1 1 0 0 0 1
cfarCfg 1 0 2 8 4 3 0 14 0
cfarCfg -1 1 0 4 2 3 1 18 0
multiObjBeamForming -1 1 0.5
clutterRemoval -1 0
calibDcRangeSig -1 0 -5 8 256
extendedMaxVelocity -1 0
lvdsStreamCfg -1 0 0 0
compRangeBiasAndRxChanPhase 0.0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0
measureRangeBiasAndRxChanPhase 0 1.5 0.2
CQRxSatMonitor 0 3 5 121 0
CQSigImgMonitor 0 127 4
analogMonitor 0 0
aoaFovCfg -1 -60 60 -60 60
cfarFovCfg -1 0 0 1.5
cfarFovCfg -1 1 -1 1.00
calibData 0 0 0
staticBoundaryBox -1 1 0 1.5 -0.35 1
sensorStart
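For reference, a configuration file such as the one above can be pushed to the sensor's CLI UART port from Python. The sketch below uses pyserial; the port name and the 115200 baud rate are assumptions typical of the IWR6843AOPEVM rather than values taken from this work.

import time
import serial  # pyserial

CLI_PORT = '/dev/ttyUSB0'   # assumption: adjust to the enumerated CLI port
CLI_BAUD = 115200           # typical CLI baud rate for the EVM

def send_config(path, port=CLI_PORT, baud=CLI_BAUD):
    """Send each non-comment line of a configuration file to the sensor CLI."""
    with serial.Serial(port, baud, timeout=1) as cli, open(path) as cfg:
        for line in cfg:
            line = line.strip()
            if not line or line.startswith('%'):
                continue  # skip blank lines and comment lines
            cli.write((line + '\n').encode())
            time.sleep(0.05)                      # give the CLI time to respond
            print(cli.read(cli.in_waiting).decode(errors='ignore'))

send_config('sensor_config.cfg')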

References

  1. The Internet of Cargo. The IoT Transformation Company. The Internet of Cargo. Available online: https://www.sensefinity.com/ (accessed on 9 July 2024).
  2. Introduction to IoT. Available online: https://www.researchgate.net/publication/330114646_Introduction_to_IOT (accessed on 6 March 2024).
  3. Teicher, J. The Little-Known Story of the First IOT Device, IBM Blog. 2019. Available online: https://www.ibm.com/blog/little-known-story-first-iot-device/ (accessed on 6 March 2024).
  4. Vailshery, L.S. IOT Connected Devices Worldwide 2019–2030, Statista. 2023. Available online: https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/ (accessed on 7 March 2024).
  5. What Is Iot? Internet of Things Explained—AWS. Available online: https://aws.amazon.com/what-is/iot/ (accessed on 6 March 2024).
  6. Iberdrola. O Que É a Iiot? Descubra a Internet Industrial das Coisas, Iberdrola. 2021. Available online: www.iberdrola.com/innovation/what-is-iiot (accessed on 6 March 2024).
  7. Industrial Internet of Things Market to Reach $1693.30Bn by 2030. Available online: https://www.grandviewresearch.com/press-release/global-industrial-internet-of-thingsiiot-market (accessed on 7 March 2024).
  8. Mecalux A Revolução da Internet Industrial das Coisas (IIoT), Mecalux, Soluções de Armazenagem. Available online: https://www.mecalux.pt/blog/iiot-internet-industrial-dascoisas (accessed on 7 March 2024).
  9. What Is Ultrasonic Sensor: Working Principle & Applications Robocraze. Available online: https://robocraze.com/blogs/post/what-is-ultrasonic-sensor (accessed on 15 October 2024).
  10. Infrared Sensor—IR Sensor: Sensor Division Knowledge Infrared Sensor—IR Sensor. Sensor Division Knowledge. Available online: http://www.infratec.eu/sensor-division/service-support/glossary/infrared-sensor/ (accessed on 20 March 2024).
  11. Devasia, A. Infrared Sensor: What Is It & How Does It Work, Safe and Sound Security. 2023. Available online: https://getsafeandsound.com/blog/infrared-sensor/ (accessed on 20 March 2024).
  12. Utama, H.; Kevin; Tanudjaja, H. Smart Street Lighting System with Data Monitoring. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Chandigarh, India, 28–30 August 2020. Available online: https://iopscience.iop.org/article/10.1088/1757-899X/1007/1/012148 (accessed on 20 March 2024).
  13. Yokoishi, T.; Mitsugi, J.; Nakamura, O.; Murai, J. Room occupancy determination with particle filtering of networked pyroelectric infrared (PIR) sensor data. In Proceedings of the Sensors, 2012 IEEE, Taipei, Taiwan, 28–31 October 2012; pp. 1–4. [Google Scholar] [CrossRef]
  14. Laser Sensor Explained: Types and Working Principles—Realpars RSS. Available online: https://www.realpars.com/blog/laser-sensor (accessed on 22 March 2024).
  15. Laser Sensors Keyence. Available online: https://www.keyence.com/products/sensor/laser/ (accessed on 22 March 2024).
  16. Huang, X.; Tsoi, J.K.P.; Patel, N. mmWave Radar Sensors Fusion for Indoor Object Detection and Tracking. Electronics 2022, 11, 2209. [Google Scholar] [CrossRef]
  17. The Fundamentals of Millimeter Wave Radar Sensors (rev. A). Available online: https://www.ti.com/lit/spyy005 (accessed on 22 March 2024).
  18. Ti Developer Zone. Available online: https://dev.ti.com/tirex/explore/node?node=A__AXNV8Pc8F7j2TwsB7QnTDw__RADAR-ACADEMY__GwxShWe__LATEST (accessed on 27 March 2024).
  19. Soumya, A.; Krishna Mohan, C.; Cenkeramaddi, L.R. Recent Advances in mmWave Radar-Based Sensing, Its Applications, and Machine Learning Techniques: A Review. Sensors 2023, 23, 8901. [Google Scholar] [CrossRef] [PubMed]
  20. Shastri, S.; Arjoune, Y.; Amraoui, A.; Abou Abdallah, N.; Pathak, P.H. A Review of Millimeter Wave Device-Based Localization and Device-Free Sensing Technologies and Applications. IEEE Commun. Surv. Tutor. 2022, 24, 1708–1749. [Google Scholar] [CrossRef]
  21. Wang, Z.; Dong, Z.; Li, J.; Zhang, Q.; Wang, W.; Guo, Y. Human activity recognition based on millimeter-wave radar. In Proceedings of the 2023 5th International Conference on Frontiers Technology of Information and Computer (ICFTIC), Qiangdao, China, 17–19 November 2023; pp. 360–363. [Google Scholar] [CrossRef]
  22. Amar, R.; Alaee-Kerahroodi, M.; Mysore, B.S. FMCW-FMCW Interference Analysis in mm-Wave Radars; An indoor case study and validation by measurements. In Proceedings of the 2021 21st International Radar Symposium (IRS), Berlin, Germany, 21–22 June 2021; pp. 1–11. [Google Scholar] [CrossRef]
  23. Pearce, A.; Zhang, J.A.; Xu, R.; Wu, K. Multi-Object Tracking with mmWave Radar: A Review. Electronics 2023, 12, 308. [Google Scholar] [CrossRef]
  24. Devnath, M.K.; Chakma, A.; Anwar, M.S.; Dey, E.; Hasan, Z.; Conn, M.; Pal, B.; Roy, N. A Systematic Study on Object Recognition Using Millimeter-wave Radar. In Proceedings of the 2023 IEEE International Conference on Smart Computing (SMARTCOMP), Nashville, TN, USA, 26–30 June 2023; pp. 57–64. [Google Scholar] [CrossRef]
  25. Sonny, A.; Kumar, A.; Cenkeramaddi, L.R. Carry Object Detection Utilizing mmWave Radar Sensors and Ensemble-Based Extra Tree Classifiers on the Edge Computing Systems. IEEE Sens. J. 2023, 23, 20137–20149. [Google Scholar] [CrossRef]
  26. TI. Available online: https://www.ti.com/lit/ds/symlink/iwr6843aop.pdf?ts=1617800733758&ref_url=https%253A%252F%252Fwww.ti.com%252Fproduct%252FIWR6843AOP (accessed on 5 April 2024).
  27. Ti Developer Zone. Available online: https://dev.ti.com/tirex/explore/node?devtools=AWR6843AOPEVM&node=A__AGnx4WbbqMvEcH9P.cgwvg__com.ti.mmwave_devtools__FUz-xrs__LATEST (accessed on 6 April 2024).
  28. IWR6843AOPEVM Evaluation Board. TI.com. Available online: https://www.ti.com/tool/IWR6843AOPEVM (accessed on 7 April 2024).
  29. Chakraborty, S.; Nagwani, N.K. Performance Comparison of Incremental K-Means and Incremental DBSCAN Algorithms. Int. J. Comput. Appl. 2011, 27, 11–16. [Google Scholar] [CrossRef]
  30. TI. Available online: https://www.ti.com/product/IWR6843AOP (accessed on 5 April 2024).
  31. Digital Front—An Overview. ScienceDirect Topics. Available online: https://www.sciencedirect.com/topics/engineering/digital-front (accessed on 20 May 2024).
  32. Best Practices for Placement and Angle of mmwave Radar. Available online: https://www.ti.com/lit/pdf/swra758 (accessed on 10 July 2024).
  33. MmWave Demo Visualizer. Available online: https://dev.ti.com/gallery/view/mmwave/mmWave_Demo_Visualizer/ver/3.5.0/ (accessed on 5 April 2024).
  34. Ti Developer Zone. Available online: https://dev.ti.com/tirex/explore/node?a=1AslXXD__1.00.01.07&node=A__ADnbI7zK9bSRgZqeAxprvQ__radar_toolbox__1AslXXD__1.00.01.07 (accessed on 5 April 2024).
  35. Mmwave SDK User Guide Scribd. Available online: https://www.scribd.com/document/653462094/mmwave-sdk-user-guide (accessed on 5 April 2024).
  36. Paramita, A.S.; Hariguna, T. Comparison of K-Means and DBSCAN Algorithms for Customer Segmentation in E-commerce. J. Digit. Mark. Digit. Curr. 2024, 1, 43–62. [Google Scholar] [CrossRef]
Figure 1. Principle of laser technology (e.g., in orange, different blocks circulating on a conveyor belt of a production line in a given direction, indicated by the black arrow; the mmWave sensor coverage is indicated by the blue dashed line).
Figure 3. Proposed system flowchart.
Figure 4. Radar mmWave IWR6843AOP (adapted from [28]).
Figure 5. System architecture.
Figure 6. Field of view of the radar (adapted from [32]).
Figure 7. UART datagram message (adapted from [34]).
Figure 8. Visualization of the plots generated by the data received from the sensor.
Figure 9. Sensor stabilization with a tripod.
Figure 10. First test. (a) Test scenario with one object, in this case, a box. (b) Result of the test in a two-dimensional graph.
Figure 11. Second test. (a) Test scenario with two objects, in this case, two boxes. (b) Result of the test in a two-dimensional graph.
Figure 12. First test after configuration changes. (a) Test scenario with one object, in this case, one box. (b) Result of the test in a three-dimensional graph.
Figure 13. Second test after configuration changes. (a) Test scenario with two objects, in this case, two boxes. (b) Test result in a three-dimensional graph.
Figure 14. First test after DBSCAN and data processing to represent the real dimensions of objects. (a) Test scenario with one object, in this case, one box. (b) Result of the test in a three-dimensional graph.
Figure 15. Second test after DBSCAN and data processing to represent the real dimensions of objects. (a) Test scenario with one object, in this case, one box. (b) Result of the test in a three-dimensional graph.
Figure 16. First final test. (a) Test scenario with one object, in this case, one box. (b) Test result in a three-dimensional graph.
Figure 17. Second final test. (a) Test scenario with two equal objects, in this case, two boxes. (b) Result of the test in a three-dimensional graph.
Figure 18. Third final test. (a) Test scenario with two different objects, in this case, two boxes. (b) Test result in a three-dimensional graph.
Figure 19. Fourth final test. (a) Test scenario with two different objects, in this case, two boxes. (b) Test result in a three-dimensional graph.
Figure 20. Last final test. (a) Test scenario with three different objects, in this case, three boxes. (b) Test result in a three-dimensional graph.
Table 1. Structure of the frame header fields (adapted from [34]).
Value | Type | Bytes | Details
Magic word | uint16_t | 8 | Output buffer magic word (sync word), initialized to, e.g., 0x0102
Version | uint32_t | 4 | SDK version
Total Packet Length | uint32_t | 4 | Total packet length including frame header
Platform | uint32_t | 4 | Device type, e.g., 0xA6843
Frame Number | uint32_t | 4 | Frame number
Time [in CPU cycles] | uint32_t | 4 | Time in CPU cycles when the message was created
Num Detected Obj | uint32_t | 4 | Number of detected objects (points) for the frame
Num TLVs | uint32_t | 4 | Number of TLV items for the frame
Subframe Number | uint32_t | 4 | 0 if advanced subframe mode is not enabled, otherwise in the range 1 to number of frames − 1
Table 2. Structure of the TLV header fields (adapted from [34]).
Value | Type | Bytes | Details
Type | uint32_t | 4 | Indicates the type of the message contained in the payload
Length | uint32_t | 4 | Length of the payload in bytes (not including the TLV header)
Table 3. Different types of TLVs (adapted from [34]).
Type Identifier | Value Type
1 | Detected Points
2 | Range Profile
3 | Noise Floor Profile
4 | Azimuth Static Heatmap
5 | Range-Doppler Heatmap
6 | Statistics (Performance)
7 | Side Info for Detected Points
8 | Azimuth/Elevation Static Heatmap
9 | Temperature Statistics
Table 4. Payload of TLV type 1 (adapted from [34]).
Value | Type | Bytes
X [m] | float | 4
Y [m] | float | 4
Z [m] | float | 4
Doppler [m/s] | float | 4
Table 5. Payload of TLV type 7 (adapted from [34]).
Value | Type | Bytes
SNR [dB] | uint16_t | 2
Doppler [m/s] | uint16_t | 2
Table 6. Sensor configuration parameters: original and optimized values.
Parameter | Old Value | Optimized Value | Description
aoaFovCfg | −90, 90, −90, 90 | −60, 60, −60, 60 | Restricts the radar field of view to increase precision
cfarCfg (procDirection = 1) | Threshold = 15 | Threshold = 18 | Adjusts the CFAR threshold for Doppler detection
cfarCfg (procDirection = 0) | Threshold = 15, Peak grouping = 1 | Threshold = 14, Peak grouping = 0 | Adjusts the CFAR threshold for range detection and disables peak grouping
staticBoundaryBox | Parameter added to the configuration file | X-min = −1, X-max = 1, Y-min = 0, Y-max = 1.5, Z-min = −0.35, Z-max = 1 | Defines a specific area where the load is expected to remain static
sensorPosition | Parameter added to the configuration file | Height = 0.34, AzimTilt = 0, ElevTilt = 0 | Specifies the radar sensor placement
cfarFovCfg (procDirection = 1) | No changes needed | No changes needed | Filters detected points outside the specified limits in the range or Doppler direction
cfarFovCfg (procDirection = 0) | max = 8.92 | max = 1.5 | As above
Table 7. Nominal error per axis.
Experiment | X Measured (m) | X Actual (m) | X Error (%) | Y Measured (m) | Y Actual (m) | Y Error (%) | Z Measured (m) | Z Actual (m) | Z Error (%)
1 | 0.5717 | 0.45 | 27 | 0.297 | 0.35 | 15 | 0.435 | 0.44 | 1.13
2 | 0.853 | 0.905 | 5.75 | 0.391 | 0.35 | 11.7 | 0.4 | 0.435 | 8
3 | 0.444 | 0.45 | 1.33 | 0.385 | 0.355 | 8.45 | 0.6282 | 0.625 | 0.51
4 | 0.7467 | 0.62 | 20.44 | 0.2877 | 0.35 | 17.8 | 0.58324 | 0.55 | 6.04
5 | 0.56552 | 0.62 | 8.787 | 0.3867 | 0.35 | 10.48 | 0.6214 | 0.63 | 1.365
Table 8. Volume error analysis.
Experiment | X Range (m) | Y Range (m) | Z Range (m) | Sensor Volume (m³) | Actual Volume (m³) | Error (%)
1 | Min: −0.1717, Max: 0.4 | Min: 0.5979, Max: 0.89538 | Min: −0.349, Max: 0.08585 | 0.074 | 0.0693 | 6.78
2 | Min: −0.78355, Max: 0.0695 | Min: 0.61, Max: 1 | Min: −0.324, Max: 0.0736 | 0.1331 | 0.1378 | 3.53
3 | Min: −0.38155, Max: 0.0627 | Min: 0.61, Max: 0.993 | Min: −0.37065, Max: 0.2575 | 0.1074 | 0.0998 | 7.61
4 | Min: −0.5178, Max: 0.22893 | Min: 0.5943, Max: 0.882 | Min: −0.2725, Max: 0.3107 | 0.1253 | 0.1194 | 4.94
5 | Min: −0.34476, Max: 0.22076 | Min: 0.5655, Max: 0.9522 | Min: −0.2453, Max: 0.3761 | 0.1359 | 0.1367 | 0.6
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
