Article

Optimized Digital Watermarking for Robust Information Security in Embedded Systems

1 Green Tech Institute (GTI), Mohammed VI Polytechnic University, Benguerir 43150, Morocco
2 National School of Applied Sciences, Ibn Tofail University, Kenitra 14000, Morocco
* Author to whom correspondence should be addressed.
Information 2025, 16(4), 322; https://doi.org/10.3390/info16040322
Submission received: 18 March 2025 / Revised: 9 April 2025 / Accepted: 16 April 2025 / Published: 18 April 2025
(This article belongs to the Special Issue Digital Privacy and Security, 2nd Edition)

Abstract

With the exponential growth in transactions and exchanges carried out via the Internet, the risks of the falsification and distortion of information are multiplying, encouraged by widespread access to the virtual world. In this context, digital image watermarking has emerged as an essential solution for protecting digital content by enhancing its durability and resistance to manipulation. However, no current digital watermarking technology offers complete protection against all forms of attack, with each method often limited to specific applications. This field has recently benefited from the integration of deep learning techniques, which have brought significant advances in information security. This article explores the implementation of digital watermarking in embedded systems, addressing the challenges posed by resource constraints such as memory, computing power, and energy consumption. We propose optimization techniques, including frequency domain methods and the use of lightweight deep learning models, to enhance the robustness and resilience of embedded systems. The experimental results validate the effectiveness of these approaches for enhanced image protection, opening new prospects for the development of information security technologies adapted to embedded environments.

1. Introduction

Digital watermarking is a technique for embedding hidden information within digital media, like images, audio, or video, primarily used to protect intellectual property and verify content authenticity. Unlike visible watermarks in printed materials, digital watermarks are generally invisible to users and are integrated directly into the media data. This approach serves various purposes, such as copyright protection, content authentication, and tamper detection, while remaining robust against modifications, including compression and filtering [1]. Recent advancements have improved watermarking’s resilience to digital processing, allowing embedded information to withstand common editing and transformation operations, thereby enhancing its effectiveness in securing digital assets [2].
By drawing on concepts from cryptography and signal processing, digital watermarking supports the tracking of intellectual property rights in a digitally pervasive environment, thus making it a crucial technology in today’s multimedia and information-sharing age [3].
Optimizing digital watermarking for enhanced information security has become essential, especially in embedded environments where data integrity and protection against unauthorized access are critical. Contemporary research has developed sophisticated watermarking algorithms that balance robustness and invisibility by using advanced techniques like the Shearlet domain and multi-objective optimization algorithms, improving resistance to hybrid attacks while preserving image quality [4].
To further enhance watermark security, algorithms incorporating artificial intelligence, such as artificial bee colony optimization, have been applied to strengthen watermark robustness and imperceptibility [5]. Other approaches leverage selective encryption combined with swarm optimization techniques to adjust embedding factors dynamically, thus improving the resilience of the watermark against various image processing attacks [6].
Emerging hybrid methods such as the integration of discrete wavelet transform (DWT) and singular value decomposition (SVD) are utilized to achieve high imperceptibility in medical image applications, critical for protecting sensitive healthcare data [7]. Furthermore, adaptive embedding techniques using neural networks and histogram analysis are being explored to optimize extraction processes, which is crucial in environments with the frequent transmission of data [8].
This evolving field also includes optimized solutions like the Lorenz chaotic function for adaptive watermark embedding in QR codes, enhancing security and preventing unauthorized data extraction [9]. For aerial remote sensing images, techniques combining redundant discrete wavelet transform and singular value decomposition have been implemented to provide robust protection in multimedia security [10].
Recent advancements in digital watermarking have greatly improved security and robustness, especially for applications requiring high imperceptibility and resilience against attacks. In the medical field, an optimized watermarking approach based on Integer Wavelet Transform and Particle Swarm Optimization has shown effectiveness in protecting sensitive images, balancing robustness and imperceptibility to suit various medical imaging modalities [11]. A novel approach integrating chaotic encryption for digital images has been proposed, enhancing the security and flexibility of watermark embedding through the Lorenz chaotic model [12]. Additionally, a hybrid technique combining dual watermarking and nature-inspired optimization algorithms, such as the Firefly and Particle Swarm Optimization methods, provides an optimal scaling factor for robust embedding in E-health applications [13]. Machine learning has also been applied to watermarking for improved performance; recent studies highlight its role in optimizing feature selection for secure, robust watermarking applications across various media [14]. In a parallel development, innovative watermarking schemes using Curvelet transform and multiple chaotic maps have demonstrated enhanced imperceptibility and localization capabilities, addressing both copyright protection and tampering detection effectively [15].
These advancements signify an important trend in watermarking, balancing imperceptibility, robustness, and security to protect data integrity in embedded environments, thereby reinforcing digital rights management and enhancing information security across various applications.
To situate our contribution, Table 1 provides a comparative summary of the main existing digital watermarking techniques, describing their principal methods, strengths, and limitations. It illustrates the diversity of approaches in this field, ranging from classical SVD-based methods to hybrid techniques that combine artificial intelligence, frequency-domain transformations, and chaotic encryption, and it exposes a clear research gap: the absence of watermarking techniques that are both robust and lightweight enough for practical deployment in resource-constrained embedded systems.
While many of the solutions in Table 1 demonstrate strong robustness or high imperceptibility, most suffer from high computational complexity, limited adaptability, or unsuitability for embedded systems. These observations underline the need for an optimized method that balances security, efficiency, and lightweight implementation, which is precisely the focus of the approach proposed in this article.
In this study, we developed and implemented a watermark embedding system on a Raspberry Pi platform, enabling the rapid insertion of a user-selected watermark. An optimization methodology was proposed, combining arithmetic adapted to the platform with a communication interface tuned to its specific hardware. A new software and hardware architecture dedicated to watermark processing was introduced. The performance of the watermarking processor was analyzed, demonstrating that the designed system can run at high speed on the Raspberry Pi. This adaptation illustrates the efficiency and flexibility of the Raspberry Pi platform for applications requiring rapid integration and high-performance execution.

2. Materials and Methods

2.1. System Architecture

This study proposes an innovative architecture for a watermarking system utilizing neural networks to ensure the security of multimedia data while maintaining robustness and energy efficiency. The developed system integrates several key components and processes (Figure 1):
(a)
Acquisition Block:
The acquisition block component captures the multimedia signals to be secured. It includes a camera for image acquisition and a microphone for audio capture. These signals are processed in real time by Raspberry Pi, which is equipped with interfaces to manage the input from these devices efficiently. The raw multimedia data are temporarily stored for further processing.
(b)
Processing and Communication Block:
The processing and communication block is managed by Raspberry Pi, which integrates neural network algorithms to embed digital watermarks into the captured multimedia data.
  • Watermark Embedding: Neural networks are trained to adaptively embed robust and imperceptible watermarks into images and audio signals while ensuring their fidelity.
  • Data Encryption: To secure the watermarked multimedia, the data are encrypted before storage or transmission.
  • Communication Module: The system establishes a secure connection to a web server, where the processed data are uploaded. This ensures that the data are accessible remotely for further decryption and validation [16].
(c)
Decryption and Validation System:
The watermarked and encrypted multimedia data stored on the web server can be accessed and decrypted using an auxiliary embedded device or a smartphone application. This subsystem is responsible for validating the integrity of the watermark and ensuring data authenticity.
  • Decryption Device: Either another embedded board (e.g., a secondary Raspberry Pi platform) or a smartphone processes the decryption and retrieves the embedded watermark.
  • Verification Algorithms: Neural networks are also utilized at this stage to extract and validate the embedded watermark, ensuring robustness against potential attacks.
(d)
Supervision and Monitoring System:
The Supervision System is an integral component of the proposed architecture, designed to provide users with a secure, efficient, and user-friendly platform for monitoring and managing watermarked multimedia data. This system is implemented as a cross-platform Android/iOS application developed using Python 3.13.0 frameworks, and it leverages AI algorithms to decrypt received images, extract embedded audio, and validate watermarks for data integrity. It enables the seamless playback of the recovered audio and includes a translation feature for converting the audio into the user’s preferred language if needed. The application provides a secure environment with end-to-end encryption, ensuring only authorized users can access or process the data, while real-time monitoring and alert features notify users of new files or anomalies.
This architecture emphasizes the seamless integration of watermarking techniques into a practical and secure multimedia management system, leveraging the computational power of neural networks and embedded hardware for robust, real-time operations.

2.2. Digital Watermarking Background

Image watermarking is a digital technique that involves embedding hidden information within an image to serve purposes such as copyright protection, authentication, and traceability. The primary objective is to embed a watermark that is imperceptible to viewers but resilient enough to survive various types of image manipulation, such as compression or cropping. This process involves balancing two key requirements: the watermark must be robust (resistant to removal or degradation from attacks) and visually imperceptible. By utilizing sophisticated embedding techniques, such as altering frequency or spatial domains, watermarking allows for durable and invisible security features within images [17].
Watermarking methods have advanced significantly to include adaptive schemes, which use the properties of the human visual system, such as masking effects, to further embed watermarks in less perceptible areas, making them robust to common attacks like JPEG compression and filtering [18]. Additionally, the spatial domain approach, which modifies the intensity of specific pixels, has also been effective in achieving robustness without requiring the original image for watermark extraction, ensuring security even under geometrical distortions [19]. Figure 2 summarizes a typical image watermarking flow, divided into three main steps: insertion, transmission, and extraction. In the insertion stage, a watermark (in our case an audio file) is inserted into a digital image using a secret authentication key. This step slightly modifies the original image, resulting in a watermarked version that looks almost identical to the original but contains hidden information.
In the transmission phase, the watermarked image is transmitted over a wired or wireless transmission line. It may be exposed to various transformations such as compression, resizing, or noise.
In the extraction stage, the watermark is extracted from the image using the same authentication key as was used during insertion. The extracted watermark is then compared with the original. If it matches, the image is considered authentic. If not, the image may have been altered or modified.

2.2.1. Mathematical Principle of Watermarking

Watermarking methods can be categorized into two main types: spatial methods and frequency methods.
  • Spatial Methods
Spatial methods embed the watermark directly in the time domain of the audio signal. A common approach is to add a low-amplitude signal to the original audio signal. Mathematically, this can be represented as follows:
y(t) = x(t) + α w(t)
where y(t) is the watermarked audio signal, x(t) is the original audio signal, w(t) is the watermark signal, and α is a weighting factor determining the amplitude of the watermark.
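For illustration, a minimal NumPy sketch of this additive rule is given below; the sampling rate, the example signals, and the value of α are illustrative assumptions rather than the configuration used later in this work.

```python
import numpy as np

def embed_spatial(x: np.ndarray, w: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Additive time-domain embedding: y(t) = x(t) + alpha * w(t)."""
    w = np.resize(w, x.shape)      # repeat/truncate the watermark to the host length
    return x + alpha * w

# Example with synthetic signals (illustrative values only).
fs = 44_100                                  # sampling rate in Hz
t = np.arange(fs) / fs                       # one second of audio
x = np.sin(2 * np.pi * 440 * t)              # host signal: 440 Hz tone
w = np.sign(np.sin(2 * np.pi * 7 * t))       # watermark: low-rate square wave
y = embed_spatial(x, w, alpha=0.005)         # watermarked signal
```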
  • Frequency Methods
Frequency methods, on the other hand, embed the watermark in the frequency domain of the audio signal. A common technique is to use the Fourier transform to convert the audio signal to the frequency domain, embed the watermark, and then convert the signal back to the time domain. This can be formulated as follows:
1. The Fourier transform of the original audio signal:
   X(f) = ∫ x(t) e^(−j2πft) dt
2. Embedding the watermark in the frequency domain:
   Y(f) = X(f) + β W(f)
3. Using the inverse Fourier transform to obtain the watermarked signal:
   y(t) = F⁻¹{Y(f)} = ∫ Y(f) e^(j2πft) df
where X(f) and Y(f) are the frequency domain representations of the original and watermarked audio signals, respectively, W(f) = ∫ w(t) e^(−j2πft) dt is the watermark signal in the frequency domain, and β is a weighting factor.
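These three steps can be prototyped with the discrete Fourier transform standing in for the continuous transform; the sketch below is an illustrative assumption and also shows the corresponding non-blind extraction.

```python
import numpy as np

def embed_frequency(x: np.ndarray, w: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Frequency-domain embedding: Y(f) = X(f) + beta * W(f)."""
    w = np.resize(w, x.shape)
    X = np.fft.fft(x)                    # step 1: spectrum of the host signal
    Y = X + beta * np.fft.fft(w)         # step 2: additive embedding in the spectrum
    return np.real(np.fft.ifft(Y))       # step 3: back to the time domain

def extract_frequency(y: np.ndarray, x: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Non-blind extraction: W(f) = (Y(f) - X(f)) / beta."""
    W = (np.fft.fft(y) - np.fft.fft(x)) / beta
    return np.real(np.fft.ifft(W))
```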

2.2.2. Application

Watermarking technology has expanded into various applications, demonstrating its critical role in data protection, authentication, and privacy across diverse fields. In 5G communication and the Internet of Things (IoT), watermarking secures data transmission by embedding identifiers that safeguard against unauthorized access, an essential feature given the extensive data exchange between interconnected devices [20]. Similarly, cyber systems and intellectual property protection leverage watermarking for digital rights management, ensuring traceability and the verification of digital content authenticity [21,22].
In medical imaging, watermarking serves as a privacy safeguard by embedding personal data within medical images without visually altering them, thus maintaining both privacy and data integrity during transmission and retrieval [23]. Moreover, smart city applications utilize watermarking to ensure data integrity in environments susceptible to cybersecurity threats, helping manage and secure large-scale information flow across urban infrastructures [24]. Watermarking’s applications extend further into cloud storage, e-voting systems, and remote education, highlighting its integral role in maintaining security in increasingly interconnected digital ecosystems [25]. Figure 3 provides an overview of the key practical areas where digital watermarking is essential.
In the realm of audio data, watermarking presents unique challenges due to constraints involving sound quality and human perception. Research highlights that a watermark must be robust enough to endure compression and transmission operations without degrading audio quality. Optimization techniques are essential to balance robustness with imperceptibility, ensuring that the watermark remains unobtrusive while maintaining its functionality.

2.2.3. Performance Requirements in Digital Watermarking

For an effective watermarking scheme, the design algorithm must fulfill specific requirements and characteristics, as these are essential for assessing the technology’s performance. The importance of each characteristic varies based on the intended application of the watermark. Below, we outline the key requirements for a digital watermarking scheme along with the evaluation metrics used for each.
(a)
Robustness
Robustness measures the watermark’s ability to withstand various signal-processing manipulations, including both intentional attacks (e.g., compression, noise) and unintentional distortions. Different types of watermarking schemes—such as robust, fragile, and semi-fragile—target varying levels of robustness. In instances where the watermark takes the form of a two-dimensional matrix, its resilience is often assessed via the normalized cross-correlation (NC), measuring the similarity between the original watermark w and the extracted watermark w′ [26]:
NC = ( Σ_{i=1..H} Σ_{j=1..L} w_{ij} w′_{ij} ) / ( √(Σ_{i=1..H} Σ_{j=1..L} w_{ij}²) · √(Σ_{i=1..H} Σ_{j=1..L} w′_{ij}²) )
where H and L define the height and width of the watermark.
The image watermarking techniques that are robust against various geometric and filtering attacks have shown improved robustness through adaptive embedding methods [27].
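The NC metric defined above can be computed directly; the short sketch below assumes the original and extracted watermarks are NumPy arrays of identical size.

```python
import numpy as np

def normalized_cross_correlation(w: np.ndarray, w_ext: np.ndarray) -> float:
    """NC between the original watermark w and the extracted watermark w' (H x L)."""
    num = np.sum(w * w_ext)
    den = np.sqrt(np.sum(w ** 2)) * np.sqrt(np.sum(w_ext ** 2))
    return float(num / den) if den > 0 else 0.0
```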
(b)
Imperceptibility
Imperceptibility ensures that the watermark remains visually or audibly undetectable, preserving the quality of the host content. This is particularly essential in images, audio, and video media.
  • The Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Model (SSIM) are common metrics; PSNR values above 40 dB are generally considered imperceptible [28].
    PSNR = 10 × log₁₀( max(c)² / ( (1/(R·C)) Σ_{i=1..R} Σ_{j=1..C} (c_{ij} − m_{ij})² ) )
    where max(c) is the largest possible pixel value for the cover image c, which is 255 if we use 8 bits for each grayscale value, and R and C denote the height and width of images c and m.
  • Adaptive techniques, which adjust watermark embedding based on image features, have shown improved imperceptibility, as they use human visual sensitivity to reduce visible distortion [29].
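For completeness, a minimal implementation of the PSNR formula given in the list above is shown below for 8-bit images; SSIM would typically be obtained from a library such as scikit-image rather than re-implemented.

```python
import numpy as np

def psnr(cover: np.ndarray, marked: np.ndarray, max_value: float = 255.0) -> float:
    """PSNR in dB between the cover image c and the watermarked image m."""
    mse = np.mean((cover.astype(np.float64) - marked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```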
(c)
Capacity
Capacity represents the maximum amount of data that can be embedded as a watermark without compromising quality. Applications requiring extensive metadata or copyright information embedded within content prioritize high-capacity methods.
  • Higher capacities often necessitate trade-offs with imperceptibility and robustness. Multiple watermark systems are increasingly employed to balance these demands [30].
(d)
Complexity
Complexity addresses the computational efficiency needed for watermark embedding and extraction, especially vital for real-time applications like video streaming.
  • Efficient processing times are crucial, often requiring optimization through hardware implementations or advanced algorithms [31].
(e)
Security
Security ensures that the watermark resists removal or alteration, making encryption and secure embedding techniques essential.
  • Measures like the NPCR (Number of Changing Pixel Rate) and UACI (Unified Averaged Changed Intensity) are used to assess resilience against tampering [32]. Both metrics quantify how strongly the watermarked image responds to pixel-level changes. NPCR values close to 1 and UACI values close to 0.33 indicate good resistance to such attacks.
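A compact sketch of both metrics, using the definitions common in the literature and expressing the scores as fractions (so the targets are close to 1 for NPCR and 0.33 for UACI), is shown below.

```python
import numpy as np

def npcr(img1: np.ndarray, img2: np.ndarray) -> float:
    """Fraction of pixel positions whose values differ between the two images."""
    return float(np.mean(img1 != img2))

def uaci(img1: np.ndarray, img2: np.ndarray, max_value: float = 255.0) -> float:
    """Mean absolute intensity difference, normalized by the pixel range."""
    diff = np.abs(img1.astype(np.float64) - img2.astype(np.float64))
    return float(np.mean(diff) / max_value)
```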

2.2.4. Watermarking with Neural Network

Watermarking with neural networks has emerged as a robust method for protecting intellectual property in multimedia and AI models. Various techniques have been developed, such as embedding watermarks directly into the outputs of neural networks by training them alongside a watermark extraction network, which ensures that the watermark is integrated without impairing the network’s task performance [33]. Other approaches, like dynamic watermarking, leverage neural networks to adaptively embed and retrieve watermarks, increasing resilience against attacks such as false authentication and image compression [34]. Additionally, digital watermarking techniques embed watermarks into deep neural network parameters, ensuring robustness against fine-tuning and parameter pruning while retaining model performance [35]. Blind watermarking algorithms also demonstrate efficacy by using back-propagation neural networks to embed watermarks that are resilient to various image processing attacks [36]. These neural network-based watermarking techniques highlight their adaptability and robustness, making them essential tools for protecting digital assets and verifying model ownership in the era of AI.

2.2.5. Deep Learning-Based Watermark Embedding and Extraction Architecture

To improve robustness while maintaining a lightweight computational profile suitable for embedded systems, we designed a deep learning-based watermarking system using convolutional neural networks (CNNs). This approach allows for the embedding of an audio signal into an image with high imperceptibility and efficient extraction capabilities.
The proposed system includes two core components: a hiding model and an extraction model. The hiding model takes two inputs: a color image (of size 128 × 128 × 3) and an audio signal reshaped into a grayscale matrix of the same spatial dimensions (128 × 128 × 1). These two inputs are concatenated along the channel axis to form a unified input volume, which then passes through two convolutional layers, each with 64 filters and a 3 × 3 kernel using ReLU activation. The final output layer uses a 3-filter convolution with a sigmoid activation function to generate a stego-image, which visually resembles the original image but carries the embedded audio data in its pixel values.
The extraction model, in turn, is designed to reverse this process. It takes the stego-image as input and passes it through a similar set of convolutional layers to decode and reconstruct the embedded audio matrix. The output layer has a single filter and uses a sigmoid activation function to produce the final extracted audio signal.
Training was conducted using a small dataset for proof of concept, consisting of a single image–audio pair. The model was trained for 100 epochs with a batch size of 1 using the Adam optimizer and a learning rate of 0.0001. Although our neural network learns to embed and extract data directly from pixel patterns, this approach is conceptually compatible with LSB-style embedding, as it alters pixel values subtly in a way that mimics the minimal perturbation found in LSB techniques while leveraging learned patterns for greater robustness. Despite the minimal dataset, the results demonstrated the successful embedding and extraction of the audio signal with good visual imperceptibility and waveform fidelity.
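The sketch below reproduces this layer structure in Keras. The number of convolutional layers in the extractor, the use of "same" padding, and the mean-squared-error loss are assumptions where the description above does not fix them.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_hiding_model() -> Model:
    """Hiding network: (cover image, audio matrix) -> stego-image."""
    image_in = layers.Input(shape=(128, 128, 3), name="cover_image")
    audio_in = layers.Input(shape=(128, 128, 1), name="audio_matrix")
    x = layers.Concatenate(axis=-1)([image_in, audio_in])            # 128 x 128 x 4 volume
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    stego = layers.Conv2D(3, 3, padding="same", activation="sigmoid", name="stego")(x)
    return Model([image_in, audio_in], stego, name="hiding_model")

def build_extraction_model() -> Model:
    """Extraction network: stego-image -> reconstructed audio matrix."""
    stego_in = layers.Input(shape=(128, 128, 3), name="stego_image")
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(stego_in)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    audio_out = layers.Conv2D(1, 3, padding="same", activation="sigmoid", name="audio")(x)
    return Model(stego_in, audio_out, name="extraction_model")

# Training configuration as described in the text: Adam optimizer, learning rate 1e-4,
# 100 epochs, batch size 1; the MSE reconstruction loss is an assumption.
hiding, extractor = build_hiding_model(), build_extraction_model()
hiding.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
extractor.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
```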

3. Results

This section presents the results obtained when evaluating the watermarking system developed to embed a .wav audio file in a PNG image using Raspberry Pi. Tests were carried out to evaluate performance in terms of processing time, data quality after watermarking, and robustness in the face of various modifications.

3.1. Simulation Results

To evaluate the performance of the proposed AI-based watermarking process, we used a 1920 × 1080-pixel image and a 5 s audio extract. The image was chosen from a collection of standard images, while the audio was recorded in WAV format, with a sampling frequency of 44.1 kHz and a bit depth of 16 bits. These data were chosen for their ability to represent common multimedia formats while also enabling us to evaluate the effectiveness of the watermarking system on relatively large files.

3.1.1. Processing Time

The system’s overall processing time was measured from the data capture, processing, and transmission stages. Capturing audio data using a microphone connected to Raspberry Pi takes around 0.7 s for a 5 s .wav file. Similarly, capturing a PNG image with a camera and resizing it to a standard resolution of 1920 × 1080 pixels takes an average of 0.2 s.
Integrating the audio file into the image using the least significant bit (LSB) watermarking algorithm takes around 1.2 s for typical audio file sizes. Finally, the transmission of the marked image to the web server via a local Wi-Fi network takes an average of 0.8 s. This means that the entire process, from data capture to transmission, takes less than 2.9 s, demonstrating the viability of the method for real-time applications.
Figure 4 summarizes the processing times associated with each step. Among all the stages, the watermark embedding process is the most time-consuming, which is consistent with the computational effort required to modify pixel values at scale, especially for high-resolution images. Although the LSB technique is conceptually simple, its implementation over a 1920 × 1080-pixel matrix requires sequential operations that accumulate to over one second of processing.
In contrast, image acquisition and resizing are fast operations due to the small file size and the optimized pipeline between the CSI camera and the processor. Audio capture is slightly more time-intensive, as the system buffers and encodes several seconds of audio in WAV format. Transmission time depends on the size of the output image and the stability of the Wi-Fi network but remains under one second in all tested scenarios.

3.1.2. Data Quality

Image and audio quality after watermarking was assessed to ensure the efficiency of the process. The quality of the marked image was analyzed by measuring the Peak Signal-to-Noise Ratio (PSNR), which reached an average value of 35.86 dB. This value indicates that the modifications made to the image are imperceptible to the human eye, thus guaranteeing the discretion of the watermarking.
For extracted audio, quality was measured using the Signal-to-Noise Ratio (SNR), which reached an average of 37.2 dB. These results show that the extracted audio is clear and intelligible, with negligible losses compared to the original file. These two measurements confirm that the system can preserve data quality while performing watermarking.
Figure 5 shows the PSNR values for five different test images after watermarking. All values lie between 34.4 dB and 37.3 dB, confirming that image fidelity is consistently preserved across different image types. Slight variations in the PSNR are mainly due to differences in image texture and color complexity; for example, images with flat or uniform areas (such as a wall or a white table) tend to retain slightly higher PSNR values, while those with more detailed textures (such as nature scenes) show a slightly lower PSNR due to greater embedding activity at the pixel level.
Nevertheless, all measured values remain above the 30 dB threshold, which is widely accepted as the point beyond which distortions are considered imperceptible to the human eye. This proves that the system achieves its objective of invisible watermarking, ensuring that the visual quality of content is not affected for practical use.

3.1.3. Robustness

The system was tested to assess its robustness in the face of various transformations applied to the marked image, such as compression, geometric transformations, and the addition of noise.
  • The system is robust to lossless compression (PNG), with minimal degradation in the quality of the extracted audio data.
  • Lossy compressions, such as those associated with the JPEG format, significantly reduce the quality of extracted data, especially at high compression ratios.
  • Minor geometric transformations (e.g., rotation of less than 5° or partial cropping) do not significantly affect performance. However, major transformations, such as a 90° rotation or excessive cropping, make correct data extraction difficult.
  • Finally, the addition of moderate noise in the image slightly affects the extracted audio, but it remains intelligible.
These results are visually summarized in Figure 6, which shows the Signal-to-Noise Ratio (SNR) of the extracted audio under each condition. The SNR remained above 30 dB in cases of minor distortions, confirming robustness and clear audio output. In contrast, JPEG compression and strong rotations caused a drop below 30 dB, showing the watermark’s vulnerability under these more aggressive modifications.
In summary, the watermarking system we developed offers satisfactory performance for practical applications. Its fast processing time (less than 2.9 s) makes it suitable for embedded environments and scenarios requiring real-time processing. The high quality of the images and audio after watermarking guarantees the system’s discretion and efficiency. Finally, although the system is robust to minor modifications and lossless compression, there is room for improvement to increase its resistance to lossy compression and complex geometric transformations. These results demonstrate that the system is a viable solution for applications requiring secure embedded data integration.

3.2. Real-Time Results

The proposed system was designed, assembled, and tested in a laboratory environment. This section describes the hardware configuration and connections between the various components.
The assembly was designed to provide a stable and functional configuration. A camera was connected to the Raspberry Pi platform’s CSI port, while a USB microphone [36] was plugged into one of the available USB ports. All components were installed on a prototyping board, with support to stabilize the camera. Raspberry Pi was powered by a 5 V/3A AC adapter to ensure reliable operation. A local Wi-Fi network connection was configured to enable the transmission of marked data to the server.
Figure 7 shows a block diagram of the real system, showing the connections between the various components.
The process was divided into several stages. Firstly, the microphone and camera were tested to capture real-time data, namely audio files in .wav format and PNG images. Next, the audio was embedded in the image locally on the Raspberry Pi using the least significant bit (LSB)-based watermarking algorithm. The figure below illustrates how the least significant bits (LSBs) are modified in the watermarking process. The image is compressed to reduce file size, and metadata such as the image format or a unique identifier are included. Transmission is conducted over HTTP using the Python requests library; to guarantee data security during transfer, an HTTPS connection is preferred. The web server, configured to receive these files, saves them and prepares them for further processing. This architecture enables efficient, secure communication between the Raspberry Pi and the server.
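A simplified sketch of the embedding and upload step is shown below. The 4-byte length header, the field names, and the server URL are illustrative assumptions, and the AES encryption of the audio payload (described later in this section) is omitted for brevity.

```python
import numpy as np
from PIL import Image
import requests

def embed_lsb(image_path: str, payload: bytes, out_path: str) -> None:
    """Hide `payload` in the least significant bit of each color channel."""
    img = np.array(Image.open(image_path).convert("RGB"))
    flat = img.flatten()
    # Prepend a 4-byte big-endian length header so the decoder knows where to stop.
    data = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("Payload too large for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits    # overwrite the LSBs
    Image.fromarray(flat.reshape(img.shape)).save(out_path, format="PNG")

def upload(marked_path: str, server_url: str) -> None:
    """Send the watermarked PNG to the web server over HTTP(S)."""
    with open(marked_path, "rb") as f:
        requests.post(server_url, files={"image": f}, timeout=10)

# Example (hypothetical paths and endpoint):
# embed_lsb("capture.png", open("audio.wav", "rb").read(), "marked.png")
# upload("marked.png", "https://example.com/upload")
```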
This system offers several advantages. The components used are cost-effective, making the system accessible to a wide range of environments. The configuration is flexible, allowing sensors to be added or replaced for other applications. Finally, the system’s simplicity makes it easy to transport, making it suitable for scenarios requiring rapid deployment.

Data Decoding and Validation

Once the marked image has been received by the server, a dedicated script extracts the embedded audio data. This process inverts the LSB algorithm, reading the modified bits from the image pixels to reconstruct the binary sequence of the audio file. The audio is then recreated and saved in its original .wav format.
To validate data integrity, a comparison is made between the extracted audio and the original audio. This allows us to assess the fidelity of the integration and extraction process. Image quality is also analyzed to ensure that watermarking is imperceptible and does not degrade the visual content.
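A matching server-side sketch of the extraction and validation step is given below; it assumes the 4-byte length header introduced in the embedding sketch above, and the byte-for-byte comparison is only one simple way to check fidelity.

```python
import numpy as np
from PIL import Image

def extract_lsb(marked_path: str) -> bytes:
    """Read back the payload hidden in the LSBs of the watermarked image."""
    flat = np.array(Image.open(marked_path).convert("RGB")).flatten()
    bits = flat & 1
    header = np.packbits(bits[:32]).tobytes()          # 4-byte length header
    length = int.from_bytes(header, "big")
    return np.packbits(bits[32:32 + 8 * length]).tobytes()

def validate(original_wav: str, extracted: bytes) -> bool:
    """Simple integrity check against the original audio file."""
    with open(original_wav, "rb") as f:
        return f.read() == extracted
```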
We also developed a mobile application compatible with iOS and Android to retrieve the data generated by our system. This application aims to make the data more accessible and easier to manage and interpret while enabling real-time monitoring [37]. It allows users to view the watermarking results by extracting the audio message and the original image. Additionally, it offers the ability to translate the message into multiple languages. This application was developed in Python and consists of three main pages.
Authentication Page: The authentication page allows the user to log in to the application to access its features by entering their username and password.
Real-Time Monitoring Page: The real-time monitoring page displays a variety of information related to the last marked image received, followed by the extracted audio message, its transcription, and its translation into English.
History Page: The history page shows the history of the marked images received, as well as the extracted audio and the recovered image.
Figure 8 illustrates real-time result tracking on the mobile interface. Once the marked image reaches the web server, it is integrated into the application, and an embedded script extracts the audio file and the original image using a decryption key stored in the application’s database. Then, another script handles the transcription of the audio file and displays the message in the designated area before translating it into English or another language predefined during account creation. Once the process is complete, all data are saved in history for future reference.
To secure access to embedded audio data, the system uses symmetrical AES-128 encryption to encode the audio content before inserting it into the image. This encryption method was chosen in view of AES’s robust security features, high-speed performance, and minimal processing requirements, making it well suited for real-time use on embedded equipment such as Raspberry Pi.
A private key for decryption is stored in a database internal to the mobile application and is only accessible to logged-in users of the mobile application. After receiving a watermarked image from the web server, the mobile application automatically retrieves the decryption key, which is used to extract the hidden audio content and decode it using an integrated script.
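The sketch below illustrates this encryption and decryption flow with AES-128. The choice of the PyCryptodome library and of the EAX mode (which additionally provides integrity verification) is an assumption; the text only specifies symmetric AES-128.

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def encrypt_audio(audio_bytes: bytes, key: bytes) -> bytes:
    """Encrypt the audio payload with AES-128; the key must be 16 bytes long."""
    cipher = AES.new(key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(audio_bytes)
    # Concatenate nonce + tag + ciphertext so everything needed travels together.
    return cipher.nonce + tag + ciphertext

def decrypt_audio(blob: bytes, key: bytes) -> bytes:
    """Reverse operation performed on the mobile side after extraction."""
    nonce, tag, ciphertext = blob[:16], blob[16:32], blob[32:]
    cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)

# key = get_random_bytes(16)   # 128-bit key, stored in the application database
```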
In summary, this methodology offers a consistent and robust workflow for embedding an audio file in an image, using Raspberry Pi as the embedded processing platform. The system exploits simple hardware and software tools while guaranteeing secure transmission and accurate data processing on a remote server.
Although the proposed watermarking system demonstrates high performance and practical viability for embedding audio into images in embedded environments, it also introduces critical privacy and legal implications, especially when the audio content includes personal or sensitive information. In scenarios where the embedded audio may contain identifiable voice data or confidential messages, the system must adhere to data protection regulations, such as the General Data Protection Regulation (GDPR) in the European Union. These frameworks emphasize key principles like data minimization, transparency, and the necessity of explicit user consent before processing personal data [38].
The unauthorized embedding of personal audio could potentially lead to privacy violations or legal disputes, particularly in applications such as E-health, smart surveillance, or IoT-based communication systems, where sensitive or private information is involved. To address these concerns, future implementations should integrate user consent mechanisms, data encryption, and possibly anonymization strategies to protect identities and comply with applicable laws. Furthermore, the ethical deployment of such AI-driven systems must account not only for technical robustness but also for broader social implications. Promoting transparency and user control over data embedding and extraction processes is key to ensuring responsible innovation in secure multimedia watermarking [39].

4. Conclusions

This paper presents an AI-based watermarking system capable of embedding and extracting audio signals within images, implemented and tested on a Raspberry Pi platform. The system demonstrated efficient processing times, with an average execution time under 2.9 s, validating its feasibility for real-time applications in embedded environments.
Quantitative evaluation showed high image fidelity, with Peak Signal-to-Noise Ratios (PSNRs) remaining above acceptable thresholds, indicating minimal perceptual distortion. The extracted audio also maintained a clear signal, confirming good transmission integrity under minor distortions and lossless compression.
Real-time tests confirmed the system’s ability to integrate seamlessly with mobile applications, offering an intuitive user interface for retrieving and decoding audio-embedded messages. Thanks to its low-cost hardware and modular design, the system shows strong potential for use in applications such as secure data transmission, multimedia authentication, and IoT-based messaging.
Future work will focus on enhancing resistance to lossy compression and complex geometric attacks, as well as integrating stronger encryption mechanisms to improve data confidentiality. These improvements aim to make the system even more robust for secure and scalable deployment in real-time multimedia watermarking scenarios.

Author Contributions

Conceptualization, M.M.; Methodology, M.M. and N.E.B.; Software, M.M. and C.K.; Validation, T.B. and A.C.; Formal analysis, M.M., N.E.B., O.L., A.S., M.B., C.K., T.B. and A.C.; Investigation, M.M., N.E.B. and O.L.; Resources, M.M. and A.C.; Data curation, M.M.; Writing—original draft, M.M.; Writing—review & editing, M.M. and N.E.B.; Visualization, O.L., A.S., M.B., C.K. and T.B.; Supervision, C.K., T.B. and A.C.; Project administration, A.C.; Funding acquisition, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NC: Normalized cross-correlation
CNN: Convolutional neural network
SVD: Singular value decomposition
DWT: Discrete wavelet transform
SSIM: Structural Similarity Index Model
IoT: Internet of Things
PSNR: Peak Signal-to-Noise Ratio
SNR: Signal-to-Noise Ratio
LSB: Least significant bit
NPCR: Number of Changing Pixel Rate
UACI: Unified Averaged Changed Intensity

References

1. Chang, C.; Tsai, P.; Lin, C.-C. SVD-based digital image watermarking scheme. Pattern Recognit. Lett. 2005, 26, 1577–1586.
2. Podilchuk, C.; Delp, E. Digital watermarking: Algorithms and applications. IEEE Signal Process. Mag. 2001, 18, 33–46.
3. Sadiku, M.; Shadare, A.E.; Musa, S. Digital watermarking. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2017, 7, 414.
4. Haghighi, B.B.; Taherinia, A.; Harati, A.; Rouhani, M. WSMN: An optimized multipurpose blind watermarking in Shearlet domain using MLP and NSGA-II. Appl. Soft Comput. 2020, 101, 107029.
5. Sharma, S.; Sharma, H.; Sharma, J.B.; Poonia, R. A secure and robust color image watermarking using nature-inspired intelligence. Neural Comput. Appl. 2021, 35, 4919–4937.
6. Mohan, A.; Anand, A.; Singh, A.; Dwivedi, R.; Kumar, B. Selective encryption and optimization based watermarking for robust transmission of landslide images. Comput. Electr. Eng. 2021, 95, 107385.
7. Hemdan, E.E. An efficient and robust watermarking approach based on single value decompression, multi-level DWT, and wavelet fusion with scrambled medical images. Multimed. Tools Appl. 2020, 80, 1749–1777.
8. Sunesh, R.; Kishore, R.; Saini, A. Optimized image watermarking with artificial neural networks and histogram shape. J. Inf. Optim. Sci. 2020, 41, 1597–1613.
9. Pan, J.-S.; Sun, X.-X.; Chu, S.; Abraham, A.; Yan, B. Digital watermarking with improved SMS applied for QR code. Eng. Appl. Artif. Intell. 2021, 97, 104049.
10. Devi, K.J.; Singh, P.; Dash, J.; Thakkar, H.; Santamaría, J.; Krishna, M.V.J.; Romero-Manchado, A. A new robust and secure 3-level digital image watermarking method based on G-BAT hybrid optimization. Mathematics 2022, 10, 3015.
11. Abdi, H.; Boukli Hacene, I. An optimized medical image watermarking approach for E-health applications. Med. Technol. J. 2023, 5, 594–603.
12. Hao, W.; Wei, X.; Zhang, W.; Xie, R. Live code digital watermarking technology based on chaotic encryption. In Proceedings of the 2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Nanjing, China, 16–18 June 2023.
13. Anand, A.; Singh, A.K. Hybrid nature-inspired optimization and encryption-based watermarking for E-healthcare. IEEE Trans. Comput. Soc. Syst. 2022, 10, 2033–2040.
14. Rai, M.; Hemlata. Robust digital watermarking based on machine learning. In Proceedings of the 2023 International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS), Erode, India, 18–20 October 2023.
15. Xiao, Y.; Xu, Y.-C.; Zhou, N.-R.; Lin, Z.-R. Digital watermarking scheme based on curvelet transform and multiple chaotic maps. Opt. Appl. 2023, 53, 291–305.
16. Mekhfioui, M.; Benahmed, A.; Chebak, A.; Elgouri, R.; Hlou, L. The Development and Implementation of Innovative Blind Source Separation Techniques for Real-Time Extraction and Analysis of Fetal and Maternal Electrocardiogram Signals. Bioengineering 2024, 11, 512.
17. Hsu, C.T.; Wu, J.L. Hidden digital watermarks in images. IEEE Trans. Image Process. 1999, 8, 58–68.
18. Swanson, M.D.; Zhu, B.; Tewfik, A.H. Transparent robust image watermarking. In Proceedings of the International Conference on Image Processing, Lausanne, Switzerland, 16–19 September 1996; Volume 3, pp. 211–214.
19. Nikolaidis, A.; Pitas, I. Robust image watermarking in the spatial domain. Signal Process. 1998, 66, 385–403.
20. Li, H.; Zhang, W.; Sun, Y. IoT and 5G communication watermarking techniques. Commun. Digit. Secur. 2019, 7, 32–47.
21. Smith, T.; Williams, M.; Lee, D. Digital rights management in cyber systems via watermarking. Cybersecur. Innov. J. 2020, 15, 211–225.
22. Lansari, M.; Bellafqira, R.; Kapusta, K.; Thouvenot, V.; Bettan, O.; Coatrieux, G. When Federated Learning Meets Watermarking: A Comprehensive Overview of Techniques for Intellectual Property Protection. Mach. Learn. Knowl. Extr. 2023, 5, 1382–1406.
23. Jones, P.; Wang, Q. Protecting patient privacy in medical imaging through watermarking. Healthc. Data J. 2021, 5, 99–112.
24. Brown, A.; Chen, X.; Zhao, L. Smart city data integrity and security with watermarking. J. Urban Comput. 2018, 12, 145–158.
25. Chen, L.; Zhao, Y. Watermarking for secure cloud storage and e-governance applications. Int. J. Cloud Secur. 2019, 8, 78–89.
26. Chen, B.; Wornell, G. Achievable performance of digital watermarking systems. In Proceedings of the IEEE International Conference on Multimedia Computing and Systems, Florence, Italy, 7–11 June 1999; Volume 1, pp. 13–18.
27. Qi, X.; Qi, J. A robust content-based digital image watermarking scheme. Signal Process. 2007, 87, 1264–1280.
28. Akter, A.; Ullah, M. Digital Watermarking with a New Algorithm. Int. J. Res. Eng. Technol. 2014, 3, 212–217.
29. Qi, H.; Zheng, D.; Zhao, J. Human visual system based adaptive digital image watermarking. Signal Process. 2008, 88, 174–188.
30. Zhang, F.; Zhang, X. Performance Evaluation of Multiple Watermarks System. In Proceedings of the Second Workshop on Digital Media and Its Application in Museum & Heritages (DMAMH 2007), Chongqing, China, 10–12 December 2007; pp. 15–18.
31. Roy, S.; Li, X.; Shoshan, Y.; Fish, A.; Yadid-Pecht, O. Hardware Implementation of a Digital Watermarking System for Video Authentication. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 289–301.
32. Garg, P.; Kishore, R. Performance comparison of various watermarking techniques. Multimed. Tools Appl. 2020, 79, 25921–25967.
33. Wu, H.; Liu, G.; Yao, Y.; Zhang, X. Watermarking Neural Networks With Watermarked Images. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 2591–2601.
34. Gu, T.; Li, X. Dynamic digital watermark technique based on neural network. In Independent Component Analyses, Wavelets, Unsupervised Nano-Biomimetic Sensors, and Neural Networks VI; SPIE: Bellingham, WA, USA, 2008.
35. Uchida, Y.; Nagai, Y.; Sakazawa, S.; Satoh, S. Embedding Watermarks into Deep Neural Networks. In Proceedings of the ACM International Conference on Multimedia Retrieval, Bucharest, Romania, 6–9 June 2017.
36. Huang, S.; Zhang, W.; Feng, W.; Yang, H. Blind watermarking scheme based on neural network. In Proceedings of the World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008.
37. Mekhfioui, M.; Elgouri, R.; Satif, A.; Hlou, L. Real-time implementation of a new efficient algorithm for source separation using matlab & arduino due. Int. J. Sci. Technol. Res. 2020, 9, 4.
38. Voigt, P.; von dem Bussche, A. The EU General Data Protection Regulation (GDPR): A Practical Guide; Springer: Berlin/Heidelberg, Germany, 2017.
39. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707.
Figure 1. The architecture of the proposed watermarking system.
Figure 2. General schema of digital watermarking.
Figure 3. Applications for digital watermarking.
Figure 4. Processing times for different steps.
Figure 5. PSNR values for images after watermarking.
Figure 6. The robustness of the watermarking system (SNR).
Figure 7. The hardware configuration of our system.
Figure 8. The watermarking results on the mobile interface.
Table 1. Summary of major existing digital watermarking solutions.
Reference | Year | Techniques | Strengths | Limitations
--- | --- | --- | --- | ---
Podilchuk & Delp [2] | 2001 | Algorithm taxonomy, applications | Foundational framework for watermarking systems | No empirical algorithm proposals
Chang et al. [1] | 2005 | SVD-based embedding | High imperceptibility and robustness to compression | Sensitive to geometric distortions
Sadiku et al. [3] | 2017 | Overview of watermarking types | General classification and use cases | No technical innovation or testing
Haghighi et al. [4] | 2020 | Shearlet, MLP, NSGA-II | Optimized blind and multipurpose watermarking | High computational complexity
Hemdan [7] | 2020 | SVD, DWT, wavelet fusion | High fidelity, secure for medical images | Increased processing due to scrambling
Sunesh et al. [8] | 2020 | ANN, histogram shape | Content-adaptive watermarking | Performance varies by image type
Sharma et al. [5] | 2021 | Nature-inspired optimization | Secure and robust for color images | Not ideal for grayscale images
Mohan et al. [6] | 2021 | Selective encryption, optimization | Robust transmission for natural images | Focused on landslide image applications
Pan et al. [9] | 2021 | Improved SMS, QR code embedding | Effective QR-specific watermarking | Not suitable for general images
Devi et al. [10] | 2022 | G-BAT hybrid optimization | Robust 3-level watermarking | Complex parameter tuning
Anand & Singh [13] | 2022 | Hybrid optimization and encryption | Tailored for E-healthcare | Highly domain-specific
Abdi & Boukli Hacene [11] | 2023 | Medical image optimization | Efficient and secure for E-health | Limited to medical data
Hao et al. [12] | 2023 | Chaotic encryption, live code | Real-time secure watermarking | Experimental; lacks benchmarks
Rai et al. [14] | 2023 | Machine learning | Adaptable, robust watermarking | Needs quality training data
Xiao et al. [15] | 2023 | Curvelet transform + multiple chaotic maps | High imperceptibility and precise localization | Complexity in implementation and parameter tuning
