Proceeding Paper

Deep Learning-Based Coverless Image Steganography on Medical Images Shared via Cloud †

1 Department of Computer Science & Engineering, Sharnbasva University, Kalaburagi 585103, Karnataka, India
2 Department of Computer Engineering, International Institute of Information Technology, Pune 411057, Maharashtra, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances in Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 176; https://doi.org/10.3390/engproc2023059176
Published: 18 January 2024
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract
Coverless image steganography is an approach for creating images whose intrinsic colour and texture information carries hidden secret information. Recently, deep generative models, in particular generative adversarial networks (GANs), have been used to generate secret-hiding images. Although this approach has proven resistant to steganalysis attacks, it modifies critical information in the images, making them unsuitable for applications such as disease diagnosis from medical images shared over the cloud. The colour and texture modifications introduced by GANs affect the feature vector that is extracted from certain image regions and used for disease diagnosis. To solve this problem, this work proposes an attention-guided GAN that transforms images only in insignificant regions and retains the originality of the significant regions. As a result, the features, and hence the disease classification accuracy, suffer little distortion.

1. Introduction

Steganography is the technique of masking sensitive information inside an image and communicating it to the intended party [1]. Traditional image steganographic techniques modified the least significant bits of pixels or the frequency components in the discrete cosine transform of images to hide secret information. However, these techniques are very weak against steganalysis attacks: attackers employ various statistical correlation [2,3,4], machine learning [5,6,7], and deep learning [8,9,10] techniques to decipher the secret information hidden in images. Steganography without embedding (SWE) [11] is a recent technique for hiding information in images by modifying characteristics such as colour, texture, edges, contours, and pixel information. This makes steganalysis difficult, enabling the secure transfer of secret information. Different SWE methods have been designed by changing the pixel intensity, colour, texture, edges, and contours: the secret information is mapped to image characteristics, and during the retrieval stage, the image characteristics are mapped back to the secret information. SWE uses image hashes, texture synthesis mapping, and bags of words. However, these techniques have constrained steganographic capacities and require a large database of images. Recently, deep learning models have been used to transform a cover image into a secret-information image, and GANs in particular provide better performance. Because GANs generate images close to natural ones, it becomes difficult for attackers to decipher any hidden information. Though GANs generate images close to natural ones, their use on medical images can distort discriminative features. Medical images such as CT and MRI images carry discriminative information in certain regions which is important in computer-aided disease diagnosis. GAN transformation does not consider these special regions and applies its transformation rules globally over the entire image. This distorts the features in the significant regions of the image carrying discriminative information. As a result, when these medical images are transferred to cloud-based disease diagnosis systems, diagnostic accuracy is reduced. This work addresses the problem of GANs distorting the discriminative features in specific regions of images and proposes a novel technique called attention vector-guided GAN transformation (AVG-GAN).
The proposed AVG-GAN provides an attention vector that specifies where the transformation mapping the secret information should be performed. The same attention vector is required in the reverse operation of recovering the secret information from the secret-hidden image. The discriminative ability of the secret-hidden image does not differ from that of the original image by a substantial factor. As a result, AVG-GAN is better suited to classification utility-preserving operations and computer-aided disease diagnosis. The hidden information in an image could contain copyright information, allowing data theft of images at the cloud end to be discovered. The significant contributions of this work are as follows:
(i)
A novel attention vector-guided GAN for transforming the cover image without distorting specialised regions of the image;
(ii)
A classification utility-preserving transformation that does not reduce disease diagnosis accuracy by a large factor in computer-aided diagnosis applications.

2. Related Works

It has been speculated that SWE-based steganography, which creates a connection between sensitive messages and cover images, can be used to evade machine learning-based steganalysis. Currently, SWE can be carried out by selecting or synthesising cover images. In cover selection, many images are collected into an image library and combined with secret information, so that every message or message segment can be mapped to one or more pictures. Cover synthesis instead generates a new cover image from the secret information. Zhou et al. [12] outlined a method for building a database of hashes derived from robust hashing techniques used to index pictures. The secret binary data are split into several segments, and each segment is matched against the database to retrieve an image whose hash value equals the segment's value. Zhou et al. also proposed a bag-of-words (BOW) model as another way to realise SWE [13]. Visual words are extracted from an image set using the BOW model, allowing keywords of the text information to be associated with visual words. A corresponding set of sub-images can then be identified in accordance with this mapping, and stego images composed of these sub-images allow for secret communication. In [14], a robust image hash was proposed to link secret data to images: secret data segments are matched to images with equal local hash values. Robust image hashing improves steganographic capacity and makes the carrier images more resistant to attack. Cover selection-based steganography has two obvious disadvantages: it has inadequate steganographic capacity and demands a substantial local image database. Texture images carrying secret information can also be created using a synthesis approach. Otori et al. [15] encoded secret information as dotted patterns and hid them in a similarly textured image to maintain acceptable quality. Xu et al. [16] presented stego texture, a technique for creating highly detailed textures from images or text messages. By applying a reversible mathematical function to a given message, a stego texture can be generated instantaneously and later deciphered. Wu and Wang [17] created a new texture image of arbitrary size by inverse texture reconstruction that retained the original look at a local level; texture synthesis thus serves as the vehicle for hiding the message. In spite of their state-of-the-art status, cover synthesis-based schemes have one common weakness: they use special images (such as texture images), and sending such images in large quantities will raise suspicion. Hayes et al. [18] designed a GAN-based steganography with efficiency similar to previous steganography techniques, along with steganalysis tools for detecting hidden data. In their experiments, the trained steganalyser failed to distinguish stego images from cover images 50% of the time, i.e., no better than chance.
Volkhonskiy et al. [19] discussed using steganographic GANs (SGANs) for steganography, owing to the resistance of the generated images to detection as well as their authenticity. When a cover image container is trained, secure cover images can be used to deceive steganalysis. With an LSB-matching embedding method, the SGAN-generated images serve as steganography containers; when the embedding scheme differs, the SGAN may need to be retrained. That work, however, does not report how resistant the scheme is to steganalysis algorithms. Tang et al. [20] presented an automated learning framework using GANs with adversarial subnetworks. This framework calculates the embedding change probability for every pixel of the spatial cover image. The discriminator D compares cover images with the stego images produced by the generator G, which applies embedding distortions based on the change probabilities. Nevertheless, these methods perform no better than a few well-known recently developed ones. Hu et al. [21] performed steganography using deep convolutional generative networks: the data to be hidden are converted into a noise vector, and a carrier image is then generated from the noise vector. If the parameters of the extractor network are compromised, the method cannot recover the hidden information. Jiang et al. [22] addressed issues such as low steganographic capacity, low information recovery accuracy, and a lack of natural appearance, using an adversarial learning-based GAN to solve these problems. Despite the method's higher accuracy, it lacks a mechanism to prevent information leakage. Ke et al. [23] proposed generative steganography based on Kerckhoffs' principle, using it to protect confidential data: the secret information cannot be retrieved without the key. However, the method has a limited steganographic capability.

3. Attention Vector-Guided GAN Steganographic Technique

The GAN comprises two networks [24]: a generative network and a discriminative network. The discriminative network judges whether a sample is real or was constructed by the generative network with the goal of misleading it. Through the competition between these two networks, the generative network learns to produce samples that come close to being real. Because they can fit complex distributions, GANs are used to produce synthetic data.
The objective function of the GAN is
$$\mathcal{L}_{\mathrm{GAN}} = \mathbb{E}_{\bar{x}\sim P_g}\big[D(\bar{x})\big] - \mathbb{E}_{x\sim P_r}\big[D(x)\big] + \lambda\,\mathbb{E}_{\hat{x}\sim P_x}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\Big]$$
Here, $P_r$ denotes the distribution of the real data, $P_g$ denotes the distribution from which the generator produces data, and $P_x$ denotes the distribution of samples interpolated uniformly between $P_r$ and $P_g$, over which the gradient penalty is taken.
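This is the gradient-penalty (WGAN-GP) form of the critic objective. As a minimal illustration only (the paper does not publish its implementation), the following PyTorch sketch computes the loss, including the penalty on the gradient norm over uniform interpolates between real and generated samples; the toy critic and tensor shapes are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

def wgan_gp_loss(D, real, fake, lam=10.0):
    """Critic loss E[D(fake)] - E[D(real)] + lambda * gradient penalty."""
    # Uniform interpolation between real and generated samples (P_x)
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    # Gradient of the critic output with respect to the interpolates
    d_out = D(x_hat)
    grads = torch.autograd.grad(outputs=d_out, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    penalty = lam * ((grad_norm - 1.0) ** 2).mean()

    return D(fake).mean() - D(real).mean() + penalty

# Toy usage: a linear critic on 1x32x32 images (illustrative shapes)
D = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1))
real, fake = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)
loss = wgan_gp_loss(D, real, fake)
```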
Figure 1 shows the conventional architecture of the GAN-based steganographic approach. The secret information to be hidden is encoded as a random noise vector.
Conditioned on the noise vector, the generator (G) synthesises or updates the input cover image. The discriminator determines whether a hidden embedding is present in an image. The decoder, or extractor, recovers the noise vector from the stego image, and the noise vector is then used to reconstruct the hidden information.
The GAN steganographic approach applies a global transformation to the entire image. This introduces distortions in the significant portions of the image that contain discriminative features. This paper presents AVG-GAN as a solution to this problem. The architecture of the proposed solution is given in Figure 2. The input image is split into grids, and a binary grid matrix mapping significant regions to 1 and insignificant regions to 0 is created. The grid matrix of the image, an attention vector, and a key are given as input to the AVG-GAN encoder. The AVG-GAN encoder generates the stego image, which is uploaded to the cloud. The attention vector is encrypted with the key using the AES encryption algorithm, and the encrypted attention vector is also uploaded to the cloud.
When a user requests the secret information, the user provides the key to download and decrypt the encrypted attention vector. The decrypted attention vector is passed to the AVG-GAN decoder, which recovers the hidden information mapped into the stego image. A sketch of this encrypt/decrypt round trip is given below.
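The paper does not specify the AES mode or how the attention vector is serialised; the sketch below assumes AES-GCM from the Python cryptography package and a binary grid matrix stored as raw bytes, with the nonce prepended to the ciphertext. The helper names are illustrative, not the authors' API.

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_attention_vector(grid_matrix, key):
    """Serialise the binary grid matrix and encrypt it with AES-GCM."""
    nonce = os.urandom(12)  # fresh nonce per message
    payload = grid_matrix.astype(np.uint8).tobytes()
    return nonce + AESGCM(key).encrypt(nonce, payload, None)

def decrypt_attention_vector(blob, key, shape):
    """Decrypt the blob and restore the grid matrix; fails on a wrong key."""
    nonce, ciphertext = blob[:12], blob[12:]
    payload = AESGCM(key).decrypt(nonce, ciphertext, None)
    return np.frombuffer(payload, dtype=np.uint8).reshape(shape)

# Round trip: an 8x8 grid with significant regions marked 1
key = AESGCM.generate_key(bit_length=256)
attention = (np.random.rand(8, 8) > 0.95).astype(np.uint8)
blob = encrypt_attention_vector(attention, key)
assert np.array_equal(decrypt_attention_vector(blob, key, (8, 8)), attention)
```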
Figure 3 illustrates the AVG-GAN encoder's architecture. The input image grids and the attention vector matrix are given to the transformation module, which applies a null mask to the significant regions. The transformed image is then sent to the generator network, which has been trained to reproduce the input null mask in its output. The generator network produces the output stego image from the transformed image and the secret text. The null-masked parts of the output stego image are then substituted with the original image grids according to the attention matrix, and the generated stego image is shared through the cloud; a masking and restoration sketch follows.
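As a rough sketch of the masking and restoration steps described above, assuming square grids, a NumPy image, and an identity stand-in for the trained generator (none of which are taken from the paper):

```python
import numpy as np

def mask_significant_grids(image, attention, grid=32):
    """Null-mask (zero out) the grids marked significant so the
    generator does not transform their content."""
    out = image.copy()
    for i in range(attention.shape[0]):
        for j in range(attention.shape[1]):
            if attention[i, j]:
                out[i*grid:(i+1)*grid, j*grid:(j+1)*grid] = 0
    return out

def restore_significant_grids(stego, original, attention, grid=32):
    """Substitute the null-masked grids of the generator output with the
    original image grids, preserving the discriminative regions."""
    out = stego.copy()
    for i in range(attention.shape[0]):
        for j in range(attention.shape[1]):
            if attention[i, j]:
                out[i*grid:(i+1)*grid, j*grid:(j+1)*grid] = \
                    original[i*grid:(i+1)*grid, j*grid:(j+1)*grid]
    return out

# Round trip on a 256x256 image split into an 8x8 grid
image = np.random.rand(256, 256)
attention = (np.random.rand(8, 8) > 0.95).astype(np.uint8)
masked = mask_significant_grids(image, attention)
stego = masked  # placeholder for generator(masked, secret_text)
stego = restore_significant_grids(stego, image, attention)
```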
Figure 4 illustrates the AVG-GAN decoder's architecture. The stego image is divided into grids, and the salient areas are set to null masks based on the attention vector matrix to produce the transformed image. The transformed image is then passed to the discriminator network, which decodes it to recover the secret text. Because the significant regions are never transformed, they remain undistorted in the image.

4. Results and Discussion

Three medical image datasets are used to evaluate the performance of the proposed approach: a brain tumour dataset from Kaggle [25], a glaucoma dataset [26], and an ultrasound ovarian cancer dataset [27]. Each image falls into one of two classes: tumour or normal for the brain tumour dataset, glaucoma or healthy for the glaucoma dataset, and cancer or normal for the ovarian ultrasound dataset. Sample images from the datasets are shown in Table 1.
A comparative analysis is carried out, using various performance parameters, between the proposed approach and GAN-based steganography methods: SteganoGAN [28], high-capacity information hiding with a generative adversarial network (HCISNet [29]), and compressed sensing-based enhanced embedding capacity image steganography (CSIS [30]). Reed-Solomon bits per pixel (RS-BPP), peak signal-to-noise ratio (PSNR), weighted peak signal-to-noise ratio (WPSNR), accuracy, and structural similarity (SSIM) are used to compare the methods [31,32,33,34].
Table 2 presents the measured PSNR, RS-BPP, WPSNR, and SSIM values for the three categories of medical images.
For the brain tumour, glaucoma, and ovarian datasets, the average PSNR of the proposed solution is at least 2.7%, 1.9%, and 2.7% higher, respectively. The PSNR increases in the proposed solution for two reasons: the generated stego image is nearly identical to the original, and the significant regions appear in the stego image exactly as in the original image. For the same reason, the SSIM is at least 2% higher in the proposed solution than in existing works.
For all three datasets, the embedding capacity is calculated as the percentage of the total image size that can carry hidden data. The results are shown in Figure 5. The proposed method has at least 2% lower embedding capacity because the significant areas, which constitute less than 5% of the image, are excluded from embedding. The embedding capacity of the proposed approach decreases as the percentage of significant regions in an image increases; Figure 6 shows the reduction for various percentages of significant regions. Future work may compensate for this through compression or more efficient coding of the secret information. A rough model of the trade-off is sketched below.
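Under the simple assumption that capacity scales linearly with the embeddable (insignificant) area, the trade-off can be approximated as follows; this is a back-of-the-envelope model, not the paper's measured behaviour in Figures 5 and 6.

```python
def embedding_capacity(base_bpp, significant_fraction):
    """Approximate bits-per-pixel capacity when significant grids are
    excluded, assuming capacity is proportional to embeddable area."""
    return base_bpp * (1.0 - significant_fraction)

# With under 5% significant area, the capacity loss stays under ~5%
print(embedding_capacity(6.28, 0.05))  # ~5.97 bpp
```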
The stego images for the three datasets are downloaded from the cloud, and an SVM machine learning classifier (Mode 1) is trained to classify them into two classes. Similarly, an SVM classifier is trained on the original images without the GAN transformation (Mode 2). The performance of the two modes is measured in terms of accuracy, precision, recall, and the Matthews correlation coefficient (MCC). While accuracy, precision, and recall are standard metrics for measuring classifier performance, the MCC is calculated as
$$\mathrm{MCC} = \frac{TP \times TN - FN \times FP}{\sqrt{(TP+FN)(TP+FP)(TN+FN)(TN+FP)}}$$
Its value ranges from −1 to +1; the higher the value, the better the classifier's performance.
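For reference, the MCC can be computed directly from confusion-matrix counts, or from label arrays via scikit-learn's matthews_corrcoef; the counts below are illustrative and not taken from Table 3.

```python
import math
from sklearn.metrics import matthews_corrcoef

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fn) * (tp + fp) * (tn + fn) * (tn + fp))
    return (tp * tn - fn * fp) / denom if denom else 0.0

print(mcc(tp=94, tn=97, fp=3, fn=6))                  # ~0.91
print(matthews_corrcoef([1, 0, 1, 1], [1, 0, 0, 1]))  # label-array form, ~0.58
```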
The results for all datasets are given in Table 3. From the results of the brain tumour dataset, it can be seen that there is only a 1% difference in accuracy between Mode 1 and Mode 2 in the proposed solution. But the deviation is more than 5% in the other methods. From the results of the glaucoma dataset, it can be seen that there is only a 2% difference in accuracy between Mode 1 and Mode 2 in the proposed solution, but the deviation is more than 8% in other methods. From the results of the ovarian cancer dataset, it can be seen that there is only a 1% difference in accuracy between Mode 1 and Mode 2 in the proposed solution, but the deviation is more than 4% in the other methods.
The deviation in accuracy between Mode 1 and Mode 2 is higher for glaucoma because the significant information is widely spread.
The MCC for all three datasets in Mode 1 and Mode 2 is plotted in Figure 7. The MCC is higher for the proposed method than for the state-of-the-art methods, demonstrating the least distortion of discriminative features in the significant regions of the stego image.

5. Conclusions

This paper proposed a novel attention vector-guided GAN transformation for coverless image steganography. The solution resolves the loss of discriminative information in sensitive regions of images, which is essential for applications such as disease classification, making it better suited to utility-preserving medical image sharing. The distortion is prevented with little degradation in the quality of the secret information image: the reduction in embedding capacity, incurred to preserve features in key areas, was less than 2%. Future work will include testing the proposed solution on other image classification applications.

Author Contributions

Conceptualisation, A. and V.; methodology, A. and V.; software, A. and D.S.U.; validation, A., V. and D.S.U.; formal analysis, V.; investigation, V.; resources, D.S.U.; data curation, V.; writing—original draft preparation, A.; writing—review and editing, V. and D.S.U.; visualisation, A.; supervision, V.; project administration, A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the reviewers for their constructive comments, which improved the overall quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cachin, C. An information-theoretic model for steganography. Inf. Hiding 1998, 1525, 306–318. [Google Scholar]
  2. Pevný, T.; Filler, T.; Bas, P. Using high-dimensional image models to perform highly undetectable steganography. Inf. Hiding 2010, 6387, 161–177. [Google Scholar]
  3. Holub, V.; Fridrich, J.; Denemark, T. Universal distortion function for steganography in an arbitrary domain. EURASIP J. Inf. Secur. 2014, 1. [Google Scholar] [CrossRef]
  4. Holub, V.; Fridrich, J. Designing steganographic distortion using directional filters. In Proceedings of the 2012 IEEE International Workshop on Information Forensics and Security (WIFS), Costa Adeje, Spain, 2–5 December 2012; pp. 234–239. [Google Scholar]
  5. Fridrich, J.; Kodovský, J. Rich models for steganalysis of digital images. IEEE Trans. Inf. Forensics Secur. 2012, 7, 868–882. [Google Scholar] [CrossRef]
  6. Kodovský, J.; Fridrich, J. Quantitative steganalysis using rich models. Proc. SPIE 2013, 8665. [Google Scholar] [CrossRef]
  7. Goljan, M.; Fridrich, J.; Cogranne, R. Rich model for steganalysis of color images. In Proceedings of the 2014 IEEE International Workshop on Information Forensics and Security (WIFS), Atlanta, GA, USA, 3–5 December 2014; pp. 185–190. [Google Scholar]
  8. Ambika; Biradar, R.L. A robust low frequency integer wavelet transform based fractal encryption algorithm for image steganography. Int. J. Adv. Intell. Paradig. 2021, 19, 342–356. [Google Scholar]
  9. Zeng, J.; Tan, S.; Li, B.; Huang, J. Large-scale JPEG steganalysis using hybrid deep-learning framework. IEEE Trans. Inf. Forensics Secur. 2016, 13, 1200–1214. [Google Scholar]
  10. Qian, Y.; Dong, J.; Tan, T.; Wang, W. Deep learning for steganalysis via convolutional neural networks. Proc. SPIE 2015, 9409. [Google Scholar] [CrossRef]
  11. Barni, M. Steganography in digital media: Principles, algorithms, and applications. IEEE Signal Process. Mag. 2011, 28, 142–144. [Google Scholar] [CrossRef]
  12. Zhou, Z.; Sun, H.; Harit, R.; Chen, X.; Sun, X. Coverless image steganography without embedding. Proc. Int. Conf. Cloud Comput. Secur. 2015, 9483, 123–132. [Google Scholar]
  13. Zhou, Z.-L.; Cao, Y.; Sun, X.-M. Coverless information hiding based on bag-of-words model of image. J. Appl. Sci. 2016, 34, 527–536. [Google Scholar]
  14. Zheng, S.; Wang, L.; Ling, B.; Hu, D. Coverless information hiding based on robust image hashing. Intell. Comput. Methodol. 2017, 10363, 536–547. [Google Scholar] [CrossRef]
  15. Wu, K.-C.; Wang, C.-M. Steganography using reversible texture synthesis. IEEE Trans. Image Process. 2015, 24, 130–139. [Google Scholar] [PubMed]
  16. Liu, J. Recent Advances of Image Steganography With Generative Adversarial Networks. IEEE Access 2020, 8, 60575–60597. [Google Scholar]
  17. Xu, J. Hidden message in a deformation-based texture. Vis. Comput. Int. J. Comput. Graph. 2015, 31, 1653–1669. [Google Scholar] [CrossRef]
  18. Hayes, J.; Danezis, G. Generating steganographic images via adversarial training. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17), Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 1951–1960. [Google Scholar]
  19. Volkhonskiy, D.; Borisenko, B.; Burnaev, E. Steganographic generative adversarial networks. arXiv 2017, arXiv:1703.05502. [Google Scholar] [CrossRef]
  20. Tang, W.; Tan, S.; Li, B.; Huang, J. Automatic steganographic distortion learning using a generative adversarial network. IEEE Signal Process. Lett. 2017, 24, 1547–1551. [Google Scholar] [CrossRef]
  21. Hu, D.; Wang, L.; Jiang, W.; Zheng, S.; Li, B. A Novel Image Steganography Method via Deep Convolutional Generative Adversarial Networks. IEEE Access 2018, 6, 38303–38314. [Google Scholar] [CrossRef]
  22. Jiang, W.; Hu, D.; Yu, C.; Li, M.; Zhao, Z. A New Steganography Without Embedding Based on Adversarial Training. In Proceedings of the ACM Turing Celebration Conference, Hefei, China, 22–24 May 2020; pp. 219–223. [Google Scholar]
  23. Ke, Y.; Zhang, M.; Liu, J.; Su, T.; Yang, X. Generative steganography with Kerckhoffs’ principle. Multimed. Tools Appl. 2019, 78, 13805–13818. [Google Scholar] [CrossRef]
  24. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Adv. Neural Inf. Process. Syst. 2014, 3, 2672–2680. [Google Scholar]
  25. Available online: https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection (accessed on 8 January 2022).
  26. Available online: https://www.kaggle.com/datasets/sshikamaru/glaucoma-detection (accessed on 8 January 2022).
  27. Available online: https://www.ultrasoundcases.info/ (accessed on 8 January 2022).
  28. Zhang, K.A.; Cuesta-Infante, A.; Xu, L. SteganoGAN: High capacity image steganography with GANs. arXiv 2019, arXiv:1901.03892. [Google Scholar]
  29. Wang, Z.; Gao, N.; Wang, X.; Xiang, J.; Zha, D.; Li, L. HidingGAN: High Capacity Information Hiding with Generative Adversarial Network. Comput. Graph. Forum. 2019, 38, 393–401. [Google Scholar] [CrossRef]
  30. Agrawal, R.; Ahuja, K. CSIS: Compressed sensing-based enhanced-embedding capacity image steganography scheme. IET Image Proc. 2021, 15, 1909–1925. [Google Scholar]
  31. Virupakshappa, A.B. An approach of using spatial fuzzy and level set method for brain tumor segmentation. Int. J. Tomogr. Simul. 2018, 31, 18–33. [Google Scholar]
  32. Patil, V.; Saxena, J.; Vineetha, R.; Paul, R.; Shetty, D.K.; Sharma, S.; Smriti, K.; Singhal, D.K.; Naik, N. Age assessment through root lengths of mandibular second and third permanent molars using machine learning and Artificial Neural Networks. J. Imaging 2023, 9, 33. [Google Scholar] [CrossRef]
  33. Rangayya; Virupakshappa; Patil, N. Improved face recognition method using SVM-MRF with KTBD based KCM segmentation approach. Int. J. Syst. Assur. Eng. Manag. 2022, 1–12. [Google Scholar] [CrossRef]
  34. Lim, S.J. Hybrid image embedding technique using Steganographic Signcryption and IWT-GWO methods. Microprocess. Microsyst. 2022, 95, 104688. [Google Scholar]
Figure 1. Architecture of the GAN-based steganographic technique.
Figure 2. Proposed architecture.
Figure 3. AVG-GAN encoder architecture.
Figure 4. AVG-GAN decoder architecture.
Figure 5. Comparison of embedding capacity.
Figure 6. Reduction in embedding capacity.
Figure 7. Comparison of MCC.
Table 1. Sample images (one example per class; images not reproduced here).

            Brain Image         Glaucoma Image        Ovarian Image
Classes:    Tumour | Normal     Glaucoma | Healthy    Cancer | Normal
Table 2. Comparison of obtained results.

             Brain Image Dataset          Glaucoma Image Dataset       Ovarian Image Dataset
             PSNR   RS-BPP WPSNR  SSIM    PSNR   RS-BPP WPSNR  SSIM    PSNR   RS-BPP WPSNR  SSIM
Proposed     39.96  6.28   38.42  0.98    39.56  6.63   39.42  0.98    39.96  6.61   39.72  0.98
SteganoGAN   36.46  4.33   35.61  0.84    36.21  4.13   35.41  0.85    36.21  4.13   35.51  0.82
HCISNet      38.87  5.67   37.12  0.92    38.77  5.17   37.32  0.94    38.89  5.52   37.22  0.94
CSIS         33.80  2.06   32.34  0.94    33.82  2.03   32.84  0.95    33.82  2.17   32.14  0.96
Table 3. Difference between the classification modes.

Brain tumour dataset
             Mode 1                               Mode 2
             Accuracy Precision Recall MCC        Accuracy Precision Recall MCC
Proposed     96       97        94     0.76       97       98        94     0.80
SteganoGAN   94       93        91     0.64       97       98        94     0.80
HCISNet      93       92        90     0.61       97       98        94     0.80
CSIS         92       93        90     0.60       97       98        94     0.80

Glaucoma dataset
             Mode 1                               Mode 2
             Accuracy Precision Recall MCC        Accuracy Precision Recall MCC
Proposed     94       95        89     0.69       96       97        92     0.72
SteganoGAN   91       93        87     0.55       96       97        92     0.72
HCISNet      90       92        86     0.52       96       97        92     0.72
CSIS         89       91        85     0.50       96       97        92     0.72

Ovarian dataset
             Mode 1                               Mode 2
             Accuracy Precision Recall MCC        Accuracy Precision Recall MCC
Proposed     92       91        87     0.64       93       91        89     0.66
SteganoGAN   90       90        86     0.59       93       91        89     0.66
HCISNet      89       90        84     0.57       93       91        89     0.66
CSIS         87       88        83     0.52       93       91        89     0.66