Search Results (11)

Search Parameters:
Keywords = CAPTCHA recognition

16 pages, 3824 KB  
Article
Style Transfer and Topological Feature Analysis of Text-Based CAPTCHA via Generative Adversarial Networks
by Tao Xue, Zixuan Guo, Zehang Yin and Yu Rong
Mathematics 2025, 13(11), 1861; https://doi.org/10.3390/math13111861 - 2 Jun 2025
Viewed by 632
Abstract
The design and cracking of text-based CAPTCHAs are important topics in computer security. This study proposes a method for the style transfer of text-based CAPTCHAs using Generative Adversarial Networks (GANs). First, a curated dataset combining a text-based CAPTCHA library with image collections from four artistic styles (Van Gogh, Monet, Cézanne, and Ukiyo-e) was used to generate style-based text CAPTCHA samples. Subsequently, a universal style transfer model and trained CycleGAN models for both single- and double-style transfer were employed to generate style-enhanced text-based CAPTCHAs. Traditional methods for evaluating the anti-recognition capability of text-based CAPTCHAs focus primarily on recognition success rates; this study introduces topological feature analysis as a new evaluation method. First, the recognition success rates of the three methods across the four styles were evaluated using Muggle-OCR. Then, the graph diameter was used to quantify the differences between text-based CAPTCHA images before and after style transfer. The experimental results demonstrate that the recognition rates of style-enhanced text-based CAPTCHAs are consistently lower than those of the original CAPTCHAs, suggesting that style transfer enhances anti-recognition capability. The topological feature analysis indicates that style transfer yields a more compact topological structure, further validating the effectiveness of the GAN-based twice-transfer method in enhancing CAPTCHA complexity and anti-recognition capability.
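As an illustration of the topological evaluation described above, the sketch below estimates a graph diameter from a binarized CAPTCHA. The pixel-adjacency graph construction and the use of networkx are assumptions for illustration; the paper's exact graph definition is not reproduced here.

```python
# Minimal sketch: estimate the graph diameter of a binarized CAPTCHA image.
# Assumption: an 8-connected pixel-adjacency graph over foreground pixels.
import networkx as nx
import numpy as np

def captcha_graph_diameter(binary: np.ndarray) -> int:
    """binary: 2-D array with foreground pixels set to 1."""
    g = nx.Graph()
    h, w = binary.shape
    fg = {(r, c) for r in range(h) for c in range(w) if binary[r, c]}
    for r, c in fg:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr or dc) and (r + dr, c + dc) in fg:
                    g.add_edge((r, c), (r + dr, c + dc))
    # Use the largest connected component; the diameter is undefined otherwise.
    comp = max(nx.connected_components(g), key=len)
    return nx.diameter(g.subgraph(comp))

# Example: a tiny synthetic horizontal stroke of 6 pixels.
img = np.zeros((5, 8), dtype=int)
img[2, 1:7] = 1
print(captcha_graph_diameter(img))  # -> 5
```

A more compact character after style transfer tends to produce a smaller diameter, which is the intuition behind using this quantity as a complexity indicator.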

20 pages, 11655 KB  
Article
Variational Color Shift and Auto-Encoder Based on Large Separable Kernel Attention for Enhanced Text CAPTCHA Vulnerability Assessment
by Xing Wan, Juliana Johari and Fazlina Ahmat Ruslan
Information 2024, 15(11), 717; https://doi.org/10.3390/info15110717 - 7 Nov 2024
Cited by 2 | Viewed by 1313
Abstract
Text CAPTCHAs are crucial security measures deployed on websites worldwide to deter unauthorized intrusions. Although CAPTCHA recognition is an effective way to assess their security, the anti-attack features built into text CAPTCHAs limit how effectively they can be evaluated. This study introduces a novel color augmentation technique called Variational Color Shift (VCS) to boost the recognition accuracy of different networks. VCS estimates a color-shift range for every input image and resamples the image within that range to generate a new image, effectively expanding the original dataset and improving training. In contrast to Random Color Shift (RCS), which treats the color offsets as hyperparameters, VCS makes the color shifts learnable by reparametrizing points sampled from a uniform distribution with offsets predicted for each image. To better balance computation and performance, we also propose two variants of VCS: Sim-VCS and Dilated-VCS. In addition, to address the overfitting caused by disturbances in text CAPTCHAs, we propose an Auto-Encoder based on Large Separable Kernel Attention (AE-LSKA) to replace convolutional modules with large kernels in the text CAPTCHA recognizer. This module uses an Auto-Encoder (AE) to compress interference while expanding the receptive field through Large Separable Kernel Attention (LSKA), reducing the impact of local interference on training and improving the overall perception of characters. The experimental results show that integrating the AE-LSKA module improves recognition accuracy by at least 15 percentage points on both the M-CAPTCHA and P-CAPTCHA datasets. They also demonstrate that VCS is the more effective color augmentation, achieving higher accuracy than RCS and PCA Color Shift (PCA-CS).
(This article belongs to the Special Issue Computer Vision for Security Applications)
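To make the reparameterization idea concrete, here is a minimal PyTorch sketch of a learnable per-image color shift in the spirit of VCS. The predictor architecture and the shift cap of 0.2 are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a learnable, per-image color shift: a small network predicts a
# shift range for each image, and a uniform sample from that range is applied
# via reparameterization so gradients can flow back to the predictor.
import torch
import torch.nn as nn

class VariationalColorShift(nn.Module):
    def __init__(self):
        super().__init__()
        # Predict per-channel lower/upper shift bounds from global statistics.
        self.predictor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(3, 16), nn.ReLU(),
            nn.Linear(16, 6),  # 3 channels x (low, high)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, H, W), values in [0, 1]
        bounds = 0.2 * torch.tanh(self.predictor(x))   # keep shifts small (assumed cap)
        low, high = bounds[:, :3], bounds[:, 3:]
        low, high = torch.minimum(low, high), torch.maximum(low, high)
        u = torch.rand_like(low)                        # uniform(0, 1) noise
        shift = low + u * (high - low)                  # reparameterized sample
        return (x + shift.view(-1, 3, 1, 1)).clamp(0.0, 1.0)

aug = VariationalColorShift()
images = torch.rand(4, 3, 64, 160)
out = aug(images)   # same shape, randomly but learnably color-shifted
```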

22 pages, 8725 KB  
Article
Adaptive CAPTCHA: A CRNN-Based Text CAPTCHA Solver with Adaptive Fusion Filter Networks
by Xing Wan, Juliana Johari and Fazlina Ahmat Ruslan
Appl. Sci. 2024, 14(12), 5016; https://doi.org/10.3390/app14125016 - 8 Jun 2024
Cited by 5 | Viewed by 4392
Abstract
Text-based CAPTCHAs remain the most widely adopted security scheme and serve as the first barrier to securing websites. Deep learning methods, especially Convolutional Neural Networks (CNNs), are the mainstream approach for text CAPTCHA recognition and are widely used in CAPTCHA vulnerability assessment and data collection. However, CAPTCHA recognizers are mostly deployed on CPU platforms as part of web crawlers and security assessments, so they must combine low complexity with high recognition accuracy. Because of the purpose-built anti-attack mechanisms in text CAPTCHAs, such as noise, interference, geometric deformation, twisting, rotation, and character adhesion, some characters in these complex CAPTCHA images are difficult to identify accurately and efficiently. This paper proposes a recognition model named Adaptive CAPTCHA, built from a CNN combined with an RNN (CRNN) module and trainable Adaptive Fusion Filtering Networks (AFFN), which effectively handles interference and learns the correlations between characters in CAPTCHAs to enhance recognition accuracy. Experimental results on two datasets of different complexities show that, compared with the baseline model Deep CAPTCHA, the number of parameters of the proposed model is reduced by about 70%, while recognition accuracy improves by more than 10 percentage points on both datasets. The proposed model also converges faster during training and offers better overall performance than several recent models.
(This article belongs to the Special Issue Advanced Technologies in Data and Information Security III)
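For context, the sketch below shows a generic CRNN trained with CTC loss, the kind of CNN-plus-RNN recognizer the abstract builds on. It omits the paper's AFFN filtering module, and all layer sizes are assumptions.

```python
# Minimal CRNN + CTC sketch for text CAPTCHA recognition (generic pipeline,
# not the paper's Adaptive CAPTCHA model).
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.GRU(64 * 16, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_classes + 1)  # +1 for the CTC blank

    def forward(self, x):                         # x: (N, 1, 64, 160)
        f = self.cnn(x)                           # (N, 64, 16, 40)
        f = f.permute(0, 3, 1, 2).flatten(2)      # (N, 40 timesteps, 64*16)
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)       # (N, T=40, C)

model = TinyCRNN(num_classes=36)                  # digits + lowercase letters
logits = model(torch.rand(2, 1, 64, 160)).permute(1, 0, 2)  # (T, N, C) for CTC
targets = torch.randint(1, 37, (2, 4))                       # 4-character labels
loss = nn.CTCLoss(blank=0)(logits, targets,
                           input_lengths=torch.full((2,), 40),
                           target_lengths=torch.full((2,), 4))
```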

20 pages, 2736 KB  
Article
Deep Learning Based CAPTCHA Recognition Network with Grouping Strategy
by Zaid Derea, Beiji Zou, Asma A. Al-Shargabi, Alaa Thobhani and Amr Abdussalam
Sensors 2023, 23(23), 9487; https://doi.org/10.3390/s23239487 - 29 Nov 2023
Cited by 5 | Viewed by 5569
Abstract
Websites can improve their security and protect against harmful Internet attacks by incorporating CAPTCHA verification, which helps distinguish human users from robots. Among the various types of CAPTCHA, the most prevalent variant involves text-based challenges that are intentionally designed to be easy for humans to read yet difficult for machines or robots to recognize. Nevertheless, significant advances in deep learning have made it considerably simpler to construct convolutional neural network (CNN)-based models capable of recognizing text-based CAPTCHAs effectively. In this regard, we present a CAPTCHA recognition method that creates multiple duplicates of the original CAPTCHA image and generates separate binary images encoding the exact locations of each group of CAPTCHA characters. These replicated images are then fed into a well-trained CNN, one after another, to obtain the final output characters. The model has a straightforward architecture and a relatively small storage footprint, and it eliminates the need to segment CAPTCHAs into individual characters. After training and testing the proposed CNN model, the experimental results demonstrate its effectiveness in accurately recognizing CAPTCHA characters.
(This article belongs to the Special Issue Deep Learning-Based Neural Networks for Sensing and Imaging)
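A minimal sketch of the duplication idea follows: one copy of the CAPTCHA per character position, each paired with a binary image marking which position to read. The band-shaped encoding is an assumption; the paper's exact binary encoding may differ.

```python
# Build (num_chars) two-channel inputs: the CAPTCHA image plus a binary mask
# that tells the shared CNN which character position to predict.
import numpy as np

def make_grouped_inputs(captcha: np.ndarray, num_chars: int):
    """captcha: (H, W) grayscale array in [0, 1]; returns (num_chars, 2, H, W)."""
    h, w = captcha.shape
    band = w // num_chars
    samples = []
    for i in range(num_chars):
        mask = np.zeros((h, w), dtype=captcha.dtype)
        mask[:, i * band:(i + 1) * band] = 1.0      # mark the i-th character slot
        samples.append(np.stack([captcha, mask]))   # image + its position mask
    return np.stack(samples)

captcha = np.random.rand(64, 160)
batch = make_grouped_inputs(captcha, num_chars=4)   # shape (4, 2, 64, 160)
# Each of the four two-channel inputs is fed to the same CNN, which predicts
# only the character indicated by its mask, so no segmentation step is needed.
```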

18 pages, 2639 KB  
Article
Secure CAPTCHA by Genetic Algorithm (GA) and Multi-Layer Perceptron (MLP)
by Saman Shojae Chaeikar, Fatemeh Mirzaei Asl, Saeid Yazdanpanah, Mazdak Zamani, Azizah Abdul Manaf and Touraj Khodadadi
Electronics 2023, 12(19), 4084; https://doi.org/10.3390/electronics12194084 - 29 Sep 2023
Cited by 10 | Viewed by 2014
Abstract
To achieve an acceptable level of security on the web, the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) was introduced as a tool to prevent bots from performing destructive actions such as downloading or signing up. Smartphones have small screens, and therefore using common CAPTCHA methods (e.g., text CAPTCHAs) on these devices raises usability issues. To introduce a reliable, secure, and usable CAPTCHA suitable for smartphones, this paper presents a hand gesture recognition CAPTCHA based on applying genetic algorithm (GA) principles to a Multi-Layer Perceptron (MLP). The proposed method improves the performance of MLP-based hand gesture recognition. It was trained and evaluated on 2201 videos of the IPN Hand dataset, with MSE and RMSE values of 0.0018 and 0.0424, respectively. A comparison with related works shows at least 1.79% fewer errors, and the experiments produced a sensitivity of 93.42% and an accuracy of 92.27%, improvements of 10.25% and 6.65%, respectively, over the plain MLP implementation. The range of supported hand gestures can limit the applicability of this research, as a narrow range may result in a vulnerable CAPTCHA. The processes of training and testing also require significant computational resources. In future work, we will optimize the method to run reliably under various illumination conditions and across skin colors and tones. The next development step is to use augmented reality and unpredictable random patterns to enhance the security of the method.
(This article belongs to the Special Issue State-of-the-Art Electronics in the USA)
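As a rough illustration of applying GA principles to an MLP, the toy sketch below evolves MLP weight vectors by selection, crossover, and mutation against an MSE fitness. The feature dimensions, network size, and GA operators are all assumptions, not the paper's configuration.

```python
# Toy GA over MLP weights: keep the best candidates, recombine, mutate.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 16, 4               # assumed sizes
n_weights = n_in * n_hidden + n_hidden * n_out

def forward(w, X):
    w1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = w[n_in * n_hidden:].reshape(n_hidden, n_out)
    return np.tanh(X @ w1) @ w2

def mse(w, X, y):
    return float(np.mean((forward(w, X) - y) ** 2))

X = rng.normal(size=(200, n_in))               # stand-in gesture features
y = rng.normal(size=(200, n_out))              # stand-in targets

pop = rng.normal(scale=0.1, size=(40, n_weights))
for generation in range(50):
    fitness = np.array([mse(w, X, y) for w in pop])
    parents = pop[np.argsort(fitness)[:20]]    # selection: keep the best half
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, n_weights)       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(scale=0.01, size=n_weights)  # mutation
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([mse(w, X, y) for w in pop])]
```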

17 pages, 3285 KB  
Article
A Novel Short-Memory Sequence-Based Model for Variable-Length Reading Recognition of Multi-Type Digital Instruments in Industrial Scenarios
by Shenghan Wei, Xiang Li, Yong Yao and Suixian Yang
Algorithms 2023, 16(4), 192; https://doi.org/10.3390/a16040192 - 31 Mar 2023
Cited by 3 | Viewed by 2094
Abstract
As a practical application of Optical Character Recognition (OCR), digital instrument reading recognition is important for achieving automatic information management in real industrial scenarios. However, unlike common digit recognition tasks such as license plate recognition, CAPTCHA recognition, and handwritten digit recognition, recognizing readings from multiple types of digital instruments is more challenging because the reading strings are variable in length and differ in font, spacing, and aspect ratio. To overcome this, we propose a novel short-memory sequence-based model for variable-length reading recognition. First, we incorporate shortcut connections into a traditional convolutional structure to form a feature extractor that captures effective features from characters rendered in the varied fonts of multi-type digital instrument images. Then, we apply an RNN-based sequence module that strengthens short-distance dependencies while reducing long-distance memory of the reading string, greatly improving the robustness and generalization of the model on unseen data. Finally, a novel short-memory sequence-based model consisting of the feature extractor, the RNN-based sequence module, and a CTC layer is proposed for variable-length reading recognition of multi-type digital instruments. Experimental results show that this method is effective on the variable-length instrument reading recognition task, especially for unseen data, demonstrating outstanding generalization and robustness in real industrial applications.
(This article belongs to the Special Issue Machine Learning and Deep Learning in Pattern Recognition)
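Since variable-length output is the key difficulty here, the sketch below shows greedy CTC decoding, the standard way a sequence model emits reading strings of varying length: collapse repeated labels, then drop blanks. The instrument character set is an assumption.

```python
# Greedy CTC decoding: per-timestep argmax -> collapse repeats -> drop blanks.
import numpy as np

CHARS = "0123456789."          # assumed instrument character set
BLANK = len(CHARS)             # the blank symbol is the last class

def ctc_greedy_decode(log_probs: np.ndarray) -> str:
    """log_probs: (T, num_classes) per-timestep scores."""
    best = log_probs.argmax(axis=1)
    decoded, prev = [], BLANK
    for k in best:
        if k != prev and k != BLANK:   # collapse repeats, skip blanks
            decoded.append(CHARS[k])
        prev = k
    return "".join(decoded)

# Example: timesteps predicting "1", "1", blank, "2", ".", ".", "5"
T, C = 7, len(CHARS) + 1
scores = np.full((T, C), -10.0)
for t, k in enumerate([1, 1, BLANK, 2, 10, 10, 5]):
    scores[t, k] = 0.0
print(ctc_greedy_decode(scores))  # -> "12.5"
```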

20 pages, 5259 KB  
Article
An Efficient and Accurate Depth-Wise Separable Convolutional Neural Network for Cybersecurity Vulnerability Assessment Based on CAPTCHA Breaking
by Stephen Dankwa and Lu Yang
Electronics 2021, 10(4), 480; https://doi.org/10.3390/electronics10040480 - 18 Feb 2021
Cited by 12 | Viewed by 3655
Abstract
Cybersecurity practitioners generate Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) as a security mechanism in website applications in order to differentiate between human end-users and machine bots. They tend to implement CAPTCHAs with standard security to prevent hackers from writing malicious automated programs that make false website registrations and steal end-users' private information. Among the categories of CAPTCHAs, the text-based CAPTCHA is the most widely used. However, the evolution of deep learning has been so dramatic that tasks previously thought not easily addressable by computers, and therefore used as CAPTCHAs to prevent spam, can now be broken. CAPTCHA breaking combines several efforts and approaches with the development of a computation-efficient Convolutional Neural Network (CNN) model that attempts to increase accuracy. In contrast to breaking whole CAPTCHA images at once, this study split four-character CAPTCHA images into individual characters with a 2-pixel margin around the edges to form a new training dataset, and then proposed an efficient and accurate Depth-wise Separable Convolutional Neural Network for breaking text-based CAPTCHAs. Most importantly, to the best of our knowledge, this is the first CAPTCHA breaking study to use depth-wise separable convolution layers to build an efficient CNN model for breaking text-based CAPTCHAs. We evaluated and compared the performance of the proposed model with that of fine-tuned versions of other popular CNN image recognition architectures on the generated CAPTCHA image dataset. At inference time, the proposed model broke the text-based CAPTCHAs more quickly, with an accuracy of more than 99% on the testing dataset. We observed that the proposed CNN model improves CAPTCHA breaking accuracy and streamlines the structure of the CAPTCHA breaking network compared to other CAPTCHA breaking techniques.
(This article belongs to the Special Issue Security and Trust in Next Generation Cyber-Physical Systems)
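The building block named in the title is easy to illustrate: a depth-wise separable convolution is a per-channel convolution followed by a 1x1 point-wise convolution, which cuts parameters relative to a standard convolution. The PyTorch sketch below is a generic version with assumed layer sizes, not the paper's architecture.

```python
# A depth-wise separable convolution block and a tiny single-character
# classifier built from it (36 classes: digits + letters), matching the
# per-character splitting described in the abstract.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)  # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.Sequential(nn.BatchNorm2d(out_ch), nn.ReLU())

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

model = nn.Sequential(
    DepthwiseSeparableConv(1, 32), nn.MaxPool2d(2),
    DepthwiseSeparableConv(32, 64), nn.MaxPool2d(2),
    nn.Flatten(), nn.LazyLinear(36),
)
logits = model(torch.rand(8, 1, 28, 28))  # 8 cropped single-character images
```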

19 pages, 3523 KB  
Article
CAPTCHA Recognition Using Deep Learning with Attached Binary Images
by Alaa Thobhani, Mingsheng Gao, Ammar Hawbani, Safwan Taher Mohammed Ali and Amr Abdussalam
Electronics 2020, 9(9), 1522; https://doi.org/10.3390/electronics9091522 - 17 Sep 2020
Cited by 30 | Viewed by 13000
Abstract
Websites can increase their security and prevent harmful Internet attacks by providing CAPTCHA verification to determine whether the end-user is a human or a robot. Text-based CAPTCHAs are the most common type and are designed to be easily recognized by humans but difficult for machines or robots to identify. However, with the dramatic advances in deep learning, it has become much easier to build convolutional neural network (CNN) models that can efficiently recognize text-based CAPTCHAs. In this study, we introduce an efficient CNN model that uses attached binary images to recognize CAPTCHAs. By making as many copies of the input CAPTCHA image as there are characters in it and attaching a distinct binary image to each copy, we build a CNN model that recognizes CAPTCHAs effectively. The model has a simple structure and small storage size and does not require segmenting CAPTCHAs into individual characters. After training and testing the proposed CAPTCHA recognition CNN model, the experimental results reveal the strength of the model in CAPTCHA character recognition.
(This article belongs to the Section Artificial Intelligence)

14 pages, 4396 KB  
Article
CAPTCHA Image Generation: Two-Step Style-Transfer Learning in Deep Neural Networks
by Hyun Kwon, Hyunsoo Yoon and Ki-Woong Park
Sensors 2020, 20(5), 1495; https://doi.org/10.3390/s20051495 - 9 Mar 2020
Cited by 12 | Viewed by 7636
Abstract
Mobile devices such as sensors are used to connect to the Internet and provide services to users. Web services are vulnerable to automated attacks, which can restrict mobile devices from accessing websites. To prevent such automated attacks, CAPTCHAs are widely used as a security solution. However, when a high level of distortion is applied to a CAPTCHA to make it resistant to automated attacks, the CAPTCHA becomes difficult for a human to recognize. In this work, we propose a method for generating a CAPTCHA image that resists recognition by machines while remaining recognizable to humans. The method utilizes style transfer to create a new image, called a style-plugged CAPTCHA image, by incorporating the styles of other images while keeping the content of the original CAPTCHA. In our experiments, we used the TensorFlow machine learning library and six CAPTCHA datasets in use on actual websites. The experimental results show that the proposed scheme reduces the rate of recognition by the DeCAPTCHA system to 3.5% and 3.2% using one style image and two style images, respectively, while maintaining recognizability by humans.
(This article belongs to the Special Issue Selected papers from WISA 2019)
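For readers unfamiliar with style transfer, the sketch below shows the usual content-plus-Gram-style objective behind a style-plugged image. The feature maps, layer choices, and loss weight are placeholders; the paper's exact network and weights are not reproduced here.

```python
# Content loss keeps the original CAPTCHA readable; the Gram-matrix style loss
# pulls the generated image toward the style image(s).
import torch

def gram(features: torch.Tensor) -> torch.Tensor:
    # features: (C, H, W) -> (C, C) channel-correlation matrix
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer_loss(gen_feats, content_feats, style_feats, style_weight=1e3):
    # Each argument is a list of (C, H, W) feature maps from a fixed backbone.
    content_loss = torch.mean((gen_feats[-1] - content_feats[-1]) ** 2)
    style_loss = sum(torch.mean((gram(g) - gram(s)) ** 2)
                     for g, s in zip(gen_feats, style_feats))
    return content_loss + style_weight * style_loss

# Dummy usage with random feature maps in place of real backbone activations.
feats = [torch.rand(8, 16, 16) for _ in range(3)]
loss = style_transfer_loss(feats, [f.clone() for f in feats],
                           [torch.rand_like(f) for f in feats])
# With two style images ("two-step"), the style term can be applied twice,
# transferring one style after the other or mixing both style targets.
```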

13 pages, 1227 KB  
Article
Applying Visual Cryptography to Enhance Text Captchas
by Xuehu Yan, Feng Liu, Wei Qi Yan and Yuliang Lu
Mathematics 2020, 8(3), 332; https://doi.org/10.3390/math8030332 - 3 Mar 2020
Cited by 38 | Viewed by 4065
Abstract
Nowadays, many applications and websites use text-based captchas to partially protect their authentication mechanisms. In recent years, however, various techniques have been exploited to automatically recognize text-based captchas, especially deep learning-based approaches such as the convolutional neural network (CNN). Text captcha design therefore needs to be strengthened. In this paper, exploiting the randomness of each encoding process in visual cryptography (VC) and its recognizability to the naked human eye, VC is applied to design and enhance text-based captchas. Experimental results using two typical deep learning-based attack models indicate the effectiveness of the designed method. With our VC-enhanced text-based captcha (VCETC), the recognition rate is reduced to some degree.
(This article belongs to the Special Issue Computing Methods in Steganography and Multimedia Security)
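A minimal sketch of the simplest (2, 2) random-grid visual cryptography split is shown below: the secret binary image is encoded into two noise-like shares whose overlay reveals it. The paper's actual VC construction may differ; this is only the textbook random-grid variant.

```python
# (2, 2) random-grid visual cryptography: each share alone looks random, but
# stacking them (pixel-wise OR) makes every black secret pixel black.
import numpy as np

rng = np.random.default_rng(42)

def vc_split(secret: np.ndarray):
    """secret: binary image, 1 = black. Returns two binary shares."""
    share1 = rng.integers(0, 2, size=secret.shape)
    # White secret pixel: share2 copies share1 (overlay stays half black).
    # Black secret pixel: share2 is the complement (overlay is always black).
    share2 = np.where(secret == 1, 1 - share1, share1)
    return share1, share2

secret = np.zeros((8, 8), dtype=int)
secret[2:6, 2:6] = 1                      # a black square standing in for the captcha
s1, s2 = vc_split(secret)
overlay = np.maximum(s1, s2)              # stacking shares = pixel-wise OR
assert np.all(overlay[2:6, 2:6] == 1)     # the secret region is fully black
```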

17 pages, 1393 KB  
Article
A Low-Cost Approach to Crack Python CAPTCHAs Using AI-Based Chosen-Plaintext Attack
by Ning Yu and Kyle Darling
Appl. Sci. 2019, 9(10), 2010; https://doi.org/10.3390/app9102010 - 16 May 2019
Cited by 22 | Viewed by 5722
Abstract
CAPTCHA authentication has been challenged by recent technological advances in AI. However, many of the AI advances challenging CAPTCHA are either restricted by a limited amount of labeled CAPTCHA data or constructed in an expensive or complicated way. In contrast, this paper illustrates a low-cost approach that takes advantage of open-source libraries for an AI-based chosen-plaintext attack. The chosen-plaintext attack described here relies on a deep learning model created and trained on an ordinary personal computer at low cost. It achieves an efficient cracking rate on two open-source Python CAPTCHA libraries, Claptcha and Captcha. This chosen-plaintext attack raises a potential security alert in the era of AI, particularly for small-business owners who use the open-source CAPTCHA libraries. The main contributions of this work are: (1) it is the first low-cost method based on a chosen-plaintext attack that exploits the nature of open-source Python CAPTCHA libraries; and (2) it combines TensorFlow object detection and our proposed peak segmentation algorithm with a convolutional neural network in a novel way to improve recognition accuracy.
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology)
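The chosen-plaintext ingredient is straightforward to sketch: because the open-source captcha library is public, an attacker can generate unlimited labeled training pairs. The snippet below uses the captcha package's ImageCaptcha class; the Claptcha API is analogous but not shown, and the charset, label length, and image size are assumptions.

```python
# Generate labeled CAPTCHA training data with the open-source `captcha`
# library, as a chosen-plaintext attacker could.
import random
import string
from captcha.image import ImageCaptcha

generator = ImageCaptcha(width=160, height=60)
charset = string.ascii_uppercase + string.digits

def make_sample():
    text = "".join(random.choices(charset, k=4))   # the chosen plaintext
    image = generator.generate_image(text)         # PIL image rendering that text
    return image, text

# Build a small labeled dataset for training a CNN recognizer.
dataset = [make_sample() for _ in range(1000)]
dataset[0][0].save("example_" + dataset[0][1] + ".png")
```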
