Review

Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses

1. School of Information, Shanghai Ocean University, Shanghai 201306, China
2. Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6544; https://doi.org/10.3390/s22176544
Submission received: 26 July 2022 / Revised: 22 August 2022 / Accepted: 26 August 2022 / Published: 30 August 2022
(This article belongs to the Special Issue Sensing for Biomedical Applications)

Abstract

Visual prostheses, used to help restore functional vision to the visually impaired, convert captured external images into electrical stimulation patterns that are delivered by implanted microelectrodes to induce phosphenes and, ultimately, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field of visual prostheses. Along with the development of prosthetic device design and stimulus encoding methods, researchers have explored the application of computer vision by simulating visual perception under prosthetic vision. Effective image processing is used to optimize artificial visual information and improve the ability of implant recipients to recover important visual functions, allowing them to better meet their daily needs. This paper first reviews the recent clinical implantation of different types of visual prostheses and summarizes the artificial visual perception of implant recipients, focusing especially on its irregularities, such as dropout and distorted phosphenes. Then, the main computer vision approaches to optimizing visual information processing are reviewed, and the possibilities and shortcomings of these solutions are discussed. Finally, development directions and key issues for improving the performance of visual prosthesis devices are summarized.

1. Introduction

In the 2019 World Health Organization report on global vision [1], it was revealed that at least 2.2 billion people worldwide have various vision problems, and Dr. Tedros added that 65 million people are blind or visually impaired. The main causes of blindness are retinal diseases, both hereditary and acquired. Among these, retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) are irreversible causes of blindness.
Loss of vision makes many everyday tasks difficult. People who are visually impaired can accomplish simple braille reading with the help of touch, some simple tasks with the help of hearing, and daily walking and outings with the help of guide dogs. With the development of technology, wearable and implantable visual aid electronic devices have benefited a certain number of visually impaired patients. Provided part of the visual pathway remains intact, these devices can restore partial functional vision by using implanted electrodes to induce light perceptions (phosphenes).
Research on artificial vision generation has been conducted since the 1950s, beginning with Tassicker's 1956 discovery [2] that a light-sensitive selenium cell implanted under the retina could produce light perception; since then, researchers have investigated electrical stimulation as a means of eliciting light perception. In 1974, Dobelle [3] implanted electrodes in the visual cortices of 15 visually impaired patients and produced light perception. Dobelle's clinical results gave hope to researchers and prompted them to consider the site of electrode implantation. Inspired by these studies, researchers have since developed a variety of visual aid electronic devices differing in implantation site and have called these devices for artificial vision generation visual prostheses. Depending on the implantation site of the electrodes, visual prostheses can be classified into four types: retinal prostheses, choroidal prostheses, cortical prostheses, and optic nerve prostheses. Retinal prostheses, which are less invasive, carry lower surgical risk, and are implanted closer to the central visual area, have become the focus of this field.
Research on retinal prostheses was initiated in 1989 by the Rizzo group in the United States, which was the first to investigate artificial prostheses that could help restore vision in patients blinded by RP and AMD [4]. Since 1990, the Humayun group [5] in the United States has continued to develop retinal prostheses and conducted the first clinical trial in 2002, implanting 16 electrodes into the retinas of six patients, who perceived discontinuous points of light after surgery. To improve the visual perception of implant recipients, the Argus® II retinal prosthesis, with 60 electrodes, was proposed in 2007 and was approved to enter the clinical stage in 2011 [6]. Increasing the number of implanted electrodes within the limits of human tolerance and providing patients with a 20° viewing angle helped some patients to recognize simple single letters of the alphabet. Although these hardware enhancements improved visual perception, they fall far short of patients' visual requirements, and the maximum viewing angle that can be provided is limited. Given the limits of hardware improvement, researchers began to study image processing optimization strategies.
Image processing plays an important role in the field of ophthalmology, for example in fundus lesion detection [7,8,9,10]. Optimization strategies for image processing mainly focus on performing operations such as foreground target extraction [11,12,13,14], edge detection [15,16,17,18], and segmentation [19,20,21,22] on images captured by visual prostheses using relevant computer vision methods, or on correcting the generated phosphene arrays. Early works focused on enhancing features of images or objects. In 2008, Boyle et al. [23] used a saliency-detection-based enlarged window to focus on the facial features inside the window and help subjects complete face recognition. With the improvement of computer vision technology, object detection and instance segmentation are being applied more in this field. McCarthy et al. [24] proposed an adaptive enhancement model that locally attenuates less important regions and keeps important features prominent to enhance perception at low resolutions. Some studies have tried to improve the image acquisition method, such as Li et al. [25], who captured outside images with a depth camera and separated the object or person from the background with a segmentation model. To deal with the visual irregularities of prostheses, some researchers have proposed image correction methods, and this line of work has focused mainly on Chinese character recognition, as in [26]. Image processing techniques have improved, to some extent, the limited artificial visual perception induced by visual prostheses, and how to apply them appropriately to enhance visual prosthesis device performance is the focus of current research.
This paper summarizes the clinical progress of visual prostheses and the visual perception of implant recipients in recent years, followed by an overview of visual information optimization studies for prosthetic devices, and then discusses the possibilities and shortcomings of these optimization strategies.

2. Clinical Advances in Visual Prosthetics

Visual prosthesis research has been conducted for roughly 50 years, since the 1970s, and some prostheses have been approved for the clinical phase. Different types of visual prostheses require different surgical procedures, and the corresponding clinical trial data vary. Information on the implantation site, the number of implanted electrodes, the clinical trial number, and the status of different visual prostheses in recent years is summarized in Table 1 [27]. In addition, the clinical trial reports of some typical prosthetic devices are analyzed and summarized, i.e., the actual rehabilitation of visual function and the visual perception of patients after implantation.
Table 1. Progress on clinical trials of visual prosthetics.
Implant Site | Visual Prosthesis | Electrode Number | Implant Visual Acuity | Clinical Trial Number | Status
Epiretinal | Argus® II [5,6,28,29,30,31,32,33,34,35,36,37,38,39] | 60 | 20/1260 | NCT03635645 | Received the CE mark in 2011 and FDA approval in 2013; two patients could identify a subset of the Sloan letters.
Epiretinal | IRIS II [40] | 150 | NA | NCT02670980 | Ten patients evaluated on functional visual tasks for up to 3 years.
Epiretinal | IMI [41,42] | 49 | NA | NCT02982811 | Follow-up of 20 patients with faint light perception for 3 months.
Subretinal | Alpha-AMS [43,44,45] | 1500 | 20/546 | NCT03629899 | Received the CE mark in 2013; patients achieved an optimal visual acuity of 20/546.
Subretinal | PRIMA [46,47,48,49,50,51] | 378 | 20/460 | NCT03333954 | Implantation in five patients started in 2017, with 36 months of follow-up.
Suprachoroidal | Suprachoroidal retinal prosthesis [52,53] | 49 | NA | NCT05158049 | Seven implant recipients were assessed for vision, orientation, and movement.
Suprachoroidal | Bionic Eye [54,55,56,57,58] | 44 | NA | NCT03406416 | Device safety was evaluated in 2018 by implantation in four subjects; the electrode–retinal distance increased and impedance remained stable after the procedure, with no side effects.
Intracortical | ORION [59] | 60 | NA | NCT03344848 | Six patients without photoreceptors were approved by the FDA for implantation in 2017, each with a 5-year follow-up; trial data are not yet publicly available.
Optic cortex | ICVP [59,60,61] | 144 | NA | NCT04634383 | Five participants, tested weekly for 1 to 3 years, were assessed for electrical-stimulation-induced visual perception.
Optic cortex | CORTIVIS [18,59,62] | 100 | NA | NCT02983370 | After receiving FDA approval, implanted in five patients for six months in 2019.
Data source: National Center for Biotechnology Information official website.

2.1. Epiretinal Prostheses

The study of the Argus series of retinal prostheses began in 1990 and developed rapidly, with the first implantations and long-term trials carried out in 2002 in a total of six patients with end-stage RP. The first-generation device, Argus® I, based on the cochlear implant, was implanted into patients' eyes with a 4 × 4 electrode array, and several visual tasks were carried out [6]: locating a white square on a black background, pointing out the path of a white line on a black background, and finding a door in a room. The results of the clinical trial showed that patients could recognize simple geometric shapes with the help of the prosthesis, and one patient showed some improvement in visual perception during the target localization and mobility tests. In addition to white square localization tests, Dagnelie et al. [28] had implant recipients perform sock classification (black and white only) and showed that the average success rate for sock classification in subjects with 60 electrodes was around 50%. With the same electrodes, the Humayun group implanted a permanent retinal prosthesis in the eye of a completely blind patient and performed individual letter recognition and word reading tests, showing that the patient was able to correctly recognize individual letters within 6 s to 221 s. During the test, the patient indicated that not all electrodes worked properly [32] and that the phosphene array seen via electrode stimulation was strongly distorted and lost some visual information, which may increase the difficulty of character recognition. A schematic representation of dropout or distorted individual letters during recognition is shown in Figure 1. The same feedback about visual information loss was obtained from other patients implanted with this prototype device under the same circumstances [5,6,33]. With a limited number of implanted electrodes, implant recipients could only perform simple visual tasks such as letter recognition. Meanwhile, it was found that patients saw a lower resolution than the number of implanted electrodes would suggest, with a distorted phosphene array after surgery. Simulation experiments suggest this may be because some implanted electrodes did not work or because electrodes were implanted over necrotic tissue [34]. The principle by which dropout or distorted phosphenes are induced is illustrated in Figure 2.
The second generation of the Argus product, Argus® II, increases the number of implanted electrodes to a 6 × 10 array and mounts a camera on the device's eyeglasses; captured images are processed and coded into stimulus commands for the electrode array, producing corresponding phosphenes. Besides having more electrodes and, in theory, evoking richer visual information, Argus® II was the first visual prosthesis in the world to receive CE mark approval and FDA approval, making it among the most common implants worldwide [35]. Breanne et al. [36] used the second-generation product to test patients on single-letter recognition, where the test letters were a subset of the Sloan alphabet containing O, V, K, and Z. Two subjects were able to complete 27 out of 36 trials correctly. In addition, patients reported seeing a defective array with distorted and dropped-out letter shapes, making it difficult to identify letters correctly. A follow-up survey of 32 patients implanted with the second-generation product showed that the number of truly effective electrodes ranged from 46 to 60 [6]. With a best achievable visual acuity of 20/1260 after implantation, patients were still unable to perform visual tasks smoothly in daily life. Beyeler et al. [37] also used a single-letter recognition test to assess the functional visual improvement after implantation of the Argus series in four patients. The results showed that the recovered visual acuity ranged from 20/15,887 to 20/1260 after implantation, while all patients reported deficits in vision, resulting in dropped-out letter information and increased recognition time. Other trials tested the effect of Argus® II implantation on motion detection, and clinical trial results showed that half of the patients had an improved ability to detect moving targets [38]. Abhishek et al. [39] measured the electrode–retina gap distance using Cirrus HD-OCT software and obtained a series of clinical trial data at 1 month, 3 months, 6 months, and 1 year postoperatively. The data showed that the distance between the electrode array and the retina affected the patients' ability to complete the square localization task: the greater the distance, the weaker the light sensation produced by the stimulated electrode, and the closer the distance, the stronger the light sensations that could be produced.
The IRIS prosthesis, which has 150 electrodes and is placed on the inner surface of the retina, was first implanted in 2017. Several clinical trials tested visual tasks in 10 patients after implantation [40]; the results showed that the mean error distance was reduced from 8 to 2 in the square localization test, and the mean accuracy for image recognition improved from 45% to about 55%. However, image recognition performance with the device did not reach the passing line (60%), and the device had a short life span. Twenty participants implanted with the epiretinal prosthesis IMI, which consists of platinum microelectrodes [41,42], stated during the follow-up period that they could perceive a weak sense of light with the prosthesis, which was not sufficient to help with activities of daily living. Retinal detachment occurred in some patients during the 3-month follow-up period.

2.2. Subretinal Prostheses

Alpha-AMS is representative of the subretinal prostheses; the implant contains an array of 1500 active microphotodiodes implanted subretinally. Katarina et al. [43] reported the performance of three patients with Alpha-AMS subretinal implants on a 26-letter recognition test in which the patients were able to recognize only five letters, T, V, L, I, and O, when each letter was displayed individually. Other investigators summarized the visual acuity of patients after the implantation of Alpha-AMS [44] and showed that the optimal level of visual acuity the patients could achieve was 20/546. Zrenner et al. [45] evaluated the recovery of functional visual acuity after implantation. Two patients were unable to identify the Landolt C ring and individual letters, and only one patient was able to identify individual letters such as L, T, and Z, taking up to 60 s per letter. Two patients were able to distinguish between the different positions of the opening of the letter "U" and achieved 73% and 88% correct response rates. Meanwhile, some patients took more than 40 s to recognize individual letters after implantation, as in [32]. PRIMA [47,49], a prosthetic device with 378 electrodes, was shown to have an implantation life of up to 3 years in animals [50]. In 2018 [51], it was successfully implanted in three subjects. The follow-up comparison showed that the three patients with subretinal implantation sites achieved visual acuity between 20/460 and 20/550, whereas two patients with suprachoroidal implantation sites achieved 20/800, indicating that the subretinal implantation site was optimal. However, the optimal visual acuity after implantation was far below normal vision, and the patients still had difficulty performing visual tasks in daily life. From the above clinical results, it can be concluded that subretinal and epiretinal prostheses can both elicit a kind of light sensation called a "phosphene". Comparing the signal processing methods, the epiretinal prosthesis relies on an extra-ocular information processor, while the subretinal prosthesis uses a processing path closer to that of natural vision. From the perspective of implantation risk, the epiretinal device is less damaging to the retina, while the small subretinal space poses a greater challenge for electrode encapsulation and design.
Suprachoroidal prostheses are also considered a type of retinal prosthesis. Fujikado et al. [52] implanted a suprachoroidal prosthesis with 49 electrodes into three patients with RP. For one year after surgery, all patients performed daily tests at home, such as viewing white lines on a black background, square localization, walking along a line, and differentiating between dishes and chopsticks, to assess the effectiveness of the suprachoroidal prosthesis in improving functional vision. Two patients with RP reported that stimulation of the electrodes did not produce phosphenes in the expected locations [53] and that the viewed phosphene array was distorted. Mathew et al. [56,58] performed square localization (SL) testing and functional visual acuity assessment in four patients with advanced RP and four with advanced AMD after implantation of the Bionic Eye choroidal implant with 44 electrodes. The results showed that the mean pointing error in SL decreased from 27.7 ± 8.7° to 10.3 ± 3.3° in the four RP patients who used the device, while the mean success rate was lower than 40% for the four AMD patients who performed the search-for-objects-on-the-table test.

2.3. Visual Cortex Prostheses

As early as 1976, research began on the ICVP visual cortex prosthesis project; later, ICVP adopted a wireless floating microelectrode array (WFMA) to replace earlier implantation kits that used a large number of wires to connect electrodes, reducing the cost and risk of the procedure. Brindley et al. [63] implanted a prosthesis in a completely blind patient, who performed a word reading test; the best level the patient could achieve was 8.5 characters/min. Fernández et al. [18] implanted a CORTIVIS consisting of 96 electrodes in the visual cortex of a totally blind patient for 6 months, during which the patient was given a letter recognition test. The results showed that the patient was able to recognize only five of the 26 letters: I, L, C, V, and O. Another CORTIVIS prosthesis [64] produced vision by evoking 100 microelectrodes. Patients described the evoked phosphenes as flickering, colored or colorless pinpoint stars that dropped out and were distorted during the single-letter recognition test; the irregularities are shown in Figure 3. Chen et al. [65] implanted 1024 electrodes into the visual cortices of monkeys, and the monkeys were able to recognize simple shapes or letters when the electrodes were evoked. Because the visual cortex is located in the occipital lobe of the brain, far from the center of the human visual field, the surgery is riskier, and there have been fewer clinical trials of visual cortex prostheses.
For the several abovementioned prosthetic devices that have entered or completed the clinical phase, researchers assessed the functional visual acuity of patients wearing the prosthesis through different visual tasks. Single-letter recognition, square localization, and similar tests were required most often, but patients did not perform well even on these simple tasks; for example, recognizing single letters took more than 40 s with an implant containing 60 electrodes [32] or a 1500-diode array [45]. Additionally, most implant recipients reported that the evoked phosphene points showed dropout and the arrays were distorted, which still hinders daily visual tasks. Researchers in the field of prostheses have therefore started to look for optimization solutions to further improve the limited visual perception.

3. Optimization of Information Processing in Visual Prosthetics

To provide better visual perception to the patient, researchers have looked for factors that influence the visual perception of implant recipients, such as the material of the electrode and the density of the array [17], the stimulation parameters of the electrode [67,68,69], the distance between the electrode and the implantation site [70], and others. Seung et al. [71] used a liquid crystal polymer (LCP) to fabricate a smoothly rounded, flexible electrode and implanted it into the choroid of rabbit eyes, showing that this electrode was safe and stable and could be effectively used for retinal implants. Argus® II increased the number of electrodes from 16 to 60, provided nearly four times the resolution to the patient, and was theoretically capable of providing significantly more visual information than the first generation. The characteristics of the phosphenes are influenced by the electrode stimulation parameters; for example, synchronous pulses affect the brightness and shape of phosphenes [72] and produce a higher level of visual perception than asynchronous stimulation, and the degree of influence is also closely related to the configuration and location of the implanted electrodes [73]. Rebecca et al. [74] used electrodes made of activated iridium oxide (AIROF) to maintain an anodic potential bias during interpulse intervals, which could satisfy the charge level required for neural stimulation and reduce electrode polarization. To avoid phosphenes lasting less than half a second due to the desensitization of retinal ganglion cells, Chenais et al. [75] proposed a more natural stimulation strategy based on the temporal modulation of electrical pulses, which was validated in mice with a phosphene duration of 4.2 s.
Hardware upgrades, such as increasing the number of electrodes, are necessary for visual prosthetic devices, but they currently face many practical difficulties. Therefore, researchers are looking for image processing strategies that optimize the visual information at low resolution so that implant recipients can make better use of the artificial vision available from current prosthesis devices. Image processing optimization mainly uses effective computer vision techniques to extract useful information and presents it in ways suited to different visual tasks, ultimately providing more useful visual information to the recipients. Depending on the target in the assessment of functional vision with a visual prosthesis, widely studied visual tasks include face recognition, letter recognition, and object recognition.
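Most of the optimization strategies reviewed below are evaluated under simulated prosthetic vision, in which a processed image is reduced to a coarse electrode grid and rendered as an array of Gaussian phosphene dots for normally sighted subjects. The following Python sketch shows one minimal way such a simulator could be written; the grid size, dot width, and dropout rate are illustrative assumptions rather than parameters of any device or study cited here.

```python
import numpy as np
import cv2  # OpenCV, used here only for area-averaged resizing


def simulate_phosphenes(gray, grid=(32, 32), out_size=480,
                        sigma_frac=0.35, dropout=0.0, rng=None):
    """Render a coarse phosphene-array view of a grayscale image.

    gray       : 2D uint8 array (the captured or pre-processed image)
    grid       : number of simulated electrodes (rows, cols)
    out_size   : side length of the rendered simulation image, in pixels
    sigma_frac : Gaussian dot width as a fraction of the inter-dot spacing
    dropout    : fraction of phosphenes switched off (0 = ideal array)
    """
    rng = np.random.default_rng() if rng is None else rng

    # 1. Downsample: each electrode takes the mean brightness of its block.
    levels = cv2.resize(gray.astype(np.float32), grid[::-1],
                        interpolation=cv2.INTER_AREA) / 255.0

    # 2. Optionally drop electrodes to mimic non-functional ones.
    if dropout > 0:
        levels = levels * (rng.random(levels.shape) >= dropout)

    # 3. Render each active electrode as a Gaussian bright dot.
    out = np.zeros((out_size, out_size), np.float32)
    step = out_size / max(grid)
    sigma = sigma_frac * step
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    for r in range(grid[0]):
        for c in range(grid[1]):
            if levels[r, c] <= 0:
                continue
            cy, cx = (r + 0.5) * step, (c + 0.5) * step
            out += levels[r, c] * np.exp(
                -((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)
```

A processed image produced by any of the strategies below can be passed through a simulator of this kind, so that task performance with and without the optimization can be compared under the same rendering assumptions.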

3.1. The Optimization Strategy of Face Recognition

People communicate socially with others very frequently in daily life, so improving face recognition through image processing is one of the important directions in prosthesis research. Related studies have conducted recognition tasks with either unfamiliar or familiar faces. Boyle et al. [23] designed six image-enlargement processing schemes and let subjects choose the best scheme for face recognition under prosthetic vision. The results showed that image optimization based on a saliency-detection magnification window was chosen most often and was thus considered the most effective. Wang et al. [76] proposed three face detection strategies to investigate the appropriate regions for face recognition under artificial prosthetic vision: the first detects faces with the Viola–Jones face detection technique (VJFR) and boxes out face regions; the second extracts face regions according to statistical face ratios (SFR) based on the results of VJFR; the third uses the matting face region (MFR) depending on the detection results of the previous two methods. Among the three methods at low resolution, subjects achieved a best recognition accuracy of 67.22 ± 14.45%. The experimental results also indicated that hair was important for familiar face recognition at low resolution.
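As a rough illustration of the first of these strategies, a face region can be boxed out with OpenCV's stock Viola–Jones cascade before pixelation. This is only a sketch of the general idea under assumed parameters (detector model, margin, largest-face rule); the exact detector and settings used in [76] are not reproduced here.

```python
import cv2


def extract_face_region(bgr_image, margin=0.2):
    """Detect the largest face with a Viola-Jones cascade and crop it,
    plus a small margin, so that only the face region is pixelated later."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # caller can fall back to the full frame

    # Keep the largest detection and pad it by `margin` on each side.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    pad_x, pad_y = int(margin * w), int(margin * h)
    y0, y1 = max(y - pad_y, 0), min(y + h + pad_y, bgr_image.shape[0])
    x0, x1 = max(x - pad_x, 0), min(x + w + pad_x, bgr_image.shape[1])
    return bgr_image[y0:y1, x0:x1]
```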
Interior feature extraction is particularly important in familiar face recognition because interior features (e.g., glasses, nose, and mouth region) can help subjects identify familiar faces. Rollend et al. [77] proposed an image enhancement method using efficient local binary pattern (LBP) features to detect faces: when the detected face intersects the implanted field of view, the area around the face is segmented with an ellipse and face contrast is enhanced by histogram equalization, achieving real-time face detection at low resolution. Moreover, to highlight interior features, Jessica et al. [78] caricatured face images to exaggerate identity information in both familiar and unfamiliar face recognition. Average female and male faces were computed from the locations of marked facial attributes, and the differences between the attributes of the target face and those of the average face were exaggerated; a person with thick lips, for example, was caricatured so that the lips became thicker. At a resolution of 40 × 40 and a dropout rate of 30%, the average face recognition accuracy of the subjects improved from 55% to 65%, exceeding the passing level. The schematic illustration of the principle is shown in Figure 4; A, B, and C in Figure 4 are the results of the face processing. To further reduce the difficulty of face recognition, Zhao et al. [79] proposed a FaceNet-based strategy to transform and replace complex face information with simple Chinese characters (surnames) in real time, resulting in recognition accuracy values of 77.37% and above and providing a possible new direction for improving face recognition in the field of prosthetics. Chang et al. [80] combined Sobel edge detection and contrast enhancement techniques to highlight interior features of familiar faces. The proposed contrast enhancement was a novel histogram equalization technique that adjusts the input histogram by adaptively changing parameters to enhance the image naturally. The face images selected in the experiments were all familiar to the subjects. The results showed that the subjects' face recognition accuracy reached 27 ± 12.96%, 56.43 ± 17.54%, and 84.05 ± 11.23% at the three resolutions (8 × 8, 12 × 12, and 16 × 16), respectively, while the subjects' average response times for recognizing face images were 3.21 ± 0.68 s, 2.73 s, and 1.93 ± 0.53 s, respectively. Recently, Xia et al. [81] proposed an F2Pnet for translating faces into pixelated faces, and 14 subjects were recruited for face recognition tests. The training dataset was AIRS-PFD, and the results showed that mean individual identifiability was 58% with pixelated faces and 46% with reduced resolution and display degradation (30%).
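A rough sketch of the kind of "contrast enhancement plus edge highlighting" pipeline described above is given below. It uses plain global histogram equalization instead of the adaptive equalization proposed in [80], and the blending weight is an assumed illustrative value, so it only conveys the general structure of such a method.

```python
import cv2
import numpy as np


def enhance_face_for_prosthesis(gray_face, edge_weight=0.5):
    """Equalize a grayscale (uint8) face image and overlay Sobel edges.

    Blending equalized intensity with gradient magnitude is meant to keep
    interior features (eyes, nose, mouth) visible after the image is later
    downsampled to a low-resolution phosphene grid.
    """
    equalized = cv2.equalizeHist(gray_face)          # global contrast stretch
    gx = cv2.Sobel(equalized, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(equalized, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)                    # Sobel gradient magnitude
    edges = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX)
    blended = cv2.addWeighted(equalized.astype(np.float32), 1.0 - edge_weight,
                              edges, edge_weight, 0)
    return np.clip(blended, 0, 255).astype(np.uint8)
```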
Methods such as image enhancement and edge detection of interior face features have been shown to help improve face recognition under prosthetic vision, and some methods have been applied in prosthetic devices [82]. However, studies of image optimization algorithms for face recognition under irregular artificial prosthetic vision remain relatively scarce, and the resolutions used in the studies conducted so far are much higher than the number of electrodes in the more widely implanted prosthetic devices. The performance of image optimization for face recognition deserves further research that takes the irregularities of real artificial vision into account.

3.2. The Optimization Strategy for Character Recognition

Character recognition has likewise received much attention as an important direction in prosthesis research. Early studies focused on the effects of phosphene properties, such as dot size and number, on character recognition [11,83,84,85]. Some work utilized image processing methods, such as Fu et al. [86], who processed images with cropping and segmentation. Considering the presence of dropout phosphenes and array distortions, some researchers have mitigated the adverse effects of such irregularities through image processing. Dai et al. [26] proposed two correction methods, a weighted nearest neighbor search (NNS) and expansion based on image morphology, to improve the recognition of Chinese characters under irregular phosphene arrays. The results demonstrated that the average accuracy after correction was more than 80% when the index of array irregularity reached 0.4. Based on this work, Lu et al. [87] optimized the NNS and further proposed a projection method to improve the reading ability of subjects with irregular phosphene arrays; the specific processing flow is shown in Figure 5. In Lu's study, the NNS searches the evoked irregular phosphene array for the nearest phosphene dot q_k within a circle centered on the point p_i of the ideal regular phosphene array; the schematic diagram is shown in Figure 6, where the observed q_k replaces p_i to express the visual information. Projection refers to superimposing normal characters on a phosphene array of the same size and pixelating the strokes over the viable phosphenes to generate the corresponding pixelated character results.
According to the experimental results, the accuracy of Chinese character recognition under both strategies was higher than before optimization, and the effect was better under the nearest neighbor search optimization strategy. The results also indicated that, for the NNS, the larger the search range, the higher the subjects' recognition accuracy; the accuracy reached or exceeded 69.4 ± 3.4% when the search range reached or exceeded 0.6 times the adjacent phosphene point spacing. This is because the NNS method compensates for the features lost to the distortion and dropout of character strokes while preserving the structure of the Chinese characters.
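A minimal sketch of the nearest-neighbor mapping described above is given below, assuming that the ideal grid and the viable (evocable) phosphenes are available as 2D coordinates and that the search range is expressed in units of the ideal grid spacing; the function name and the toy coordinates are illustrative only.

```python
import numpy as np


def nns_correct(ideal_points, viable_points, radius):
    """Nearest-neighbor-search correction for an irregular phosphene array.

    For every point p_i of the ideal regular grid, find the closest viable
    (actually evocable) phosphene q_k within `radius`; if one exists, p_i's
    brightness is displayed through q_k, otherwise that point is lost.
    Returns one entry per ideal point: a viable index or None.
    """
    viable = np.asarray(viable_points, dtype=float)
    mapping = []
    for p in np.asarray(ideal_points, dtype=float):
        d = np.linalg.norm(viable - p, axis=1)
        k = int(np.argmin(d))
        mapping.append(k if d[k] <= radius else None)
    return mapping


# Toy example: a 2 x 2 ideal grid (spacing 1) and three surviving,
# slightly displaced phosphenes; the radius is 0.6 grid spacings.
ideal = [(0, 0), (0, 1), (1, 0), (1, 1)]
viable = [(0.1, 0.05), (0.9, 1.1), (1.05, -0.1)]
print(nns_correct(ideal, viable, radius=0.6))  # -> [0, None, 2, 1]
```

With a radius of 0.6 grid spacings, as in the reported experiments, ideal points that have no surviving phosphene nearby are simply dropped, while the rest are redirected to their nearest viable neighbor.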
Kiral-Kornek et al. [88] proposed extracting edge orientation information and encoding it as directional elliptical phosphenes to improve letter recognition performance under prosthetic vision. The results showed that, at a dropout rate of 50%, subjects achieved 65% recognition accuracy using the directional phosphene strategy, significantly higher than the 47% accuracy under the uniform stimulation strategy. Hyun et al. [89] investigated the effects of image presentation methods on character recognition at different stimulus frequencies; if the electrode stimulation frequency is too high, the subject may see multiple phosphenes or even one large phosphene occupying the entire visual field. At the two resolutions of 6 × 6 and 10 × 10, Hyun et al. used two pixelization methods for Korean and English letters: a static pixelization method and a spatiotemporal pixelization (SP) method. In the SP method, the original image is downsampled with a block-averaging algorithm at four times the spatial resolution of the pixelation, and the block-averaged image is then subsampled into four different low-resolution images. A two-dimensional Gaussian function is convolved with each subsampled image to generate four different phosphene frames, which are presented to the subjects at different stimulus frame rates. The spatiotemporal pixelization strategy significantly improved recognition accuracy from a failing grade to 80%. This sequential stimulation of subsampled images "splits" the stroke structure into four parts and takes advantage of the human brain's short-term memory to achieve character recognition.
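The spatiotemporal pixelization idea can be sketched roughly as follows: block-average the character image at twice the target grid resolution per axis and split the result into four interleaved low-resolution frames to be shown in sequence. The Gaussian phosphene rendering and the specific frame rates studied in [89] are omitted, and the interleaving scheme shown is only one plausible way of forming the four subsampled images, assumed for illustration.

```python
import numpy as np
import cv2


def spatiotemporal_pixelization(gray, grid=10):
    """Split one character image into four interleaved low-resolution frames.

    The image is block-averaged at twice the target grid resolution per axis,
    then the four interleaved phases (even/odd rows x even/odd columns) are
    taken as separate frames to be presented in sequence, letting short-term
    memory fuse them into a finer percept than a single static frame.
    """
    dense = cv2.resize(gray.astype(np.float32), (2 * grid, 2 * grid),
                       interpolation=cv2.INTER_AREA)
    frames = [dense[r::2, c::2] for r in (0, 1) for c in (0, 1)]
    return frames  # four (grid x grid) arrays, one per stimulation frame
```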
Character recognition is an important part of daily life, and researchers have used computer vision image processing methods to assist visually impaired people with character recognition. Compared with face and object recognition, character/letter recognition is less difficult and commonly does not require high visual acuity. Likewise, in prosthetic vision, characters/letters do not need many phosphenes to convey information and mostly use simple preprocessing methods such as binarization. During the clinical trial phase, implant recipients were able to perform character recognition faster or better after a short training period compared with the other visual tasks tested [43]. To reduce the adverse effects of irregular artificial vision on reading or daily written communication, researchers have proposed several array optimization methods, which have been validated in Chinese character recognition under simulated artificial vision. After irregularity optimization under simulated prosthetic vision, subjects achieved recognition rates of more than 80% [26]; even for Chinese characters with complex strokes, there was a large improvement in recognition rate with optimized correction. Consequently, there have been fewer studies on character recognition and its information optimization methods in recent years. On the other hand, the role of these correction methods for characters in other languages should be investigated further in the future.

3.3. The Optimization Strategy of Object Recognition

Similar to the studies of face recognition, research on object recognition in this field aims to extract and enhance useful information from the low-resolution artificial visual field to give implant recipients better visual perception and object recognition ability. Li et al. proposed a top-down model for global contrast saliency detection [90], which detects and extracts the most conspicuous objects in the scene in real time by combining color and intensity differences. A set of visual tasks was designed under simulated ideal artificial vision, in which subjects were asked to find a target object at a distance of 2 m. The average time to complete the task was around 62 s before using the optimized strategy and around 42 s with it. To better assess the effect of the optimization strategy on daily life, a second eye–hand coordination task was designed in which subjects were asked to find two target objects among the four objects in front of them and complete the corresponding actions. The mean rate of correctly completed tasks (PC) increased from 62.85 ± 1.54% to 84.72 ± 1.41%, and the mean completion time (CT) decreased from 49.5 ± 3.76 s to 40.73 ± 2.1 s. Taken together, the mean PC and CT for both types of visual task verify the effectiveness of the optimization strategy. Meanwhile, the mean head motion of the subjects decreased from 939.31 ± 38.38° to 575.70 ± 38.53°, indicating that the search scope was significantly reduced and the ability to perceive was improved. A year later, Li et al. [91] used another saliency detection model, the bottom-up graph-based visual saliency (GBVS) model, combined with edge extraction to help locate foreground regions, which could also help the recognition of one or two objects of interest at an ideal low resolution.
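As a crude stand-in for the global contrast saliency model of [90] (which is not reproduced here), the sketch below scores each pixel by how far its color lies from the image's mean color in Lab space; bright regions of the resulting map can then be kept while the background is attenuated before pixelation.

```python
import numpy as np
import cv2


def global_contrast_saliency(bgr_image):
    """Crude global-contrast saliency map, scaled to [0, 1].

    Each pixel is scored by the distance of its color from the image's mean
    color in Lab space, so large uniform backgrounds score low and
    conspicuous foreground regions score high.
    """
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_color = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean_color, axis=2)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```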
Considering the deficits in real artificial vision, Li et al. [92,93] utilized a generative adversarial network (GAN) model, which has had remarkable success in the field of image inpainting, to compensate for the absence of phosphene points. The Pix2pix GAN was used to learn the mapping relationship between RGB images and pixelated results with a generator and a discriminator, generating pixelated results with added phosphene points close to the real ones. The principle of the model is shown in Figure 7, and the calculation is shown in Equation (1).
pixel_reconstructed = y ⊙ M + (1 − M) ⊙ G(Z)        (1)
where M denotes the input binary mask; y denotes the input phosphene image with dropout; G(Z) is the mapping representing the missing parts; and ⊙ is the Hadamard product.
By inputting arbitrary Gaussian noise z into the generator, the image features of simulated ideal phosphenes were learned to obtain a mapping G(z) close to the real generated image; the input to the generator was an image y with dropout phosphenes. The Hadamard product of y and the binary mask M of the dropout region was calculated to obtain the image y ⊙ M of the non-dropout part. Simultaneously, the discriminator determined the difference between the input image y and the real image, using difference back-propagation and adversarial training to reconstruct the dropout part of y and obtain the optimal result G(Z), for which the generator used previously learned features; the optimal solution Z was obtained by back-propagating the updated generator parameters while minimizing the global loss. Then, the non-dropout part y ⊙ M and the optimal generated dropout part (1 − M) ⊙ G(Z) were summed to obtain the final complete reconstructed image, pixel_reconstructed. Subjects were asked to verbally identify the object appearing on the screen within 10 s at a distance of 35–40 cm from the display. The test results showed that the average recognition accuracy of subjects ranged from 35.0 ± 4.3% to 60.0 ± 6.1%, and the accuracy improved to 80.3 ± 7.7% after using the phosphene point addition optimization strategy. The model decreased the difficulty of the recognition process, and subjects could recognize most of the pixelated objects.
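A small sketch of the mask-based combination in Equation (1) is shown below; the generator itself (the trained Pix2pix model of [92,93]) is not reproduced, so its output is passed in as a plain array, and the array names are illustrative.

```python
import numpy as np


def reconstruct_phosphene_map(y, mask, generator_output):
    """Combine surviving phosphenes with generated ones (Equation (1)).

    y                : pixelated image containing dropout phosphenes
    mask             : binary mask M (1 where phosphenes survive, 0 where lost)
    generator_output : G(Z), the generator's estimate for the missing region
    All three arrays share the same shape; `*` is the Hadamard product.
    """
    y = np.asarray(y, dtype=float)
    mask = np.asarray(mask, dtype=float)
    g = np.asarray(generator_output, dtype=float)
    return y * mask + (1.0 - mask) * g
```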
To aid the visual prosthesis wearer in daily perceptual tasks through scene understanding, Melani et al. [94,95] used a strategy combining structural informative edges (SIE) and object mask segmentation (OMS) to help identify objects and rooms. The objects in the indoor scene were highlighted with instance segmentation to reduce background interference, the structural information in the scene was extracted with semantic segmentation, and the edge information was extracted with Canny edge detection. Object recognition and scene recognition tests were conducted to simulate the daily life of subjects at a 20° viewing angle with a 32 × 32 resolution. In the object recognition task with direct low pixelation, the subjects' correct object recognition rate was 36.83%, compared with 62.78% using the SIE-OMS strategy; the success rate for the scene recognition task with the same strategy was also significantly higher than with direct low pixelation and edge detection. To reduce the difficulty of recognition when multiple target objects in the field of view overlap, Jiang et al. [96] proposed a hierarchical method that assigns different grayscale levels to multiple targets according to each object's location, size, and importance in the scene, based on the segmentation of multiple targets with Mask R-CNN. Ultimately, the subjects achieved an average task completion rate of 87.08 ± 1.92% on the test describing the number of objects in the scene and 60.31 ± 1.99% on the test describing the content of the scene. Considering the depth information of objects, David et al. [97] proposed an InI-based object segmentation model that extracts objects from the scene based on their depth, derived from the distance to the camera (simulating a human eye), pixelates the extracted objects to improve their clarity, and reduces some distracting spatial and temporal effects. Other scholars have studied other image acquisition methods; for example, Dagnelie et al. [98] used infrared enhancement to help subjects with cup recognition. Also using infrared images, Liang et al. [99] proposed an infrared image enhancement algorithm based on an improved SAPHE algorithm to enhance image contrast and highlight edge contours for object recognition. The experimental results showed that the subjects achieved an average recognition accuracy of 86.24 ± 1.88% with infrared-mode processing, higher than the 64.55 ± 3.34% with direct low-resolution pixelation. The depth information of images plays an important role in obstacle avoidance and navigation tasks. Alejandro et al. [100] used depth information to detect obstacles while guiding the walking direction with a chessboard grid pattern. To make better use of the depth information in the scene, Rasla et al. [101] proposed a scene simplification strategy based on depth estimation and semantic edge detection, using a neurobiologically inspired bionic visual computational model to simulate obstacle avoidance tests; depth estimation was implemented by monodepth2, a self-supervised monocular depth estimation model, to predict the relative depth map of the pixels in each frame.
With the scene simplified by semantic edge detection and the combination of depth information and edges, the success rate of subjects completing the obstacle avoidance test with zero collisions was above 80% under simulated prosthetic vision.
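The layering ideas above (brighter phosphenes for nearer or more important objects) can be sketched as follows, assuming that the instance masks and the relative depth map are produced by upstream models such as an instance segmentation network and a monocular depth estimator; the grayscale levels and the nearest-first ordering are illustrative choices, not values from the cited studies.

```python
import numpy as np


def layer_objects_by_depth(object_masks, depth_map, levels=(255, 170, 85)):
    """Assign one grayscale level per segmented object, nearer objects brighter.

    object_masks : list of non-empty boolean arrays, one mask per object
                   (e.g., produced by an instance segmentation model)
    depth_map    : float array of the same shape; larger values = farther away
    levels       : grayscale values handed out from nearest to farthest object
    """
    canvas = np.zeros(depth_map.shape, dtype=np.uint8)
    # Order objects by their median depth, nearest first.
    order = sorted(range(len(object_masks)),
                   key=lambda i: float(np.median(depth_map[object_masks[i]])))
    for rank, i in enumerate(order):
        canvas[object_masks[i]] = levels[min(rank, len(levels) - 1)]
    return canvas
```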
Methods based on saliency detection and segmentation, which are widely used in the field of visual prostheses, detect the most salient object or objects in the field of view, remove complex backgrounds, and satisfy the requirement of providing limited information at low resolution. Some scholars have also studied the use of depth information to detect task-related information. Few researchers have focused on the deficits present in artificial vision or proposed correction methods for object recognition. Further research should consider the proposed optimization methods under more irregular artificial vision conditions, such as distortion.

3.4. Summaries of Optimization of Information Processing

Since research on the optimized processing of visual information began, different computer vision methods have been applied in the image processing stage, and subjects have shown improved performance on different visual tasks. Table 2 summarizes the image processing optimization methods for the three vision tasks, with or without consideration of phosphene point dropout or phosphene array distortion, as well as the optimization and final test results.
In the simulation experiments conducted in the reviewed image processing optimization studies, the selected subjects were normally sighted and unfamiliar with artificial vision, to avoid learning effects in the psychophysical experiments. The datasets used in the experiments were mostly created by the researchers themselves. A few studies used public datasets, such as the public indoor scene dataset [102] used by Melani et al. [94,95] and the ETH-80 adopted in Li's work [92]. In face and character recognition, some well-known datasets were also utilized, such as AIRS-PFD in Xia's work [81] and the standard MNREAD reading test in the work of Fu et al. [86]. Others captured images directly from the camera in real time as experimental images. However, without public datasets, it is difficult to generalize the researchers' experimental results. Preprocessing was sometimes applied using common methods or computer vision techniques; for example, Boyle et al. [23] cropped images to fit the window size, Wang et al. [76] used noise reduction for face recognition, and Chang et al. [80] extracted face edges with the Canny operator. Before layering and optimizing object information, Li et al. [96] utilized Mask R-CNN to obtain the masks of multiple object instances, which can be regarded as preprocessing. Meanwhile, certain image processing optimization strategies have high hardware requirements, and computing devices with GPUs are essential for running the algorithms in real time. The above image processing optimization strategies applied to different visual tasks brought better visual perception to the subjects to some extent, and the simulation results in each study mostly assessed the effectiveness of these strategies quantitatively. However, the majority are simulation studies under ideal arrays [21,22,90,91,96,99,103,104,105,106,107,108], with the exception of [26,78,81,87,92], which consider the irregularities of real artificial vision.

4. Discussion

Visual prosthetics have provided an important research direction for repairing the visual perception of the visually impaired and have been advanced through certain clinical applications. Visual prostheses are not a fundamental cure for visual impairment, but they give the visually impaired an opportunity to improve their ability to perform functional visual tasks. This paper reviews the visual perceptual ability of implant recipients in clinical trials and studies of image processing optimization in the field of simulated prosthetic vision. Although the experimental results of many studies are promising, some problems still need to be solved. Few prosthetic devices have entered the clinical phase, and some are implanted far from the center of the visual field and are not well perceived by the wearer. Implantation into the patient's eye can take a long time [99,109,110,111,112,113] and carries surgical risks. The number of electrodes in these devices is limited, and there are irregularities in the induced phosphenes. The more widely worn Argus II has an external image processing unit that includes edge detection and enhancement technology; however, these methods are relatively simple and provide global information to the wearer that is only loosely related to specific visual tasks. While image processing algorithm research is spreading in the field of visual prosthesis applications, some issues are worth considering. Most of the image processing algorithms investigated so far are image optimization methods applied to single vision tasks on static images. In real life, however, people typically perform two or more visual tasks at the same time; when putting on clothes, for example, people may perform object detection and eye–hand coordination simultaneously. Such multitasking requires image processing methods that guarantee good performance while running in real time, and few models in current research meet these requirements. Meanwhile, the dropout of phosphene points and the distortion of phosphene arrays in artificial vision have received little consideration. Additionally, the simulated phosphenes are mostly colorless, whereas clinically evoked phosphenes are colored, such as yellow, red, and orange [65,114,115]. Recently, Vernon et al. [116] proposed a hybrid stimulus model that can provide color information without reducing spatial resolution. A better understanding of colored phosphene vision will help improve research on visual function restoration for prosthesis wearers. Some researchers have also improved electrode implantation by proposing an array in a honeycomb configuration [117]; this configuration offers great possibilities for improving the spatial resolution of the visual prosthesis. On the basis of existing image optimization research, future work should focus on irregular array optimization for different image categories, and increasing the color information of phosphenes should be pursued alongside efforts to improve the resolution of visual prostheses.

5. Conclusions

Most studies of information processing optimization show that computer vision can be used to improve the visual functions of wearers, such as object recognition, face recognition, and character recognition. Future visual prosthesis devices may have smaller implanted electrodes, allowing the implantation of higher-density microelectrode arrays. However, increasing the density of electrode arrays may not always produce the expected high-resolution artificial visual information and may introduce phenomena such as virtual electrodes, increase the risk of tissue damage, and raise the cost of implantation. These issues indicate that growth in the number of implanted electrodes will be limited in the near future. With the growing development of artificial intelligence, more accurate and efficient image detection and segmentation techniques are emerging, offering the possibility of improving the image processing modules in prosthetic devices to optimize artificial vision. It is believed that improvements in visual prosthesis hardware and the application of computer vision will complement each other to optimize the vision elicited by artificial visual prostheses, bringing the hope of "seeing" to prosthesis wearers.

Author Contributions

Writing—original draft preparation, J.W. and R.Z. (Rongfeng Zhao); writing—review and editing, P.L., Y.Z., Z.F., Q.L. and R.Z. (Ruyan Zhou); funding acquisition, Y.Z. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61806123, 41871325, 41506213, 41401489, 41376178), the National Key R&D Program of China (Grant No. 2019YFD0900805), and the Shanghai Sailing Program (Grant No. 16YF1415700).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge support from the National Natural Science Foundation of China (61806123, 41871325, 41506213, 41401489, and 41376178), the National Key R&D Program of China (2019YFD0900805), and the Shanghai Sailing Program (16YF1415700).

Conflicts of Interest

The authors declare that they have no conflict of interest with the content of this article. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. World Health Organization. World Report on Vision; World Health Organization: Geneva, Switzerland, 2019. [Google Scholar]
  2. Tassicker, G.E. Preliminary report on a retinal stimulator. Br. J. Physiol. Opt. 1956, 13, 102–105. [Google Scholar]
  3. Dobelle, W.H.; Mladejovsky, M.G.; Girvin, J.J.S. Artificial vision for the blind: Electrical stimulation of visual cortex offers hope for a functional prosthesis. Science 1974, 183, 440–444. [Google Scholar] [CrossRef] [PubMed]
  4. Rizzo, J.F., 3rd; Wyatt, J.; Loewenstein, J.; Kelly, S.; Shire, D. Perceptual efficacy of electrical stimulation of human retina with a microelectrode array during short-term surgical trials. Investig. Ophthalmol. Vis. Sci. 2003, 44, 5362–5369. [Google Scholar] [CrossRef] [PubMed]
  5. Humayun, M.S.; Weiland, J.D.; Fujii, G.Y.; Greenberg, R.; Williamson, R.; Little, J.; Mech, B.; Cimmarusti, V.; Van Boemel, G.; Dagnelie, G.; et al. Visual perception in a blind subject with a chronic microelectronic retinal prosthesis. Vis. Res. 2003, 43, 2573–2581. [Google Scholar] [CrossRef]
  6. Humayun, M.S.; Dorn, J.D.; da Cruz, L.; Dagnelie, G.; Sahel, J.A.; Stanga, P.E.; Cideciyan, A.V.; Duncan, J.L.; Eliott, D.; Filley, E.; et al. Interim results from the international trial of Second Sight’s visual prosthesis. Ophthalmology 2012, 119, 779–788. [Google Scholar] [CrossRef]
  7. Cheng, X.; Feng, X.; Li, W. Research on Feature Extraction Method of Fundus Image Based on Deep Learning. In Proceedings of the 2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), Shenyang, China, 20–22 November 2020; pp. 443–447. [Google Scholar]
  8. Orlando, J.I.; Fu, H.; Barbosa Breda, J.; van Keer, K.; Bathula, D.R.; Diaz-Pinto, A.; Fang, R.; Heng, P.A.; Kim, J.; Lee, J.; et al. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med. Image Anal. 2020, 59, 101570. [Google Scholar] [CrossRef]
  9. Son, J.; Shin, J.Y.; Kim, H.D.; Jung, K.H.; Park, K.H.; Park, S.J. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology 2020, 127, 85–94. [Google Scholar] [CrossRef]
  10. Catalán, E.B.; Gámez, E.D.L.C.; Valverde, J.A.M.; Reyna, R.H.; Hernández, J.L.H. Detection of Exudates and Microaneurysms in the Retina by Segmentation in Fundus Images. Rev. Mex. Ing. Bioméd. 2021, 42, 67–77. [Google Scholar]
  11. Dagnelie, G.; Barnett, D.; Humayun, M.S.; Thompson, R.W., Jr. Paragraph text reading using a pixelized prosthetic vision simulator: Parameter dependence and task learning in free-viewing conditions. Investig. Opthalmol. Vis. Sci. 2006, 47, 1241–1250. [Google Scholar] [CrossRef]
  12. Abolfotuh, H.H.; Jawwad, A.; Abdullah, B.; Mahdi, H.M.; Eldawlatly, S. Moving object detection and background enhancement for thalamic visual prostheses. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 4711–4714. [Google Scholar]
  13. Bar-Yoseph, P.Z.; Brøns, M.; Gelfgat, A.; Oron, A. Fifth International Symposium on Bifurcations and Instabilities in Fluid Dynamics (BIFD2013). Fluid Dyn. Res. 2014, 49, 1015–1031. [Google Scholar] [CrossRef]
  14. White, J.; Kameneva, T.; McCarthy, C. Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 123–133. [Google Scholar] [CrossRef]
  15. Dowling, J.A.; Maeder, A.; Boles, W. Mobility enhancement and assessment for a visual prosthesis. In Proceedings of the Medical Imaging 2004: Physiology, Function, and Structure from Medical Images, San Diego, CA, USA, 30 April 2004; pp. 780–791. [Google Scholar]
  16. Thorn, J.T.; Migliorini, E.; Ghezzi, D. Virtual reality simulation of epiretinal stimulation highlights the relevance of the visual angle in prosthetic vision. J. Neural Eng. 2020, 17, 056019. [Google Scholar] [CrossRef] [PubMed]
  17. Adewole, D.O.; Struzyna, L.A.; Burrell, J.C.; Harris, J.P.; Nemes, A.D.; Petrov, D.; Kraft, R.H.; Chen, H.I.; Serruya, M.D.; Wolf, J.A. Development of optically controlled “living electrodes” with long-projecting axon tracts for a synaptic brain-machine interface. Sci. Adv. 2021, 7, eaay5347. [Google Scholar] [CrossRef] [PubMed]
  18. Fernandez, E.; Alfaro, A.; Soto-Sanchez, C.; Gonzalez-Lopez, P.; Lozano, A.M.; Pena, S.; Grima, M.D.; Rodil, A.; Gomez, B.; Chen, X.; et al. Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex. J. Clin. Investig. 2021, 131, e151331. [Google Scholar] [CrossRef]
  19. McCarthy, C.; Barnes, N.; Lieby, P. Ground surface segmentation for navigation with a low resolution visual prosthesis. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 4457–4460. [Google Scholar]
  20. Yang, K.; Wang, K.; Bergasa, L.M.; Romera, E.; Hu, W.; Sun, D.; Sun, J.; Cheng, R.; Chen, T.; Lopez, E. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation. Sensors 2018, 18, 1506. [Google Scholar] [CrossRef]
  21. Han, N.; Srivastava, S.; Xu, A.; Klein, D.; Beyeler, M. Deep Learning–Based Scene Simplification for Bionic Vision. In Proceedings of the Augmented Humans Conference 2021, Rovaniemi, Finland, 22–24 February 2021; pp. 45–54. [Google Scholar]
  22. De Luca, D.; Moccia, S.; Micera, S. Deploying an Instance Segmentation Algorithm to Implement Social Distancing for Prosthetic Vision. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Pisa, Italy, 21–25 March 2022; pp. 735–740. [Google Scholar]
  23. Boyle, J.R.; Boles, W.W.; Maeder, A.J. Region-of-interest processing for electronic visual prostheses. J. Electron. Imaging 2008, 17, 013002. [Google Scholar] [CrossRef]
  24. McCarthy, C.; Barnes, N. Importance weighted image enhancement for prosthetic vision: An augmentation framework. In Proceedings of the 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, 10–12 September 2014; pp. 45–51. [Google Scholar]
  25. Li, W.H. Wearable Computer Vision Systems for a Cortical Visual Prosthesis. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, NSW, Australia, 2–8 December 2013; pp. 428–435. [Google Scholar]
  26. Dai, C.; Lu, M.; Zhao, Y.; Lu, Y.; Zhou, C.; Chen, Y.; Ren, Q.; Chai, X. Correction for Chinese character patterns formed by simulated irregular phosphene map. In Proceedings of the 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, 31 August–4 September 2010. [Google Scholar]
  27. U.S. National Library of Medicine. Clinical Research Database. Available online: https://www.clinicaltrials.gov/ct2/home (accessed on 31 May 2022).
  28. Dagnelie, G.; Christopher, P.; Arditi, A.; da Cruz, L.; Duncan, J.L.; Ho, A.C.; Olmos de Koo, L.C.; Sahel, J.A.; Stanga, P.E.; Thumann, G.; et al. Performance of real-world functional vision tasks by blind subjects improves after implantation with the Argus(R) II retinal prosthesis system. Clin. Exp. Ophthalmol. 2017, 45, 152–159. [Google Scholar] [CrossRef]
  29. Demchinsky, A.M.; Shaimov, T.B.; Goranskaya, D.N.; Moiseeva, I.V.; Kuznetsov, D.I.; Kuleshov, D.S.; Polikanov, D.V. The first deaf-blind patient in Russia with Argus II retinal prosthesis system: What he sees and why. J. Neural Eng. 2019, 16, 025002. [Google Scholar] [CrossRef]
  30. Rizzo, S.; Barale, P.O.; Ayello-Scheer, S.; Devenyi, R.G.; Delyfer, M.N.; Korobelnik, J.F.; Rachitskaya, A.; Yuan, A.; Jayasundera, K.T.; Zacks, D.N.; et al. Hypotony and the Argus II retinal prosthesis: Causes, prevention and management. Br. J. Ophthalmol. 2020, 104, 518–523. [Google Scholar] [CrossRef]
  31. Yoon, Y.H.; Humayun, M.S.; Kim, Y.J. One-Year Anatomical and Functional Outcomes of the Argus II Implantation in Korean Patients with Late-Stage Retinitis Pigmentosa: A Prospective Case Series Study. Ophthalmologica 2021, 244, 291–300. [Google Scholar] [CrossRef]
  32. da Cruz, L.; Coley, B.F.; Dorn, J.; Merlini, F.; Filley, E.; Christopher, P.; Chen, F.K.; Wuyyuru, V.; Sahel, J.; Stanga, P.; et al. The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss. Br. J. Ophthalmol. 2013, 97, 632–636. [Google Scholar] [CrossRef] [PubMed]
  33. Greenwald, S.H.; Horsager, A.; Humayun, M.S.; Greenberg, R.J.; McMahon, M.J.; Fine, I. Brightness as a function of current amplitude in human retinal electrical stimulation. Investig. Ophthalmol. Vis. Sci. 2009, 50, 5017–5025. [Google Scholar] [CrossRef] [PubMed]
  34. Schiefer, M.A.; Grill, W.M. Sites of neuronal excitation by epiretinal electrical stimulation. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 5–13. [Google Scholar] [CrossRef] [PubMed]
  35. Farvardin, M.; Afarid, M.; Attarzadeh, A.; Johari, M.K.; Mehryar, M.; Nowroozzadeh, M.H.; Rahat, F.; Peyvandi, H.; Farvardin, R.; Nami, M. The Argus-II Retinal Prosthesis Implantation; From the Global to Local Successful Experience. Front. Neurosci. 2018, 12, 584. [Google Scholar] [CrossRef]
  36. Christie, B.; Sadeghi, R.; Kartha, A.; Caspi, A.; Tenore, F.V.; Klatzky, R.L.; Dagnelie, G.; Billings, S. Sequential epiretinal stimulation improves discrimination in simple shape discrimination tasks only. J. Neural Eng. 2022, 19, 036033. [Google Scholar] [CrossRef] [PubMed]
  37. Beyeler, M.; Nanduri, D.; Weiland, J.D.; Rokem, A.; Boynton, G.M.; Fine, I. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Sci. Rep. 2019, 9, 9199. [Google Scholar] [CrossRef]
  38. Rizzo, S.; Belting, C.; Cinelli, L.; Allegrini, L.; Genovesi-Ebert, F.; Barca, F.; di Bartolo, E. The Argus II Retinal Prosthesis: 12-month outcomes from a single-study center. Am. J. Ophthalmol. 2014, 157, 1282–1290. [Google Scholar] [CrossRef]
  39. Naidu, A.; Ghani, N.; Yazdanie, M.S.; Chaudhary, K. Effect of the Electrode Array-Retina Gap Distance on Visual Function in Patients with the Argus II Retinal Prosthesis. BMC Ophthalmol. 2020, 20, 366. [Google Scholar] [CrossRef]
  40. Muqit, M.M.K.; Velikay-Parel, M.; Weber, M.; Dupeyron, G.; Audemard, D.; Corcostegui, B.; Sahel, J.; Le Mer, Y. Six-Month Safety and Efficacy of the Intelligent Retinal Implant System II Device in Retinitis Pigmentosa. Ophthalmology 2019, 126, 637–639. [Google Scholar] [CrossRef]
  41. Wolffsohn, J.S.; Kollbaum, P.S.; Berntsen, D.A.; Atchison, D.A.; Benavente, A.; Bradley, A.; Buckhurst, H.; Collins, M.; Fujikado, T.; Hiraoka, T.; et al. IMI—Clinical Myopia Control Trials and Instrumentation Report. Investig. Ophthalmol. Vis. Sci. 2019, 60, M132–M160. [Google Scholar] [CrossRef]
  42. Keseru, M.; Feucht, M.; Bornfeld, N.; Laube, T.; Walter, P.; Rossler, G.; Velikay-Parel, M.; Hornig, R.; Richard, G. Acute electrical stimulation of the human retina with an epiretinal electrode array. Acta Ophthalmol. 2012, 90, e1–e8. [Google Scholar] [CrossRef] [PubMed]
  43. Stingl, K.; Bartz-Schmidt, K.U.; Besch, D.; Braun, A.; Bruckmann, A.; Gekeler, F.; Greppmaier, U.; Hipp, S.; Hortdorfer, G.; Kernstock, C.; et al. Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS. Proc. Biol. Sci. 2013, 280, 20130077. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Daschner, R.; Rothermel, A.; Rudorf, R.; Rudorf, S.; Stett, A. Functionality and Performance of the Subretinal Implant Chip Alpha AMS. Sens. Mater. 2018, 30, 179–192. [Google Scholar] [CrossRef]
  45. Zrenner, E.; Bartz-Schmidt, K.U.; Benav, H.; Besch, D.; Bruckmann, A.; Gabel, V.P.; Gekeler, F.; Greppmaier, U.; Harscher, A.; Kibbel, S.; et al. Subretinal electronic chips allow blind patients to read letters and combine them to words. Proc. Biol. Sci. 2011, 278, 1489–1497. [Google Scholar] [CrossRef]
  46. Lorach, H.; Goetz, G.; Smith, R.; Lei, X.; Mandel, Y.; Kamins, T.; Mathieson, K.; Huie, P.; Harris, J.; Sher, A.; et al. Photovoltaic restoration of sight with high visual acuity. Nat. Med. 2015, 21, 476–482. [Google Scholar] [CrossRef]
  47. Lemoine, D.; Simon, E.; Buc, G.; Deterre, M. In vitro reliability testing and in vivo lifespan estimation of wireless Pixium Vision PRIMA photovoltaic subretinal prostheses suggest prolonged durability and functionality in clinical practice. J. Neural Eng. 2020, 17, 035005. [Google Scholar] [CrossRef]
  48. Palanker, D.; Le Mer, Y.; Mohand-Said, S.; Sahel, J.A. Simultaneous perception of prosthetic and natural vision in AMD patients. Nat. Commun. 2022, 13, 513. [Google Scholar] [CrossRef]
  49. Muqit, M.M.K.; Hubschman, J.P.; Picaud, S.; McCreery, D.B.; van Meurs, J.C.; Hornig, R.; Buc, G.; Deterre, M.; Nouvel-Jaillard, C.; Bouillet, E.; et al. PRIMA subretinal wireless photovoltaic microchip implantation in non-human primate and feline models. PLoS ONE 2020, 15, e0230713. [Google Scholar] [CrossRef]
  50. Prevot, P.H.; Gehere, K.; Arcizet, F.; Akolkar, H.; Khoei, M.A.; Blaize, K.; Oubari, O.; Daye, P.; Lanoe, M.; Valet, M.; et al. Behavioural responses to a photovoltaic subretinal prosthesis implanted in non-human primates. Nat. Biomed. Eng. 2020, 4, 172–180. [Google Scholar] [CrossRef]
  51. Palanker, D.; Le Mer, Y.; Mohand-Said, S.; Muqit, M.; Sahel, J.A. Photovoltaic Restoration of Central Vision in Atrophic Age-Related Macular Degeneration. Ophthalmology 2020, 127, 1097–1104. [Google Scholar] [CrossRef]
  52. Fujikado, T.; Kamei, M.; Sakaguchi, H.; Kanda, H.; Endo, T.; Hirota, M.; Morimoto, T.; Nishida, K.; Kishima, H.; Terasawa, Y.; et al. One-Year Outcome of 49-Channel Suprachoroidal-Transretinal Stimulation Prosthesis in Patients with Advanced Retinitis Pigmentosa. Investig. Ophthalmol. Vis. Sci. 2016, 57, 6147–6157. [Google Scholar] [CrossRef]
  53. Fujikado, T.; Kamei, M.; Sakaguchi, H.; Kanda, H.; Morimoto, T.; Ikuno, Y.; Nishida, K.; Kishima, H.; Konoma, K.; Ozawa, M. Feasibility of Semi-chronically Implanted Retinal Prosthesis by Suprachoroidal-Transretinal Stimulation in Patients with Retinitis Pigmentosa. Investig. Ophthalmol. Vis. Sci. 2011, 52, 2589. [Google Scholar]
  54. Abbott, C.J.; Nayagam, D.A.X.; Luu, C.D.; Epp, S.B.; Williams, R.A.; Salinas-LaRosa, C.M.; Villalobos, J.; McGowan, C.; Shivdasani, M.N.; Burns, O.; et al. Safety Studies for a 44-Channel Suprachoroidal Retinal Prosthesis: A Chronic Passive Study. Investig. Ophthalmol. Vis. Sci. 2018, 59, 1410–1424. [Google Scholar] [CrossRef] [Green Version]
  55. Titchener, S.A.; Kvansakul, J.; Shivdasani, M.N.; Fallon, J.B.; Nayagam, D.A.X.; Epp, S.B.; Williams, C.E.; Barnes, N.; Kentler, W.G.; Kolic, M.; et al. Oculomotor Responses to Dynamic Stimuli in a 44-Channel Suprachoroidal Retinal Prosthesis. Transl. Vis. Sci. Technol. 2020, 9, 31. [Google Scholar] [CrossRef] [PubMed]
  56. Petoe, M.A.; Titchener, S.A.; Kolic, M.; Kentler, W.G.; Abbott, C.J.; Nayagam, D.A.X.; Baglin, E.K.; Kvansakul, J.; Barnes, N.; Walker, J.G.; et al. A Second-Generation (44-Channel) Suprachoroidal Retinal Prosthesis: Interim Clinical Trial Results. Transl. Vis. Sci. Technol. 2021, 10, 12. [Google Scholar] [CrossRef]
  57. Titchener, S.A.; Nayagam, D.A.X.; Kvansakul, J.; Kolic, M.; Baglin, E.K.; Abbott, C.J.; McGuinness, M.B.; Ayton, L.N.; Luu, C.D.; Greenstein, S.; et al. A Second-Generation (44-Channel) Suprachoroidal Retinal Prosthesis: Long-Term Observation of the Electrode-Tissue Interface. Transl. Vis. Sci. Technol. 2022, 11, 12. [Google Scholar] [CrossRef]
  58. Kolic, M.; Baglin, E.K.; Titchener, S.A.; Kvansakul, J.; Abbott, C.J.; Barnes, N.; McGuinness, M.; Kentler, W.G.; Young, K.; Walker, J.; et al. A 44 channel suprachoroidal retinal prosthesis: Laboratory based visual function and functional vision outcomes. Investig. Ophthalmol. Vis. Sci. 2021, 62, 3168. [Google Scholar]
  59. Niketeghad, S.; Pouratian, N. Brain Machine Interfaces for Vision Restoration: The Current State of Cortical Visual Prosthetics. Neurotherapeutics 2019, 16, 134–143. [Google Scholar] [CrossRef]
  60. Schmidt, E.M.; Bak, M.J.; Hambrecht, F.T.; Kufta, C.V.; O’Rourke, D.K.; Vallabhanath, P. Feasibility of a visual prosthesis for the blind based on intracortical microstimulation of the visual cortex. Brain 1996, 119, 507–522. [Google Scholar] [CrossRef]
  61. Troyk, P.R. The Intracortical Visual Prosthesis Project. In Artificial Vision; Springer: Cham, Switzerland, 2017; pp. 203–214. [Google Scholar]
  62. Ong, J.M.; da Cruz, L. The bionic eye: A review. Clin. Exp. Ophthalmol. 2012, 40, 6–17. [Google Scholar] [CrossRef]
  63. Dobelle, W.H.; Mladejovsky, M.G.; Evans, J.R.; Roberts, T.; Girvin, J.P. ‘Braille’ reading by a blind volunteer by visual cortex stimulation. Nature 1976, 259, 111–112. [Google Scholar] [CrossRef] [PubMed]
  64. Fernández, E.; Normann, R.A. CORTIVIS Approach for an Intracortical Visual Prostheses. In Artificial Vision; Springer: Cham, Switzerland, 2017; pp. 191–201. [Google Scholar]
  65. Chen, X.; Wang, F.; Fernandez, E.; Roelfsema, P.R. Shape perception via a high-channel-count neuroprosthesis in monkey visual cortex. Science 2020, 370, 1191–1196. [Google Scholar] [CrossRef] [PubMed]
  66. Fernandez, E. Development of visual Neuroprostheses: Trends and challenges. Bioelectron. Med. 2018, 4, 12. [Google Scholar] [CrossRef] [PubMed]
  67. Chernov, M.M.; Friedman, R.M.; Chen, G.; Stoner, G.R.; Roe, A.W. Functionally specific optogenetic modulation in primate visual cortex. Proc. Natl. Acad. Sci. USA 2018, 115, 10505–10510. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Shivdasani, M.N.; Sinclair, N.C.; Dimitrov, P.N.; Varsamidis, M.; Ayton, L.N.; Luu, C.D.; Perera, T.; McDermott, H.J.; Blamey, P.J. Factors Affecting Perceptual Thresholds in a Suprachoroidal Retinal Prosthesis. Investig. Ophthalmol. Vis. Sci. 2014, 55, 6467–6481. [Google Scholar] [CrossRef]
  69. Weitz, A.C.; Nanduri, D.; Behrend, M.R.; Gonzalez-Calle, A.; Greenberg, R.J.; Humayun, M.S.; Chow, R.H.; Weiland, J.D. Improving the spatial resolution of epiretinal implants by increasing stimulus pulse duration. Sci. Transl. Med. 2015, 7, 318ra203. [Google Scholar] [CrossRef]
  70. Beyeler, M.; Boynton, G.M.; Fine, I.; Rokem, A. Interpretable machine-learning predictions of perceptual sensitivity for retinal prostheses. Investig. Ophthalmol. Vis. Sci. 2020, 61, 2202. [Google Scholar]
  71. Lee, S.W.; Seo, J.-M.; Ha, S.; Kim, E.T.; Chung, H.; Kim, S.J. Development of Microelectrode Arrays for Artificial Retinal Implants Using Liquid Crystal Polymers. Investig. Ophthalmol. Vis. Sci. 2009, 50, 5859–5866. [Google Scholar] [CrossRef]
  72. Horsager, A.; Greenberg, R.J.; Fine, I. Spatiotemporal Interactions in Retinal Prosthesis Subjects. Investig. Ophthalmol. Vis. Sci. 2010, 51, 1223–1233. [Google Scholar] [CrossRef]
  73. Najarpour Foroushani, A.; Pack, C.C.; Sawan, M. Cortical visual prostheses: From microstimulation to functional percept. J. Neural Eng. 2018, 15, 021005. [Google Scholar] [CrossRef]
  74. Frederick, R.A.; Meliane, I.Y.; Joshi-Imre, A.; Troyk, P.R.; Cogan, S.F. Activated iridium oxide film (AIROF) electrodes for neural tissue stimulation. J. Neural Eng. 2020, 17, 056001. [Google Scholar] [CrossRef] [PubMed]
  75. Chenais, N.A.L.; Airaghi Leccardi, M.J.I.; Ghezzi, D. Naturalistic spatiotemporal modulation of epiretinal stimulation increases the response persistence of retinal ganglion cell. J. Neural Eng. 2021, 18, 016016. [Google Scholar] [CrossRef] [PubMed]
  76. Wang, J.; Wu, X.; Lu, Y.; Wu, H.; Kan, H.; Chai, X. Face recognition in simulated prosthetic vision: Face detection-based image processing strategies. J. Neural Eng. 2014, 11, 046009. [Google Scholar] [CrossRef] [PubMed]
  77. Rollend, D.; Rosendall, P.; Billings, S.; Burlina, P.; Wolfe, K.; Katyal, K. Face Detection and Object Recognition for a Retinal Prosthesis. In Proceedings of the Asian Conference on Computer Vision, Taipei, China, 20–24 November 2016. [Google Scholar]
  78. Irons, J.L.; Gradden, T.; Zhang, A.; He, X.; Barnes, N.; Scott, A.F.; McKone, E. Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing. Vis. Res. 2017, 137, 61–79. [Google Scholar] [CrossRef]
  79. Zhao, Y.; Yu, A.; Xu, D. Person Recognition Based on FaceNet under Simulated Prosthetic Vision. J. Phys. Conf. Ser. 2020, 1437, 012012. [Google Scholar] [CrossRef] [Green Version]
  80. Chang, M.H.; Kim, H.S.; Shin, J.H.; Park, K.S. Facial identification in very low-resolution images simulating prosthetic vision. J. Neural Eng. 2012, 9, 046012. [Google Scholar] [CrossRef]
  81. Xia, X.; He, X.; Feng, L.; Pan, X.; Li, N.; Zhang, J.; Pang, X.; Yu, F.; Ding, N. Semantic translation of face image with limited pixels for simulated prosthetic vision. Inf. Sci. 2022, 609, 507–532. [Google Scholar] [CrossRef]
  82. Duncan, J.L.; Richards, T.P.; Arditi, A.; da Cruz, L.; Dagnelie, G.; Dorn, J.D.; Ho, A.C.; Olmos de Koo, L.C.; Barale, P.O.; Stanga, P.E.J.C.; et al. Improvements in vision-related quality of life in blind patients implanted with the Argus II Epiretinal Prosthesis. Clin. Exp. Optom. 2017, 100, 144–150. [Google Scholar] [CrossRef]
  83. Chai, X.; Yu, W.; Wang, J.; Zhao, Y.; Cai, C.; Ren, Q. Recognition of pixelized Chinese characters using simulated prosthetic vision. Artif. Organs 2007, 31, 175–182. [Google Scholar] [CrossRef]
  84. Zhao, Y.; Lu, Y.; Zhao, J.; Wang, K.; Ren, Q.; Wu, K.; Chai, X. Reading pixelized paragraphs of Chinese characters using simulated prosthetic vision. Investig. Ophthalmol. Vis. Sci. 2011, 52, 5987–5994. [Google Scholar] [CrossRef]
  85. Zhao, Y.; Lu, Y.; Zhou, C.; Chen, Y.; Ren, Q.; Chai, X. Chinese character recognition using simulated phosphene maps. Investig. Ophthalmol. Vis. Sci. 2011, 52, 3404–3412. [Google Scholar] [CrossRef] [PubMed]
  86. Fu, L.; Cai, S.; Zhang, H.; Hu, G.; Zhang, X. Psychophysics of reading with a limited number of pixels: Towards the rehabilitation of reading ability with visual prosthesis. Vis. Res. 2006, 46, 1292–1301. [Google Scholar] [CrossRef]
  87. Lu, Y.; Kan, H.; Liu, J.; Wang, J.; Tao, C.; Chen, Y.; Ren, Q.; Hu, J.; Chai, X. Optimizing chinese character displays improves recognition and reading performance of simulated irregular phosphene maps. Investig. Ophthalmol. Vis. Sci. 2013, 54, 2918–2926. [Google Scholar] [CrossRef] [PubMed]
  88. Kiral-Kornek, F.I.; O’Sullivan-Greene, E.; Savage, C.O.; McCarthy, C.; Grayden, D.B.; Burkitt, A.N. Improved visual performance in letter perception through edge orientation encoding in a retinal prosthesis simulation. J. Neural Eng. 2014, 11, 066002. [Google Scholar] [CrossRef] [PubMed]
  89. Kim, H.S.; Park, K.S. Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses. Sensors 2017, 17, 2439. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  90. Li, H.; Han, T.; Wang, J.; Lu, Z.; Cao, X.; Chen, Y.; Li, L.; Zhou, C.; Chai, X. A real-time image optimization strategy based on global saliency detection for artificial retinal prostheses. Inf. Sci. 2017, 415–416, 1–18. [Google Scholar] [CrossRef]
  91. Li, H.; Su, X.; Wang, J.; Kan, H.; Han, T.; Zeng, Y.; Chai, X. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision. Artif. Intell. Med. 2018, 84, 64–78. [Google Scholar] [CrossRef]
  92. Zhao, Y.; Li, Q.; Wang, D.; Yu, A. Image Processing Strategies Based on Deep Neural Network for Simulated Prosthetic Vision. In Proceedings of the 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 8–9 December 2018; pp. 200–203. [Google Scholar]
  93. Li, Q. Research on Optimization of Image Processing Based Generative Adversarial Networks in Simulated Prosthetic Vision. Ph.D. Thesis, Inner Mongolia University of Science & Technology, Baotou, China, 2019. [Google Scholar]
  94. Guerrero, J.; Martinez-Cantin, R.; Sanchez-Garcia, M. Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 218–225. [Google Scholar]
  95. Sanchez-Garcia, M.; Martinez-Cantin, R.; Guerrero, J.J. Semantic and structural image segmentation for prosthetic vision. PLoS ONE 2020, 15, e0227677. [Google Scholar] [CrossRef]
  96. Jiang, H.; Li, H.; Liang, J.; Chai, X. A hierarchical image processing strategy for artificial retinal prostheses. In Proceedings of the 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Beijing, China, 23–25 October 2020; pp. 359–362. [Google Scholar]
  97. Avraham, D.; Yitzhaky, Y. Effects of Depth-Based Object Isolation in Simulated Retinal Prosthetic Vision. Symmetry 2021, 13, 1763. [Google Scholar] [CrossRef]
  98. Dagnelie, G.; Kalpin, S.; Yang, L.; Legge, G. Visual Performance with Images Spectrally Augmented by Infrared: A Tool for Severely Impaired and Prosthetic Vision. Investig. Ophthalmol. Vis. Sci. 2005, 46, 1490. [Google Scholar]
  99. Liang, J.; Li, H.; Chen, J.; Zhai, Z.; Wang, J.; Di, L.; Chai, X. An infrared image-enhancement algorithm in simulated prosthetic vision: Enlarging working environment of future retinal prostheses. Artif. Organs 2022. early view. [Google Scholar] [CrossRef] [PubMed]
  100. Perez-Yus, A.; Bermudez-Cameo, J.; Lopez-Nicolas, G.; Guerrero, J.J. Depth and Motion Cues with Phosphene Patterns for Prosthetic Vision. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), Venice, Italy, 22–29 October 2017. [Google Scholar]
  101. Rasla, A.; Beyeler, M. The Relative Importance of Depth Cues and Semantic Edges for Indoor Mobility Using Simulated Prosthetic Vision in Immersive Virtual Reality. arXiv 2022, arXiv:2208.05066. [Google Scholar]
  102. Quattoni, A.; Torralba, A. Recognizing indoor scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
  103. Fornos, A.P.; Sommerhalder, J.; Pelizzone, M. Reading with a simulated 60-channel implant. Front. Neurosci. 2011, 5, 57. [Google Scholar] [CrossRef] [PubMed]
  104. Han, T.; Li, H.; Lyu, Q.; Zeng, Y.; Chai, X. Object recognition based on a foreground extraction method under simulated prosthetic vision. In Proceedings of the 2015 International Symposium on Bioelectronics and Bioinformatics (ISBB), Beijing, China, 14–17 October 2015. [Google Scholar]
  105. Guo, F.; Yang, Y.; Xiao, Y.; Gao, Y.; Yu, N. Recognition of Moving Object in High Dynamic Scene for Visual Prosthesis. IEICE Trans. Inf. Syst. 2019, E102.D, 1321–1331. [Google Scholar] [CrossRef]
  106. Lozano, A.; Suarez, J.S.; Soto-Sanchez, C.; Garrigos, J.; Martinez-Alvarez, J.J.; Ferrandez, J.M.; Fernandez, E. Neurolight: A Deep Learning Neural Interface for Cortical Visual Prostheses. Int. J. Neural Syst. 2020, 30, 2050045. [Google Scholar] [CrossRef]
  107. White, J.; Kameneva, T.; McCarthy, C. Deep reinforcement learning for task-based feature learning in prosthetic vision. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2809–2812. [Google Scholar]
  108. Alevizaki, A.; Melanitis, N.; Nikita, K. Predicting eye fixations using computer vision techniques. In Proceedings of the 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 28–30 October 2019; pp. 309–315. [Google Scholar]
  109. Seuthe, A.-M.; Haus, A.; Januschowski, K.; Szurman, P. First simultaneous explantation and re-implantation of an Argus II retinal prosthesis system. Ophthalmic Surg. Lasers Imaging Retin. 2019, 50, 462–465. [Google Scholar] [CrossRef]
  110. Ayton, L.N.; Barnes, N.; Dagnelie, G.; Fujikado, T.; Goetz, G.; Hornig, R.; Jones, B.W.; Muqit, M.M.K.; Rathbun, D.L.; Stingl, K.; et al. An update on retinal prostheses. Clin. Neurophysiol. 2020, 131, 1383–1398. [Google Scholar] [CrossRef]
  111. Xue, K.; MacLaren, R.E.J. Correcting visual loss by genetics and prosthetics. Curr. Opin. Physiol. 2020, 16, 1–7. [Google Scholar] [CrossRef]
  112. Erickson-Davis, C.; Korzybska, H. What do blind people “see” with retinal prostheses? Observations and qualitative reports of epiretinal implant users. PLoS ONE 2021, 16, e0229189. [Google Scholar]
  113. Faber, H.; Ernemann, U.; Sachs, H.; Gekeler, F.; Danz, S.; Koitschev, A.; Besch, D.; Bartz-Schmidt, K.-U.; Zrenner, E.; Stingl, K.; et al. CT Assessment of Intraorbital Cable Movement of Electronic Subretinal Prosthesis in Three Different Surgical Approaches. Transl. Vis. Sci. Technol. 2021, 10, 16. [Google Scholar] [CrossRef]
  114. Schiller, P.H.; Slocum, W.M.; Kwak, M.C.; Kendall, G.L.; Tehovnik, E.J. New methods devised specify the size and color of the spots monkeys see when striate cortex (area V1) is electrically stimulated. Proc. Natl. Acad. Sci. USA 2011, 108, 17809–17814. [Google Scholar] [CrossRef]
  115. Yue, L.; Castillo, J.; Gonzalez, A.C.; Neitz, J.; Humayun, M.S. Restoring Color Perception to the Blind: An Electrical Stimulation Strategy of Retina in Patients with End-stage Retinitis Pigmentosa. Ophthalmology 2021, 128, 453–462. [Google Scholar] [CrossRef] [PubMed]
  116. Towle, V.L.; Pham, T.; McCaffrey, M.; Allen, D.; Troyk, P.R. Toward the development of a color visual prosthesis. J. Neural Eng. 2021, 18, 023001. [Google Scholar] [CrossRef] [PubMed]
  117. Flores, T.; Huang, T.; Bhuckory, M.; Ho, E.; Chen, Z.; Dalal, R.; Galambos, L.; Kamins, T.; Mathieson, K.; Palanker, D. Honeycomb-shaped electro-neural interface enables cellular-scale pixels in subretinal prosthesis. Sci. Rep. 2019, 9, 10657. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. (A) An individual letter recognition task in a dark environment. The display shows the letters in white Century Gothic font on a black background, and the monitor next to it shows the camera view (V) and the array (A) in real time; (B) illustration of the difference between the electrode activation maps in the standard and scrambled modes when the camera is viewing the letter “N”. In the scrambled mode, the correspondence between the real positions of the phosphenes and the stimulus positions on the array was randomized (adapted with permission from Ref. [62]. Copyright 2011 Royal Australian and New Zealand College of Ophthalmologists).
Figure 2. Retinal ganglion cells with 90-degree axonal curvature. (A) Diffuse, streaky phosphenes produced by stimulation of distant retinal ganglion cells through their overlying axons. (B) Direct stimulation of the retinal ganglion cells beneath the electrodes produces punctate phosphenes (cited from [34]).
Figure 3. Schematic diagram of poor phosphenes induced by CORTIVIS electrode implantation. (A) The percept that may be produced by simultaneous stimulation of four electrodes arranged in a square; (B) immediately after implantation, the induced phosphenes may give a poor perception of objects, such as the letter “E” in the figure; however, appropriate learning and rehabilitation strategies can help to improve this perception (adapted from [66]).
Figure 4. Schematic diagram of caricatured human faces. (A) How faces are represented in the brain, and how this explains the improved performance with caricaturing; the dimensions coded on the axes remain unknown, so they might represent the width of the face or other variables. (B) Examples of caricatured faces; facial features are exaggerated, e.g., the higher the degree of caricature, the thicker the lips become. (C) The leftmost image is the face after 60% caricaturing; the remaining three images are phosphene images with random dropout at different resolutions (adapted from [78]).
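The caricaturing strategy of [78] exaggerates how an individual face deviates from an average (norm) face, which keeps identities distinguishable even after heavy downsampling and phosphene dropout. The following minimal sketch illustrates the general norm-deviation idea on landmark coordinates; the landmark representation, function names, and the example values are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def caricature(face_landmarks: np.ndarray,
               mean_landmarks: np.ndarray,
               strength: float = 0.6) -> np.ndarray:
    """Exaggerate a face's deviation from the average (norm) face.

    face_landmarks, mean_landmarks: (N, 2) arrays of landmark coordinates
    for the individual face and for the norm face (hypothetical inputs).
    strength: 0.0 returns the original face; 0.6 corresponds to the 60%
    caricature level illustrated in Figure 4C.
    """
    deviation = face_landmarks - mean_landmarks
    return mean_landmarks + (1.0 + strength) * deviation

# Example: exaggerate a hypothetical 3-landmark face by 60%.
mean_face = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
face = np.array([[0.1, -0.05], [0.9, 0.0], [0.5, 1.2]])
print(caricature(face, mean_face, strength=0.6))
```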
Figure 5. Two optimization strategy procedures (cited from [87]).
Figure 6. Principle of the nearest neighbor search (NNS) optimization of Chinese characters. p_i is the ideal location of the induced phosphene point, q_i is the location of the phosphene actually induced by the corresponding electrode, q_k is the location of a phosphene induced by another electrode, and D_min represents the shortest distance (cited from [87]).
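The NNS correction in [87] reassigns each ideal phosphene location of a character pixel to the nearest phosphene that the irregular array can actually induce. The snippet below is a minimal sketch of that nearest-neighbor matching step for 2D phosphene coordinates; the array shapes and variable names mirror Figure 6 but are illustrative assumptions rather than the exact published procedure.

```python
import numpy as np

def nearest_neighbor_mapping(ideal_points: np.ndarray,
                             phosphene_map: np.ndarray):
    """Assign each ideal character pixel to the closest inducible phosphene.

    ideal_points: (M, 2) ideal phosphene locations p_i derived from the
        character pattern.
    phosphene_map: (K, 2) locations q_k of the phosphenes that the
        irregular electrode array can actually induce.
    Returns, for every p_i, the index of the nearest phosphene and the
    shortest distance D_min to it.
    """
    # Pairwise Euclidean distances between ideal and actual phosphenes.
    diff = ideal_points[:, None, :] - phosphene_map[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)              # shape (M, K)
    nearest = dist.argmin(axis=1)                     # closest q for each p_i
    d_min = dist[np.arange(len(ideal_points)), nearest]
    return nearest, d_min

# Example with a hypothetical 3-pixel character and 4 available phosphenes.
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
q = np.array([[0.1, 0.1], [1.2, -0.1], [0.0, 0.9], [2.0, 2.0]])
idx, dmin = nearest_neighbor_mapping(p, q)
print(idx, dmin)
```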
Figure 7. The principle of phosphene dropout completion. z is arbitrary Gaussian noise; M is the binary dropout mask, in which 0 indicates a dropped-out phosphene position and 1 indicates a retained phosphene position; y is the input image; y_M is the image of the ideal phosphenes. G(z) is the generated image, obtained from the Gaussian noise z, that approximates a real image; Z is the optimal solution, and G(Z) is the mapping best suited to fill in the dropout part (adapted from [93]).
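The completion scheme sketched in Figure 7 searches the latent space of a generator trained on ideal phosphene images for the noise vector Z whose output best matches the retained phosphenes, and then uses G(Z) to fill the dropped-out positions. The sketch below illustrates such a masked latent-space optimization; the tiny untrained generator, loss weights, and iteration count are placeholders for illustration only, not the model of [93].

```python
import torch
import torch.nn as nn

# Stand-in generator: in practice this would be a GAN generator trained on
# ideal phosphene images; here it is only a placeholder so the sketch runs.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 32 * 32), nn.Sigmoid())

def complete_dropout(y: torch.Tensor, mask: torch.Tensor,
                     steps: int = 200, lam: float = 0.1) -> torch.Tensor:
    """Fill dropped-out phosphenes by searching the generator's latent space.

    y:    (32*32,) input phosphene image.
    mask: (32*32,) binary mask M, 1 = retained phosphene, 0 = dropout.
    Returns an image that keeps the retained phosphenes and takes the
    dropout positions from G(Z).
    """
    z = torch.randn(64, requires_grad=True)   # arbitrary Gaussian noise z
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        g = G(z)
        # Contextual loss on retained positions plus a prior keeping z plausible.
        loss = ((mask * (g - y)) ** 2).sum() + lam * (z ** 2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        g_star = G(z)                          # G(Z), the best-fitting mapping
        return mask * y + (1 - mask) * g_star  # keep y where retained, fill dropout

y = torch.rand(32 * 32)
mask = (torch.rand(32 * 32) > 0.2).float()     # ~20% random dropout
completed = complete_dropout(y, mask)
print(completed.shape)
```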
Table 2. Image processing optimization on visual prostheses.

| Visual Tasks | Optimization Methods | Array Distortion | Dropout | Dataset | Evaluation Indicators | Results |
|---|---|---|---|---|---|---|
| Face Recognition | Significance amplification window [23] | no | no | self-construction | Subject preference | Subjects selected the significance amplification window as the most helpful method. |
| Face Recognition | VJFR; SFR; MFR [76] | no | no | self-construction | Recognition accuracy | The recognition accuracy of VJFR-ROI, SFR-ROI, and MFR-ROI was 52.78 ± 18.52%, 62.78 ± 14.83%, and 67.22 ± 14.45%, respectively. |
| Face Recognition | Histogram equalization enhancement [77] | no | no | self-construction | Algorithm runtime | Real-time face detection at low resolution (30 fps). |
| Face Recognition | Caricatured human face [78] | yes | yes | 26 faces | Recognition accuracy | Correct recognition rates of 53% and 65% were obtained with old and new faces, respectively. |
| Face Recognition | FaceNet [79] | no | no | self-construction | Recognition accuracy | The average face recognition accuracy obtained by the subjects reached over 77.37%. |
| Face Recognition | Sobel edge detection and contrast enhancement techniques [80] | no | no | self-construction | Recognition accuracy, response time | The average recognition accuracy at 8 × 8, 12 × 12, and 16 × 16 resolutions was 27 ± 12.96%, 56.43 ± 17.54%, and 84.05 ± 11.23%, respectively; the average response times were 3.21 ± 0.68 s, 0.73 s, and 1.93 ± 0.53 s. |
| Face Recognition | F2Pnet [81] | yes | no | AIRS-PFD | Individual identifiability | Mean individual identifiability of 46% at a low resolution with dropout. |
| Character Recognition | NNS and expansion method [26] | yes | yes | Commonly Used Chinese Character Database | Recognition accuracy | The irregularity index reached 0.4, and the average recognition accuracy of the subjects after using the correction method was over 80%. |
| Character Recognition | Threshold judgment [86] | no | no | Standardized MNREAD reading test provided by Dr. G.E. Legge | Reading speed | The reading speeds of the subjects using 6 × 6 and 8 × 8 resolutions reached 15 words/min and 30 words/min. |
| Character Recognition | Projection and NNS [87] | yes | yes | Commonly used modern Chinese characters (the first 500 in the statistical table) | Recognition accuracy | The recognition accuracy of the subjects using the NNS method exceeded 68%. |
| Character Recognition | Directed phosphenes [88] | yes | no | N, H, R, S | Recognition accuracy | The average recognition accuracy of the subjects was 65%. |
| Character Recognition | SP [89] | no | no | 26 English letters, 40 Korean letters | Recognition accuracy | After SP, the character recognition accuracy of the subjects broke the passing line (60%). |
| Object Recognition | Checkerboard-style phosphene guided walking [100] | no | no | RGB-D camera capture | NA * | NA * |
| Object Recognition | Top-down global contrast significance detection [90] | no | no | self-construction | Percentage of correctly completed tasks (PC), completion time (CT), head movements in degrees (HMID) | The mean PC of subjects in the single task was 88.72 ± 1.41%, mean CT was 41.76 ± 2.9 s, and mean HMID was 575.70 ± 38.53°; the mean PC of subjects in the multitask was 84.72 ± 1.41%, mean CT was 40.73 ± 2.1 s, and mean HMID was 487.38 ± 14.71°. |
| Object Recognition | GBVS and edge detection [91] | no | no | self-construction | Recognition accuracy | The average recognition accuracy of the subjects was 70.63 ± 7.59% for single-object recognition and 75.31 ± 11.40% for double-target recognition. |
| Object Recognition | Generative adversarial network model [92,93] | yes | yes | ETH-80 | Recognition accuracy | The subjects were able to accomplish an average recognition accuracy of 80.3 ± 7.7% for all objects. |
| Object Recognition | SIE-OMS [94,95] | no | no | Public indoor scenes dataset [102] | Recognition accuracy | The object recognition correct rate reached 62.78%; the room recognition correct rate reached 70.33%. |
| Object Recognition | Mask-RCNN layers [96] | no | no | self-construction | Percentage of correctly completed tasks (PC) | Subjects achieved a mean PC of 87.08 ± 1.92% in the object test task in the description scene and a mean PC of 60.31 ± 1.99% in the description scene content test. |
| Object Recognition | InI-based object segmentation [97] | yes | no | self-construction | NA * | NA * |
| Object Recognition | Improved SAPHE algorithm [99] | no | no | Captured directly with the camera | Recognition accuracy (RA) | The average RA of subjects was 86.24 ± 1.88%. |
| Object Recognition | Depth and edge combinations [101] | no | no | self-construction | Success rate | The success rate for subjects with depth and edge cues was over 80%. |

* NA stands for no assessment indicators and no experimental assessment results.
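To make one of the simpler entries in Table 2 concrete, the sketch below combines Sobel edge detection with contrast stretching and block averaging onto a low-resolution phosphene grid, in the spirit of the edge-plus-contrast strategy evaluated in [80]; the kernels, grid size, and normalization are illustrative assumptions rather than the published pipeline.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via 3x3 Sobel kernels (pure NumPy)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def to_phosphene_grid(img: np.ndarray, grid: int = 16) -> np.ndarray:
    """Block-average the image onto a grid x grid phosphene layout and
    stretch the contrast to the full 0-1 range."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    blocks = img[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    lo, hi = blocks.min(), blocks.max()
    return (blocks - lo) / (hi - lo + 1e-9)

# Example on a synthetic 64x64 image containing a bright square,
# rendered onto a hypothetical 16 x 16 phosphene array.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print(to_phosphene_grid(sobel_edges(img), grid=16).shape)   # (16, 16)
```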
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
