Article

Fast Segmentation Algorithm for Cystoid Macular Edema Based on Omnidirectional Wave Operator

Key Laboratory of Opto-Electronics Information Technology of Ministry of Education, School of Precision Instrument & Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(14), 6480; https://doi.org/10.3390/app11146480
Submission received: 31 May 2021 / Revised: 9 July 2021 / Accepted: 12 July 2021 / Published: 14 July 2021
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

Optical coherence tomography (OCT) is widely used in ophthalmic imaging. Existing technology cannot automatically extract the contour of cystoid macular edema (CME) in OCT images and can only evaluate the degree of lesions by measuring retinal thickness. To solve this problem, this paper proposes an automatic segmentation algorithm that can segment CME in fundus OCT images quickly and accurately. The method first constructs the working environment by denoising and contrast stretching, then extracts the region of interest (ROI) containing the CME according to the average gray distribution of the image, and applies the omnidirectional wave operator to perform multidirectional automatic segmentation. Finally, the fused segmentation results are screened by a gray threshold and position features to extract the CME contour. The segmentation results of the proposed method on data set images are compared with expert manual markings. The precision, recall, Dice index, and F1-score are 88.8%, 75.0%, 81.1%, and 81.3%, respectively, with an average processing time of 1.2 s. The algorithm is suitable for general CME image segmentation and offers high robustness and segmentation accuracy.

1. Introduction

Optical coherence tomography (OCT) [1] is an imaging technology based on low-coherence interference. It can obtain tomography of biological tissues by measuring the interference signals of the reflected light and the backscattered light. The reflected light comes from the reference arm and the backscattered light comes from different depths inside the biological tissue [2]. Compared with retinal imaging methods such as fundus cameras, ultrasound detection, and fundus angiography, OCT is safe, fast, high-precision, and non-invasive. Additionally, OCT has been widely used in the field of ophthalmology [3,4,5]. OCT technology can obtain two-dimensional tomographic imaging of the fundus, making it an important tool for the evaluation and diagnosis of ophthalmic diseases.
The macula is located in the optical center of the human eye, and the health of the macula is closely related to human vision. Macular edema is caused by the destruction of a retinal barrier: with an increase in vascular permeability, vascular fluid infiltrates or protein deposits form between the retinal layers. This leads to retinal swelling, resulting in decreased vision and even irreversible blindness [6,7]. Macular edema with an obvious cystic structure is called cystoid macular edema (CME). Early detection and treatment of macular edema are very important for preventing permanent visual impairment. However, commercial clinical instruments can only assess the degree of lesions by measuring retinal thickness; they can neither fully extract the CME nor obtain accurate information about the lesion. In addition, manual measurement of the retinal cyst area in OCT images by doctors is very time-consuming, and lengthy training is needed to accumulate the necessary experience [8]. Therefore, automatic segmentation of the retinal cyst area is an effective tool for evaluating retinal diseases, enabling their early detection, diagnosis, and treatment.
Image segmentation methods are divided into traditional methods and neural network methods. Among traditional methods, threshold-based approaches such as Otsu [9] are not suitable for complex images. Region-growing methods are susceptible to noise interference and thus easily produce false segmentation. Algorithms based on level sets, active contour models, graph theory, and graph cuts have excellent segmentation capability but are time-consuming. Segmentation algorithms based on machine learning are limited by the size of the data sets and the accuracy of manual labeling, and have low robustness. Among traditional methods, Wang et al. clustered the liquid area by the fuzzy C-means method and then used a fuzzy level set to segment the target [10]. Since the traditional level set method cannot converge, or converges with errors, locally on the surface, Li et al. proposed a distance-regularized level set evolution model (DRLSE) [11], which greatly improves the efficiency of the level set algorithm. Fernandez et al. used the GVF Snake model [12] of the active contour family combined with a nonlinear anisotropic diffusion filter to semi-automatically segment CME, which achieves high accuracy but takes a long time. Rashno et al. performed segmentation from a region of interest (ROI) based on the shortest-path method of graph theory and used K-means clustering for cyst segmentation [13]. Chiu et al. combined kernel-regression-based classification with graph theory to segment retinal boundaries; the algorithm has high space and time complexity [14]. For machine learning, Montuoro et al. combined unsupervised feature representation with graph-theoretic segmentation [15]. Machine learning methods are limited by the shortage of training data sets and require manual delineation of the cyst area as ground truth for the network. Thus, the pre-training cost of machine learning methods is high.
The wave algorithm [16] is a novel image segmentation algorithm, which obtains the potential energy of pixels in the image by analogy with the potential energy equation of fluid mechanics and then locates the target boundary according to the potential energy characteristics. The wave algorithm is a segmentation method based on image pixel features, which is much faster than level set, graph theory, and other methods. It has outstanding performance in retinal layer segmentation and can also be used for denoising while preserving boundaries [17]. Because the gray value of pixels changes gradually away from the target boundary, the wave algorithm can effectively reduce the influence of random speckle noise and is well suited to OCT image processing. The algorithm achieves high precision in retinal interlayer segmentation: compared with manual segmentation, the deviation is less than 1.5 pixels. It is both an accurate and a fast segmentation method.
The wave algorithm has outstanding performance in retinal layer segmentation. However, when applied to CME segmentation, it has the following problems: (a) the complete contour cannot be extracted; (b) the operating direction of the algorithm is fixed, and so is the direction of the extracted curve. To address these shortcomings, this paper describes an algorithm that can extract the CME contour in all directions. The direction adjustment function endows the operator with omnidirectional processing capability, which realizes full-contour detection for any figure bounded by a closed curve and extracts the complete target boundary. In addition, the proposed algorithm does not rely on the gradient information of the target boundary and accurately extracts the target contour under speckle noise. Moreover, additional processing steps are included: contrast enhancement and contour integration. The former brings higher accuracy to CME detection; the latter filters out interference from the choroid through connected-domain screening, avoiding erroneous results. Compared with other algorithms, the proposed algorithm has higher robustness, higher accuracy, and faster operation, which ensures automatic segmentation of CME and helps promote the development of ophthalmology and clinical medicine.

2. Methods

OCT is an imaging technology using the principle of low coherence interference. OCT technology measures the interference signals of reflected light from the reference arm and backscattered light from the sample arm in the Michelson interference light path to obtain the tomographic image of the sample. For retinal OCT images damaged by speckle noise, the wave algorithm has outstanding layer segmentation ability, so it can also extract CME boundaries from images containing CME. However, due to its unidirectional characteristics, the wave algorithm can only extract local contours. To solve this problem, this paper describes an omnidirectional wave operator, which endows the wave algorithm with multidirectional capability through the direction adjustment function, so that it can achieve complete segmentation of objects with closed curves.

2.1. Algorithm Overview

When the OCT system acquires fundus images, scattered light waves with random phases are coherently superimposed to produce speckle noise. Speckle noise appears as an irregular speckle pattern, which is randomly distributed in the image [18]. When directly processing the pixels in the image, speckle greatly affects the segmentation accuracy. For the contour extraction algorithm using gradient information, speckle will cause wrong segmentation. In order to reduce the influence of speckle noise, the first step of the omnidirectional wave operator in this paper is to construct a working environment. Through denoising and contrast stretching, the speckle noise is weakened and the contrast between the target and the background area is enhanced. The second step of the algorithm is image segmentation. As CME is located at a specific level of the retina, by determining the ROI area, segmentation results from other areas can be avoided, and the calculation speed can be greatly increased. The algorithm firstly performs ROI division in the second part, and then uses the omnidirectional wave operator to segment the CME. The third step of the algorithm is contour integration. After the results of the four directions are aggregated, the connected domain is processed to obtain the final CME contour. The algorithm flow chart is shown in Figure 1.

2.2. Construct the Working Environment

The basic principle of the OCT imaging system is low-coherence interference, and coherent measurement inevitably introduces coherent noise. The optical characteristics and movement of the measured object, the coherence of the light source, multiple scattering and distortion of the transmitted beam, and the size of the detector aperture all affect speckle noise. Speckle noise is therefore caused by interference and is difficult to remove [19]. It is randomly distributed in the image and degrades the imaging quality of the OCT system, so denoising is necessary in the processing and application of OCT images [20,21]. The wave algorithm uses the prior knowledge that the gray value at the boundary of the target area in retinal OCT images changes gradually. Therefore, the algorithm in this paper applies a Gaussian blur to the image, which constructs the working environment.
Due to the weak backscattered light of the fundus structure and the low optical power, the contrast between the tissue area and the background area is low, which affects the accuracy of the segmentation results. A gamma transformation can stretch the contrast between the tissue and the background. The contrast stretch should be moderate and should not make boundaries excessively sharp.
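As a minimal sketch of this working-environment step, the fragment below applies a Gaussian blur followed by a gamma stretch; the sigma and gamma values are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_working_environment(img, sigma=1.5, gamma=0.7):
    """Denoise with a Gaussian blur, then stretch contrast with a gamma
    transform.  sigma and gamma are illustrative values, not the paper's."""
    img = img.astype(np.float64)
    smooth = gaussian_filter(img, sigma=sigma)   # suppress speckle noise
    norm = smooth / 255.0                        # map gray values to [0, 1]
    stretched = norm ** gamma                    # gamma < 1 brightens weak tissue
    return stretched * 255.0

# Toy example: low-contrast tissue (gray 90) on a dark background (gray 30).
img = np.full((16, 16), 30.0)
img[4:12, :] = 90.0
out = build_working_environment(img)
```

With gamma below 1, the dim tissue band is lifted more than the background, widening the gray-level gap between them without sharpening boundaries.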

2.3. Image Segmentation

2.3.1. ROI Extraction

The OCT imaging results of the retina include the retina and choroid. The tomography images of the capillary layer, middle vascular layer, and large vascular layer in the choroid will cause interference in CME segmentation. Therefore, before segmentation, the ROI region is extracted to avoid the influence of choroidal imaging.
CME is distributed in most layers of the retina. Once the inner and outer boundaries of the retina are determined, the region to be segmented is determined, which reduces the calculation time. The boundaries of the ROI are the inner limiting membrane (ILM) and the outer Bruch membrane (OBM), as shown in Figure 2.
The ROI is determined by calculating the row-wise mean gray distribution with Formula (1).
$$\mathrm{mean}_{\mathrm{gray}}(i) = \frac{\sum_{j=1}^{N} I(i,j)}{N}, \quad i = 1, 2, \dots, M, \; j = 1, 2, \dots, N$$
Given an image of size M × N pixels, the gray value of each pixel is I(i,j), where i is the row coordinate of the pixel and j is the column coordinate of the pixel. The average row gray value is calculated by using Formula (1).
For the convenience of observation, the white curve on the left side of Figure 3 indicates the change in the average gray value of each row, and the blue straight line indicates the mean gray value of the whole image. The intersection points A and B of the two curves can divide the image into three parts. From top to bottom are the posterior vitreous, retina, and sclera. The segmentation range of this algorithm is the retina which is the region between points A and B. Using the wave algorithm, the image is segmented from top to bottom to obtain the ILM layer and the OBM layer, that is, the ROI area.
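The row-mean criterion above can be sketched as follows: rows whose mean gray value exceeds the whole-image mean form the retinal band, and its first and last rows play the role of the crossing points A and B. The toy B-scan is illustrative.

```python
import numpy as np

def find_retina_rows(img):
    """Locate points A and B: the band of rows whose per-row mean gray
    (Formula (1)) lies above the whole-image mean."""
    row_mean = img.mean(axis=1)        # mean_gray(i) for every row i
    global_mean = img.mean()
    rows = np.flatnonzero(row_mean > global_mean)
    return rows[0], rows[-1]           # A (top crossing) and B (bottom crossing)

# Toy B-scan: dark vitreous / bright retina / dark sclera.
img = np.full((30, 50), 20.0)
img[10:20, :] = 150.0                  # "retina" rows
a, b = find_retina_rows(img)
print(a, b)  # 10 19
```

The wave algorithm would then be run only between these two rows to obtain the ILM and OBM layers.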

2.3.2. Contour Extraction by Omnidirectional Wave Operator

The basic idea of the wave algorithm is to transform the gray value of a pixel into potential energy as in fluid mechanics. By modifying the fluid potential energy equation, it attains good applicability to images. In the fluid potential energy equation (Formula (2)), g is the gravitational acceleration, h is the fluid height, gh is the gravitational potential energy (φ_g), p is the fluid pressure, ρ is the fluid density, p/ρ represents the pressure potential energy, v is the fluid velocity, and v²/2 represents the kinetic energy.
$$\varphi = gh + \frac{p}{\rho} + \frac{v^2}{2}$$
In the image, pixels are treated as fluid particles. Since each pixel is square, the density is ρ = 1; taking the z-axis of the image in three-dimensional space as the gray value, the pressure at height H is p = 0, so the pressure energy p/ρ = 0. The traditional fluid potential energy equation thus evolves into the wave potential energy equation (Formula (3)) and the wave correction equation (Formula (4)) for image processing. The fluid velocity v is replaced by normalized gray difference values (v and v_q): v is the fluid velocity in the 3 × 3 template, and v_q is the fluid velocity in the 3 × 2 template located in front of the central pixel. σ is the speed regulation factor. The wave potential energy equation yields a set of points containing the boundary lines, and the wave correction equation extracts the boundary lines from this set.
$$\mathrm{wave} = \varphi_v + \varphi_g = v \cdot v_q \cdot \sigma + \varphi_g$$
$$P_{\mathrm{energy}} = \begin{cases} 1 & (K > 1) \wedge (C > 1) \\ 0 & \text{otherwise} \end{cases}$$
When extracting the contour of the target area, the wave algorithm can only extract curves whose tangent direction is perpendicular to the operating direction. In the segmentation of the ten-layer structure of the retina, the boundary lines of the retinal interlayer structure are approximately perpendicular to the operating direction, so the wave algorithm can effectively extract the retinal layer structure. However, for a vertically biconvex oval or transverse spindle-shaped structure such as CME, the wave algorithm can only extract the curve segments perpendicular to the operating direction, so it cannot obtain the complete contour. To address this unidirectionality, this paper puts forward an omnidirectional wave operator with a direction adjustment function, which gives the wave algorithm the ability to extract closed curves.
The omnidirectional wave operator operates on the target in four directions. Specifically, the coordinates in the equations are transformed by the direction adjustment function (Formula (5)): (i, j) is the pixel coordinate in the image, and (i′, j′) is the pixel coordinate after direction adjustment. θ is the rotation angle; a positive angle is counterclockwise and a negative angle is clockwise. θ is set to 0, π/2, π, and 3π/2 in turn.
$$\begin{pmatrix} i' \\ j' \\ 1 \end{pmatrix} = A \begin{pmatrix} i \\ j \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} i \\ j \\ 1 \end{pmatrix}$$
As an example to aid understanding of the direction adjustment function, point A rotates counterclockwise by θ to A′ (Figure 4). The mathematical relationship between the coordinates of A, the coordinates of A′, and the rotation angle θ can be derived in polar coordinates and written in matrix form, as shown in Formula (5).
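A small sketch of the direction adjustment of Formula (5), applying the homogeneous rotation matrix to a single coordinate; the test point and angle are illustrative.

```python
import numpy as np

def direction_adjust(i, j, theta):
    """Homogeneous rotation of a pixel coordinate (Formula (5)).
    Positive theta rotates counterclockwise."""
    A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    i2, j2, _ = A @ np.array([i, j, 1.0])
    return i2, j2

# Rotating (1, 0) by pi/2 counterclockwise gives (0, 1).
i2, j2 = direction_adjust(1.0, 0.0, np.pi / 2)
```

In practice, rotating a whole image by the multiples of 90° used here (θ = 0, π/2, π, 3π/2) can also be done losslessly with `np.rot90`, which permutes pixels without interpolation.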
After direction adjustment, the parameters (Formulas (6) to (10)) in the wave potential energy equation and the wave correction equation acquire multidirectional properties. φ_g represents the gravitational potential energy, where Z is the 3 × 3 template containing the central pixel, I(i,j) is the gray value of the central pixel of the template, g(i,j) is the Gaussian weighting function, and MAX is the maximum gray value of the whole image. φ_v represents the kinetic energy, v is the central velocity, v_q is the wavefront velocity, and σ is a regulation factor. The function of the wavefront velocity v_q is to suppress the sharp increase in kinetic energy caused by speckle noise near the boundary line. The Q template is a 3 × 2 template in front of the central pixel, and v_q is the normalized difference value of adjacent pixels in the Q template. The regulation factor σ prevents large, high-gray-value areas in the image from producing excessive kinetic energy and over-segmenting boundary areas; σ controls the influence of the gray value of the 3 × 3 template on the kinetic energy, and its value lies between 0 and 1. In the wave correction equation, K is a geometric shape determination function, which reflects the geometric distribution of gray values near the point to be measured; C is the gray difference judgment function, which reflects the numerical difference in gray values near the point to be measured.
$$\varphi_g = \frac{\sum_{i \in Z} \sum_{j \in Z} I(i,j) \times g(i,j)}{MAX}$$
$$\varphi_v = v \times v_q \times \sigma$$
$$\sigma = \exp\left(-\left(\frac{I(i,j)}{255 \times \delta}\right)^2\right)$$
$$K = \frac{I(i,j-1) - I(i,j-3)}{I(i,j+2) - I(i,j)}$$
$$C = \frac{I(i,j) + I(i,j+1) + I(i,j+2)}{I(i,j-1) + I(i,j-2) + I(i,j-3)}$$
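As a hedged illustration of two of these parameters, the fragment below computes the gravitational potential energy φ_g of Formula (6) and a regulation factor σ in the spirit of Formula (8). The 3 × 3 Gaussian template weights and the normalization inside σ are assumptions, since the paper does not give numeric values.

```python
import numpy as np

def gravity_potential(img, i, j, gauss_kernel, max_gray):
    """phi_g: Gaussian-weighted sum over the 3x3 template around (i, j),
    normalized by the maximum gray value (Formula (6))."""
    patch = img[i - 1:i + 2, j - 1:j + 2]
    return float((patch * gauss_kernel).sum() / max_gray)

def regulation_factor(gray, delta=0.5):
    """sigma in (0, 1]: damps kinetic energy in high-gray regions.
    The exact normalization is an assumption based on Formula (8)."""
    return float(np.exp(-(gray / (255.0 * delta)) ** 2))

# A normalized 3x3 Gaussian weighting template (illustrative values).
g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
g /= g.sum()

img = np.full((5, 5), 100.0)
phi_g = gravity_potential(img, 2, 2, g, img.max())
print(phi_g)  # 1.0 for a uniform patch: weighted mean equals the maximum here
```

For a uniform patch the weighted mean equals the patch value, so φ_g normalizes to 1; darker neighborhoods give proportionally smaller potential energy.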
After processing with the wave potential energy equation and the wave correction equation, the contour of the target can be obtained. In this paper, an image containing CME is selected to verify the extraction results of the omnidirectional wave operator in each of the four directions as well as the fused result, as shown in Figure 5. Since the image is a two-dimensional plane, a two-dimensional coordinate system is established in the image plane. The rotation angle θ is the angle subtended at the rotation center between any point A and its rotated image A′. When the operator turns to π/2, π, and 3π/2, the operating direction is parallel to the positive direction of the y-axis, the negative direction of the x-axis, and the negative direction of the y-axis, respectively.
It can be seen from Figure 5 that the contour lines extracted in each of the four directions by the omnidirectional wave operator have approximately the same tangent direction, and the result in any single direction is non-closed and incomplete, so the unidirectional wave algorithm cannot extract complete closed curves. As CME is elliptical, the wave algorithm alone cannot achieve complete segmentation of the cyst. The algorithm proposed in this paper adds an orientation characteristic to the wave algorithm: curves in different directions are obtained through independent extraction in four directions, and the complete result is then obtained by curve fusion, realizing accurate and complete segmentation of CME.
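The curve fusion step can be sketched as a pixel-wise logical OR of the four directional boundary masks; the toy masks below stand in for real wave-operator output, each contributing one side of a square contour.

```python
import numpy as np

def fuse_contours(masks):
    """A pixel belongs to the final contour if any of the four
    directional runs marked it as a boundary point."""
    return np.logical_or.reduce(masks)

# Four directional results for a square contour: each run finds only
# the edge roughly perpendicular to its operating direction.
m = [np.zeros((6, 6), bool) for _ in range(4)]
m[0][1, 1:5] = True   # top edge    (operator running downward)
m[1][1:5, 4] = True   # right edge
m[2][4, 1:5] = True   # bottom edge
m[3][1:5, 1] = True   # left edge
contour = fuse_contours(m)
print(contour.sum())  # 12 boundary pixels: a closed 4x4 ring
```

No single mask is closed, but their union is the full ring, mirroring how the four directional runs in Figure 5 combine into a closed CME contour.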
The omnidirectional wave operator retains the advantages of the wave algorithm. By using gradual gray values to identify boundaries, it is insensitive to randomly distributed speckle noise and avoids false segmentation caused by noise or local mutations. In addition, compared with segmentation methods such as the active contour model and the level set, which iterate many times to seek energy minimization, the pixel-based method has an outstanding advantage in computing speed. Furthermore, the omnidirectional wave operator has directional characteristics and can extract arbitrary targets bounded by closed curves. Even among segmentation algorithms that distinguish directions, the omnidirectional wave operator retains advantages. For example, the segmentation method based on the directional graph method [22] can only extract targets with a large number of curves in the same direction and depends on the reliability of the points' directions, so it performs poorly on single-contour targets such as CME.

2.4. Contour Integration

After the boundary of the target region is obtained by the omnidirectional wave algorithm, the processing results from the four directions need to be integrated. Contour integration must (a) classify boundary points and non-boundary points uniformly and (b) eliminate other imaging interference, such as hard exudates [23]. Before contour integration, the segmentation results from the four directions are binarized, and boundary points and non-boundary points are marked, respectively. Then, a starting direction is fixed, and the extraction results obtained by the operators in the other directions are superimposed. As long as a pixel is identified as a boundary point in some direction, it becomes a point on the final omnidirectional boundary line. This method compensates for the shortcomings of the unidirectional wave algorithm and of other algorithms that lack omnidirectional target segmentation, and obtains a more complete target contour.
Hard exudates in the fundus also have closed-curve contours. To avoid their interference, the algorithm in this paper filters out segmentation results of hard exudates by judging the mean gray value of each connected domain. Since the CME capsule cavity is filled with liquid components such as plasma and vitreous humor, its imaging presents a low-reflectivity signal and its internal gray value is low, whereas hard exudates present as high-reflectivity particles with high internal gray values. The gray-scale screening uses the average gray value of the whole image as the threshold: among all connected domains, only those whose average gray value is lower than the threshold are retained, and those whose average gray value is higher than the threshold are filtered out, eliminating the interference of hard exudates.
In addition, since the cyst appears between the layers of the retina, the thickness of the cyst is within a certain range. According to the distribution of cysts in the data set, we choose 1.5 times the average thickness of the retina as the upper limit of the cyst size and 0.5 times the average thickness of the retina as the lower limit. By filtering the position of the fusion result, the contour information can be more accurate.
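A sketch of the connected-domain screening described above, combining the gray-mean threshold with the height bounds. `scipy.ndimage.label` stands in for whatever connected-component routine the authors used, and the toy image and bounds are illustrative.

```python
import numpy as np
from scipy.ndimage import label

def screen_components(mask, img, min_h, max_h):
    """Keep a connected component only if (a) its mean gray value is below
    the whole-image mean (cysts are dark; hard exudates are bright) and
    (b) its height lies inside [min_h, max_h]."""
    labeled, n = label(mask)
    gray_thresh = img.mean()
    keep = np.zeros_like(mask, dtype=bool)
    for k in range(1, n + 1):
        comp = labeled == k
        rows = np.flatnonzero(comp.any(axis=1))
        height = rows[-1] - rows[0] + 1
        if img[comp].mean() < gray_thresh and min_h <= height <= max_h:
            keep |= comp
    return keep

img = np.full((20, 20), 100.0)
img[5:10, 2:8] = 20.0        # dark cyst region
img[5:8, 12:16] = 240.0      # bright hard exudate
mask = np.zeros((20, 20), bool)
mask[5:10, 2:8] = True
mask[5:8, 12:16] = True
out = screen_components(mask, img, min_h=2, max_h=10)
print(out.sum())  # 30: only the dark 5x6 cyst survives the screening
```

In the paper the bounds would be 0.5× and 1.5× the average retinal thickness rather than fixed pixel counts.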

2.5. Evaluation Method

According to the correctness of the predicted result, the predicted result can be divided into four categories: true positive, false positive, true negative, and false negative. A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class. A false positive is an outcome where the model incorrectly predicts the positive class. Additionally, a false negative is an outcome where the model incorrectly predicts the negative class.
To quantitatively evaluate the correctness and feasibility of the algorithm, four indexes are used to evaluate the experimental results: the Dice index, precision, recall, and F1-score. The Dice index measures the similarity between two sets; in image segmentation, it measures the overlap between the predicted result and the true result. The value of the Dice index is between 0 and 1; the larger the Dice index, the more accurate the prediction. It is generally believed that a Dice coefficient higher than 0.70 indicates excellent consistency [24].
$$\mathrm{Dice} = \frac{2 N_{\mathrm{true\_positive}}}{2 N_{\mathrm{true\_positive}} + N_{\mathrm{false\_positive}} + N_{\mathrm{false\_negative}}}$$
Precision, also known as positive predictive value, indicates the proportion of predicted positives that are correct, that is, the ratio of true positives to all positive predictions. Precision lies in the range [0,1]; the larger the value, the more accurate the segmentation results. It reflects the anti-false-detection ability of the segmentation algorithm.
$$\mathrm{Precision} = \frac{N_{\mathrm{true\_positive}}}{N_{\mathrm{true\_positive}} + N_{\mathrm{false\_positive}}}$$
Recall, also called sensitivity, indicates the proportion of the true result that is correctly predicted. Recall lies in the range [0,1]; the larger the value, the more accurate the segmentation results. It reflects the anti-missed-detection ability of the segmentation algorithm.
$$\mathrm{Recall} = \frac{N_{\mathrm{true\_positive}}}{N_{\mathrm{true\_positive}} + N_{\mathrm{false\_negative}}}$$
Precision and recall are often summarized as a single quantity, the F1-score, which is the harmonic mean of both measures and is in the range [0,1].
$$F1\text{-}\mathrm{Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
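All four indexes can be computed directly from boolean masks; note that for a single pair of binary masks the Dice index equals the F1-score, so the two differ in practice only when averaged over many images. The masks below are illustrative.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise Dice, precision, recall, and F1 from boolean masks,
    following the definitions above."""
    tp = (pred & truth).sum()
    fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dice, precision, recall, f1

# Toy masks: a 4x4 true cyst and a prediction shifted down by one row.
truth = np.zeros((10, 10), bool); truth[2:6, 2:6] = True   # 16 pixels
pred = np.zeros((10, 10), bool);  pred[3:7, 2:6] = True    # 16 pixels
dice, p, r, f1 = segmentation_metrics(pred, truth)
print(dice, p, r, f1)  # 0.75 0.75 0.75 0.75
```

Here tp = 12, fp = fn = 4, so all four indexes coincide at 0.75; with unbalanced errors, precision and recall would diverge while Dice and F1 stay equal.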

3. Results

The data used in this paper are from a public SD-OCT data set collected by Farsiu et al. at the Vision and Image Processing Laboratory of Duke University [14]. The data set was acquired with the standard Spectralis (Heidelberg Engineering, Heidelberg, Germany) 61-line volume scan protocol. Each volume scan consists of 61 B-scans × 768 A-scans, with an axial resolution of 3.87 µm/pixel, lateral resolutions ranging from 10.94 to 11.98 µm/pixel, and azimuthal resolutions ranging from 118 to 128 µm/pixel. The data set contains 110 fundus OCT images collected from 10 patients, including normal fundus images and fundus images with CME. The data set was manually marked by two experts from the Duke Eye Center, both fellowship-trained medical retina specialists on the faculty of the Duke Eye Center, with over 5 years of clinical experience working with diabetic subjects and many years of experience studying OCT images.
The boundary obtained by this algorithm is the predicted value, and the cyst manually marked by two experts is the true value. Figure 6a,b show the manual segmentation curve of experts in the database, and Figure 6c is the result of manual segmentation of experts in Tianjin Eye Hospital, China.
Among existing algorithms, two with accurate segmentation results and strong robustness were selected for comparative experiments. DRLSE [11] and the Snake active contour model [25] both require an initial contour to be selected, and their segmentation of target areas with complex contours is poor (Figure 6d,e). Figure 6f shows the segmentation result of the proposed algorithm, which is good. Table 1 gives the objective evaluation of the segmentation results obtained by DRLSE, the Snake active contour model, and the proposed algorithm. The Dice index, precision, recall, and F1-score of the proposed algorithm are 81.1%, 88.8%, 75.0%, and 81.3%, respectively.
Among the four indexes, the Dice index measures the similarity between the predicted result and the true result, and the F1-score summarizes precision and recall. The proposed algorithm performs best. The precision of all three algorithms is very high; that is, each has a strong ability to segment the true target area without judging pixels in non-target areas as targets, indicating few false detections. Recall measures the recognition rate of the true result and reflects the anti-missed-detection ability of the segmentation algorithm; among the three algorithms, the recall of the proposed algorithm is the highest. It should also be noted that expert manual annotation is subjective, and the annotations of the three experts differed slightly, so we used their average to evaluate the algorithm.
We compared the processing speed of the proposed algorithm with Snake and DRLSE. The average times of Snake and DRLSE are 49 s and 18.43 s, while the proposed algorithm takes only 1.2 s: the operating time is reduced by more than an order of magnitude.
According to the above evaluation indexes, the algorithm in this paper is not only superior to the other two algorithms in segmentation accuracy but also has great advantages in operation speed. It is a fast and fully automatic algorithm with high segmentation accuracy.
This paper proposes an automatic segmentation method with multiple operating directions. The choice of the number of directions was verified by experiment and theoretical analysis. When designing the algorithm, based on experience, we believed that an arbitrarily shaped target with a closed contour would need to be segmented in eight directions, but in actual experiments we found that segmentation in four directions is sufficient to determine any closed contour. To verify this, we considered four, six, and eight directions and calculated the evaluation indexes of the segmentation results in each case. Figure 7 shows the segmentation results of the three cases, and Table 2 shows the four evaluation indexes.
It can be seen from Figure 7 that the four-direction and eight-direction operators segment better than the six-direction operator, which cannot fully identify the cyst. From the evaluation indexes, the differences between the four- and eight-direction operators in the Dice index and F1-score are very small, so they have similar segmentation performance. We chose the four-direction operator over the eight-direction operator for the following reasons:
(a) Recall is more important in disease detection applications. In disease screening and testing, we pay more attention to missed detections: even if the precision is lower, false detections can be removed by secondary screening or in-depth inspection, whereas missed detections are less tolerable than false detections. The four-direction operator has a lower missed-detection rate than the eight-direction operator.
(b) The authenticity of medical images. In six- and eight-direction computation, the rotated operator changes the image pixel distribution to some extent and affects the segmentation result; the four-direction operator does not. Even though this impact is small, the authenticity of medical images should be maintained in medical image processing.
(c) Calculation speed. We compared the processing speed of operators with different numbers of operating directions. In the CME detection experiment, the four-direction operator takes 1.2 s, the six-direction operator 1.273 s, and the eight-direction operator 1.286 s. The fewer the directions, the shorter the calculation time.

4. Conclusions

This paper proposes a method for automatically segmenting CME in fundus OCT images that achieves rapid segmentation while maintaining good segmentation accuracy, applicability, and robustness. The algorithm treats image pixels as fluid particles: starting from the fluid potential energy equation in fluid mechanics, we propose a potential energy equation suitable for images and design a direction adjustment function, based on the omnidirectional wave operator, to realize the segmentation of CME. Because it tracks the gradual gray-value characteristics of region boundaries, the algorithm is insensitive to randomly distributed speckle noise. ROI extraction filters out sub-choroidal interference while accelerating the computation. Compared with the manual segmentation results of experts, the Dice index, precision, recall, and F1-score of the algorithm's segmentation results are 81.1%, 88.8%, 75.0%, and 81.3%, respectively, showing good consistency. The average running time is 1.2 s, which can provide an important basis for automatic detection in clinical ophthalmic diagnosis and treatment instruments.
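The four evaluation indices reported above follow the standard pixel-wise definitions computed between a predicted binary mask and an expert's manual mask; a minimal sketch (variable names are ours, not the paper's):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise Dice, precision, recall, and F1 between two binary
    masks. For binary masks the Dice index and F1-score coincide, which
    is why the two indices in Tables 1 and 2 agree to rounding."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # correctly segmented pixels
    fp = np.logical_and(pred, ~truth).sum()   # false detections
    fn = np.logical_and(~pred, truth).sum()   # missed pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"dice": dice, "precision": precision,
            "recall": recall, "f1": f1}
```

Averaging these per-image metrics over the data set against each expert's marking yields the per-expert columns of Table 1.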

Author Contributions

Conceptualization, J.L. and S.L.; methodology, J.L. and S.L.; software, J.L.; validation, J.L. and S.L.; formal analysis, H.C.; investigation, J.L. and S.L.; resources, J.L., S.L., X.C. and Y.W.; data curation, J.L.; writing—original draft preparation, J.L., S.L., X.C., H.C. and Y.W.; writing—review and editing, J.L., X.C. and Y.W.; supervision, X.C. and Y.W.; project administration, J.L., S.L. and Y.W.; funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2017YFC0109901.

Data Availability Statement

The data used in this paper are available at www.duke.edu/~sf59/Chiu_BOE_2014_dataset.htm. The data are also contained within the article [14].

Acknowledgments

The authors would like to thank S. Farsiu for the open-access data set provided and Lin of Tianjin Eye Hospital, China, for the manual marking of the data set.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A.; et al. Optical coherence tomography. Science 1991, 254, 1178–1181.
2. Schmitt, J.M. Optical coherence tomography (OCT): A review. IEEE J. Sel. Top. Quantum Electron. 1999, 5, 1205–1215.
3. Huber, R.; Wojtkowski, M.; Taira, K.; Fujimoto, J.G.; Hsu, K. Amplified, frequency swept lasers for frequency domain reflectometry and OCT imaging: Design and scaling principles. Opt. Express 2005, 13, 3513–3528.
4. Shuang-shuang, Z.; Gang, T.; Yi, S. Application of swept-source optical coherence tomography in ophthalmology. Recent Adv. Ophthalmol. 2017, 37, 788–792.
5. Potsaid, B.; Baumann, B.; Huang, D.; Barry, S.; Cable, A.E.; Schuman, J.S.; Duker, J.S.; Fujimoto, J.G. Ultrahigh speed 1050 nm swept source/Fourier domain OCT retinal and anterior segment imaging at 100,000 to 400,000 axial scans per second. Opt. Express 2010, 18, 20029–20048.
6. Chen, X.J.; Niemeijer, M.; Zhang, L.; Lee, K.; Abramoff, M.D.; Sonka, M. Three-Dimensional Segmentation of Fluid-Associated Abnormalities in Retinal OCT: Probability Constrained Graph-Search-Graph-Cut. IEEE Trans. Med. Imaging 2012, 31, 1521–1531.
7. Le, D.; Alam, M.; Miao, B.A.; Lim, J.I.; Yao, X. Fully automated geometric feature analysis in optical coherence tomography angiography for objective classification of diabetic retinopathy. Biomed. Opt. Express 2019, 10, 2493–2503.
8. Debuc, D.C. A Review of Algorithms for Segmentation of Retinal Image Data Using Optical Coherence Tomography; InTech: London, UK, 2011.
9. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
10. Wang, J.; Zhang, M.; Pechauer, A.D.; Liu, L.; Hwang, T.S.; Wilson, D.J.; Li, D.W.; Jia, Y.L. Automated volumetric segmentation of retinal fluid on optical coherence tomography. Biomed. Opt. Express 2016, 7, 1577–1589.
11. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process 2010, 19, 3243–3254.
12. Fernández, D.C. Delineating fluid-filled region boundaries in optical coherence tomography images of the retina. IEEE Trans. Med. Imaging 2005, 24, 929–945.
13. Rashno, A.; Koozekanani, D.D.; Drayna, P.M.; Nazari, B.; Sadri, S.; Rabbani, H.; Parhi, K.K. Fully Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images with Diabetic Macular Edema Using Neutrosophic Sets and Graph Algorithms. IEEE Trans. Biomed. Eng. 2018, 65, 989–1001.
14. Chiu, S.J.; Allingham, M.J.; Mettu, P.S.; Cousins, S.W.; Izatt, J.A.; Farsiu, S. Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema. Biomed. Opt. Express 2015, 6, 1172–1194.
15. Montuoro, A.; Waldstein, S.M.; Gerendas, B.S.; Schmidt-Erfurth, U.; Bogunovic, H. Joint retinal layer and fluid segmentation in OCT scans of eyes with severe macular edema using unsupervised representation and auto-context. Biomed. Opt. Express 2017, 8, 1874–1888.
16. Lou, S.; Chen, X.; Han, X.; Liu, J.; Wang, Y.; Cai, H. Fast Retinal Segmentation Based on the Wave Algorithm. IEEE Access 2020, 8, 53678–53686.
17. Lou, S.; Chen, X.; Liu, J.; Shi, Y.; Qu, H.; Wang, Y.; Cai, H. Fast OCT image enhancement method based on the sigmoid-energy conservation equation. Biomed. Opt. Express 2021, 12, 1792–1803.
18. Zaki, F.; Wang, Y.; Su, H.; Yuan, X.; Liu, X. Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography. Biomed. Opt. Express 2017, 8, 2720–2731.
19. Baghaie, A.; D'Souza, R.M.; Yu, Z. State-of-the-Art in Retinal Optical Coherence Tomography Image Analysis. Quant. Imaging Med. Surg. 2015, 5, 603.
20. Schmitt, J.M.; Xiang, S.H.; Yung, K.M. Speckle in optical coherence tomography. J. Biomed. Opt. 1999, 4, 95–105.
21. Karamata, B.; Hassler, K.; Laubscher, M.; Lasser, T. Speckle statistics in optical coherence tomography. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2005, 22, 593–596.
22. Jiang, J.-Y.; Hu, X.-D.; Xu, K.-X.; Yu, Q.-L. Segmentation of Fingerprint Images Using Genetic Algorithm and Directional Image Method. J. Eng. Graph. 2002, 76–81.
23. Arf, S.; Sayman Muslubas, I.; Hocaoglu, M.; Ersoz, M.G.; Ozdemir, H.; Karacorlu, M. Spectral domain optical coherence tomography classification of diabetic macular edema: A new proposal to clinical practice. Graefe's Arch. Clin. Exp. Ophthalmol. 2020, 258, 1165–1172.
24. Zijdenbos, A.P.; Dawant, B.M.; Margolin, R.A.; Palmer, A.C. Morphometric analysis of white matter lesions in MR images: Method and validation. IEEE Trans. Med. Imaging 1994, 13, 716–724.
25. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
Figure 1. The flowchart of the algorithm.
Figure 2. The locations of ILM and OBM in retina image.
Figure 3. The result of the ROI.
Figure 4. A description of the direction adjustment function. The direction adjustment function is a matrix form of the rotation relationship between A and A′.
Figure 5. Segmentation results of the operator in different directions: (a–d) the results of setting the direction θ to 0, π/2, π, and 3π/2; (e) the fusion of the segmentation results in the four directions.
Figure 6. Comparison of algorithm segmentation results: (a–c) the results of expert manual segmentation; (d) the segmentation result of the Snake active contour model; (e) the segmentation result of the DRLSE algorithm; (f) the result of the proposed algorithm.
Figure 7. Segmentation results of the operator with different numbers of directions: (a) the 4-direction operator; (b) the 6-direction operator; (c) the 8-direction operator.
Table 1. Comparison of evaluation indexes of segmentation results.

DRLSE
            Expert A   Expert B   Expert C   Average
Dice        0.516      0.558      0.611      0.562
Precision   0.969      0.982      0.971      0.974
Recall      0.352      0.390      0.445      0.396
F1-Score    0.516      0.558      0.610      0.563

Snake
            Expert A   Expert B   Expert C   Average
Dice        0.742      0.709      0.799      0.750
Precision   0.968      0.877      0.909      0.918
Recall      0.601      0.596      0.713      0.637
F1-Score    0.742      0.710      0.799      0.752

Proposed Algorithm
            Expert A   Expert B   Expert C   Average
Dice        0.772      0.782      0.880      0.811
Precision   0.898      0.864      0.902      0.888
Recall      0.678      0.714      0.859      0.750
F1-Score    0.773      0.782      0.880      0.813
Table 2. Comparison of the operator with different numbers of directions.

              Dice Index   Precision   Recall   F1-Score
4 directions  0.811        0.888       0.750    0.813
6 directions  0.787        0.903       0.683    0.777
8 directions  0.816        0.955       0.716    0.818
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Liu, J.; Lou, S.; Chen, X.; Cai, H.; Wang, Y. Fast Segmentation Algorithm for Cystoid Macular Edema Based on Omnidirectional Wave Operator. Appl. Sci. 2021, 11, 6480. https://doi.org/10.3390/app11146480

