
Artificial Visual System for Orientation Detection

1 Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
2 School of Electrical and Computer Engineering, Kanazawa University, Kanazawa-shi 920-1192, Japan
* Author to whom correspondence should be addressed.
Electronics 2022, 11(4), 568; https://doi.org/10.3390/electronics11040568
Submission received: 6 January 2022 / Revised: 9 February 2022 / Accepted: 9 February 2022 / Published: 13 February 2022

Abstract
The human visual system is one of the most important components of the nervous system, responsible for visual perception. Research on orientation detection, in which neurons of the visual cortex respond only to a line stimulus in a particular orientation, is an important driving force of computer vision and biological vision. However, the principle underlying orientation detection remains a mystery. To solve this mystery, we propose a completely new mechanism that explains planar orientation detection in a quantitative manner. First, we assume that there are planar orientation-detective neurons which respond only to a particular planar orientation locally, and that these neurons detect local planar orientation information based on nonlinear interactions that take place on the dendrites. Then, we propose an implementation of these local planar orientation-detective neurons based on their dendritic computations, use them to extract the local planar orientation information, and infer the global planar orientation information from the local planar orientation information. Furthermore, based on this mechanism, we propose an artificial visual system (AVS) for planar orientation detection and other visual information processing. To prove the effectiveness of our mechanism and the AVS, we conducted a series of experiments on images containing rectangles of various sizes, shapes and positions. Computer simulations show that the mechanism performs planar orientation detection perfectly in all experiments, regardless of the objects' sizes, shapes and positions. Furthermore, we compared the performance of the AVS and a traditional convolutional neural network (CNN) on planar orientation detection and found that the AVS completely outperformed the CNN in terms of identification accuracy, noise resistance, computation and learning cost, hardware implementation and reasonability.

1. Introduction

In 1981, David Hubel and Torsten Wiesel won the Nobel Prize in Medicine for their landmark discovery of orientation preference and related works [1,2]. Hubel and Wiesel identified orientation-selective cells in the primary visual cortex (V1) and proposed a simple yet powerful model of how such orientation selectivity could emerge from nonselective thalamocortical inputs [1]. The model has become a central frame of reference for understanding cortical computation and its underlying mechanisms [3,4,5]. Gaining more insight into the functional mechanisms of the visual cortex may bring the capabilities of artificial vision closer to those of biological systems and result in new developments in computer architectures. However, despite 60 years of intense research effort, three basic questions remain unanswered [6,7,8]: (1) how, (2) to what degree, and (3) by what mechanisms do the orientation-selective cells contribute to the detection of the global orientation of an object with different sizes or positions? In this paper, we first offer a novel quantitative mechanism that explains how selectivity for planar orientation could be produced by a model whose circuitry is based on the anatomy of the V1 cortex, and how this selectivity contributes to the detection of the global planar orientation of a rectangular object with different sizes, shapes or positions. We assume that planar orientation-selective cells, which we call local planar orientation-detective neurons, exist in the retina of the visual nervous system. Each of these local planar orientation-detective neurons receives its own input through photoreceptors and ON-OFF response cells from the receptive field, selectively picks up an adjacent input, and computes a response only to the planar orientation indicated by the selected adjacent input.
We implement a model of the local planar orientation-detective neuron based on the dendritic neuron model that the authors proposed previously [9,10,11] and use it to realize several planar orientation-detective neuron models, each of which responds only to a particular planar orientation. Then, we propose four possible schemes to measure the activation of the local planar orientation-detective neurons: (1) scanning over the two-dimensional inputs of an image with a single local planar orientation-detective neuron applied, for every input, to its adjacent inputs; (2) scanning over the two-dimensional inputs of an image with a group of local planar orientation-detective neurons; (3) sliding over the two-dimensional inputs of an image with a small array of grouped local planar orientation-detective neurons; and (4) letting each input of a two-dimensional image have its own independent local planar orientation-detective neurons. Since these neurons give local planar orientation responses that are localized in space, and their outputs can be taken as evidence about the global planar orientation, we can obtain the global planar orientation directly by measuring the outputs of these local planar orientation-detective neurons. Secondly, based on this mechanism, we propose an artificial visual system (AVS) for planar orientation detection and other visual information processing. To prove the effectiveness of our mechanism and the AVS based on it, we conducted a series of experiments using a dataset of 20,000 images of rectangular objects with various sizes and positions at many different planar orientations. Computer simulations show that the mechanism and the mechanism-based AVS performed planar orientation detection very accurately in all experiments, regardless of the sizes, shapes and positions of the objects.
Furthermore, we used a traditional convolution neural network (CNN), trained it to perform planar orientation detection and compared its results with those of the AVS. Based on the computer simulations and analysis, we conclude that the AVS outperforms the CNN in planar orientation detection in terms of identification accuracy, noise resistance, computation and learning cost and reasonability.

2. Methods

2.1. Dendritic Neuron Model

Artificial neural networks (ANNs) have been a research hotspot in the field of artificial intelligence since the 1980s [12,13]. An ANN is a mathematical model that mimics the information processing mechanism of the synaptic connection structure in the brain. To date, hundreds of artificial neural network models have been developed, and they have shown very good performance in technical fields such as pattern recognition, medical diagnosis and time-series forecasting [13,14,15]. However, all these networks use the traditional McCulloch and Pitts neuron model as their basic computation unit [16]. The McCulloch and Pitts model does not take the nonlinear mechanisms of dendrites into account [17]. Meanwhile, recent studies have provided strong circumstantial support for the notion that dendrites play a key role in the overall computation performed by a neuron [18,19,20,21,22,23,24,25]. Koch, Poggio and Torre found that in the dendrites of a retinal ganglion cell, if an activated inhibitory synapse is closer than an excitatory synapse to the cell body, the excitatory signal will be intercepted (shunted) [26,27]. Thus, the interaction between the synapses on a dendritic branch can be considered a logical AND operation [28], and a dendritic branch point may sum the currents from its branches, such that its output acts as a logical OR on its inputs [29,30,31]. The signal is then conducted to the cell body (soma), and when it exceeds the threshold, the cell fires, sending a signal down the axon to other neurons. Figure 1a shows a model that implements an idealized δ cell. If the inhibitory interaction is described as a NOT gate, the operation implemented in Figure 1a can be read as:
$Output = X_1 X_2 + \overline{X_3} X_4 + \overline{X_5} X_6 X_2$
where $X_1$, $X_2$, $X_4$ and $X_6$ denote excitatory inputs, and $X_3$ and $X_5$ represent inhibitory inputs. Each input is either a logical zero or one. Thus, the signal to the cell body (soma) becomes 1 if and only if $X_1 = 1$ and $X_2 = 1$, or $X_3 = 0$ and $X_4 = 1$, or $X_5 = 0$ and $X_6 = 1$ and $X_2 = 1$. Furthermore, the γ cell receives excitatory and inhibitory synapses distributed from the tip to the soma, as shown in Figure 1b, thus reading:
$Output = \overline{X_1} X_2 X_3$
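The dendritic logic of the two idealized cells above can be written as a minimal sketch (the function names are ours, not from the paper): each dendritic branch ANDs its synaptic inputs, inhibition acts as NOT, and the branch point ORs the branches.

```python
def delta_cell(x1, x2, x3, x4, x5, x6):
    """Idealized delta cell: Output = X1*X2 + (NOT X3)*X4 + (NOT X5)*X6*X2."""
    branch_a = x1 and x2                 # X1 AND X2
    branch_b = (not x3) and x4           # (NOT X3) AND X4
    branch_c = (not x5) and x6 and x2    # (NOT X5) AND X6 AND X2
    return int(branch_a or branch_b or branch_c)  # branch point = OR

def gamma_cell(x1, x2, x3):
    """Idealized gamma cell: Output = (NOT X1)*X2*X3."""
    return int((not x1) and x2 and x3)
```

For example, `delta_cell(1, 1, 1, 0, 1, 0)` fires through the first branch, while `delta_cell(0, 0, 0, 1, 1, 0)` fires through the second.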
Several experimental examples, such as direction selectivity in retinal ganglion cells [32] and coincidence detection in the auditory system [33], have provided strong circumstantial support for Koch's model [27]. By taking the nonlinearity of synapses and the nonlinear interactions among them into consideration, researchers proposed a learnable dendritic neuron model (DNM) [9,10,11]. The DNM has been successfully applied to many practical problems, such as liver disorder analysis, breast cancer classification and financial time-series prediction [34,35,36,37].

2.2. Local Planar Orientation-Detective Neuron

In this section, we describe the structure of the DNM for orientation detection in detail. For simplicity, we consider a composition of only four neurons for orientation detection. Usually, the receptive field can be divided into a two-dimensional $M \times N$ grid of regions, each corresponding to a minimal visible region. For simplicity, we consider binary images. When light falls on a region, an electrical signal (for example, one) is transferred through its photoreceptor and ON-OFF response cells to ganglion cells, and the ganglion cells perform various visual information processing steps [38]. By introducing horizontal cells, grayscale and color images can also be treated easily. Here, we assume that there are simple neurons that can detect the specific orientation of a line. The input signal of region $(i, j)$ is expressed by $X_{i,j}$, where $i$ and $j$ correspond to the position in the two-dimensional receptive field. Thus, for an input signal $X_{i,j}$, if we consider only the eight regions adjacent to $X_{i,j}$, we can implement four local orientation-detective neurons, each of which selectively picks up the pair of adjacent regions along its preferred direction.
Figure 2 shows an idealized γ cell for zero-degree planar orientation detection. We need to consider only $X_{i,j}$, $X_{i-1,j}$ and $X_{i+1,j}$. If and only if $X_{i,j}$, $X_{i-1,j}$ and $X_{i+1,j}$ are all equal to 1, the γ neuron is activated and the output of the soma is equal to 1. Similarly, the local planar orientation-detective neurons for the other planar orientations can also be implemented.
Figure 3 shows the structures of the four local planar orientation-detective neurons. For example, the 45-degree detective neuron at region $(i, j)$ receives the adjacent inputs $X_{i-1,j+1}$ and $X_{i+1,j-1}$ besides $X_{i,j}$; the inputs to the 90-degree (vertical detection) neuron at $(i, j)$ come from $X_{i,j-1}$ and $X_{i,j+1}$ besides $X_{i,j}$; and the inputs to the 135-degree detective neuron at region $(i, j)$ are $X_{i-1,j-1}$, $X_{i,j}$ and $X_{i+1,j+1}$. Therefore, all planar orientation-detective neurons can be realized by γ-like cells. For simplicity, the size of the window (pixel matrix) is $3 \times 3$, so we can only select these four planar orientations. If the size of the window increases, more planar orientations can be detected; for example, a $5 \times 5$ window could detect 8 planar orientations, and so on.
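As an illustration, the four local planar orientation-detective neurons can be sketched as AND gates over a pixel and its two neighbours along the preferred axis. The offset table below follows the paper's $(i, j)$ indexing convention; the function name and the boundary handling (neurons at image borders simply do not fire) are our own assumptions.

```python
import numpy as np

# Offsets (di, dj) of the two selected adjacent inputs per orientation:
# 0 degrees pairs X[i-1,j] with X[i+1,j]; 90 degrees pairs X[i,j-1] with X[i,j+1].
OFFSETS = {
    0:   ((-1, 0), (1, 0)),
    45:  ((-1, 1), (1, -1)),
    90:  ((0, -1), (0, 1)),
    135: ((-1, -1), (1, 1)),
}

def local_neuron(image, i, j, angle):
    """Gamma-cell-like detector: returns 1 only when the centre input and
    both neighbours along the preferred axis are all 1."""
    h, w = image.shape
    if image[i, j] != 1:
        return 0
    for di, dj in OFFSETS[angle]:
        ni, nj = i + di, j + dj
        if not (0 <= ni < h and 0 <= nj < w) or image[ni, nj] != 1:
            return 0
    return 1
```

For a bar running along the first index, the 0-degree neuron at an interior bar pixel fires while the 90-degree neuron stays silent.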

2.3. Global Planar Orientation Detection

As mentioned above, the local planar orientation-detective neurons in the visual system respond to interactions of the effects of light falling on their receptive field: a local planar orientation-detective neuron extracts simple information on one planar orientation at one position in the receptive field by interacting the input of that position, received from photoreceptors and ON-OFF response cells, with its neighboring inputs. Here, we assume that this local planar orientation information can be used for judging the global planar orientation. Thus, we can simply measure the strength of the activities of all local planar orientation-detective neurons over the receptive field (for example, the number of fired neurons), sum the neurons' outputs for each planar orientation, and take the planar orientation with the maximal sum as the global planar orientation. In order to measure the strength of these activities, yielding a total of $M \times N \times 4$ pieces of local planar orientation information for a two-dimensional receptive field ($M \times N$), we have four possible schemes:
  • One-neuron scheme: we assume that only one local planar orientation-detective neuron is available; it scans every region $(i, j)$ for $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, N$ over the two-dimensional receptive field ($M \times N$), and at every position examines the two adjacent positions in each of the four directions in turn, thus yielding $M \times N \times 4$ pieces of local planar orientation information;
  • Multi-neuron scheme: we assume, for simplicity, that there are four local planar orientation-detective neurons, and that they are used to scan every region $(i, j)$ for $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, N$ over the two-dimensional receptive field ($M \times N$), thus yielding $M \times N \times 4$ pieces of local planar orientation information;
  • Neuron-array scheme: we assume, for simplicity, that the four local planar orientation-detective neurons are arrayed in an $m \times n$ grid ($m < M$ and $n < N$), and that the arrayed neurons slide over the two-dimensional receptive field ($M \times N$) without overlapping, thus yielding $M \times N \times 4$ pieces of local planar orientation information;
  • Full-neuron scheme: we assume that every input corresponding to region $(i, j)$ of a two-dimensional receptive field ($M \times N$) has its own local planar orientation-detective neurons; that is, there are $M \times N \times 4$ local planar orientation-detective neurons in total.
Within the local receptive field, the local planar orientation-detective neurons can thus extract elementary local planar orientation information, which is then used to judge the global planar orientation. To aid understanding of the mechanism by which the system performs planar orientation detection, we used a simple two-dimensional ($5 \times 5$) image of a bar at 45 degrees, as shown in Figure 4. Without loss of generality, we use the multi-neuron scheme, in which the four local planar orientation-detective neurons scan every position from (1, 1) to (5, 5) over the two-dimensional receptive field ($5 \times 5$) and yield the local planar orientation of each position.
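The multi-neuron scheme can be sketched end-to-end as follows. This is an illustrative, self-contained reimplementation (function names and the boundary handling are our assumptions): the four detectors scan every region, the activations are summed per orientation, and the orientation with the maximal count is taken as the global judgement.

```python
import numpy as np

# Offsets of the two selected adjacent inputs per orientation,
# following the paper's (i, j) indexing convention.
OFFSETS = {
    0:   ((-1, 0), (1, 0)),
    45:  ((-1, 1), (1, -1)),
    90:  ((0, -1), (0, 1)),
    135: ((-1, -1), (1, 1)),
}

def detect_global_orientation(image):
    """Scan all regions with the four local detectors, count firings per
    orientation, and return (argmax orientation, per-orientation counts)."""
    h, w = image.shape
    counts = dict.fromkeys(OFFSETS, 0)
    for i in range(h):
        for j in range(w):
            if image[i, j] != 1:
                continue
            for angle, neighbours in OFFSETS.items():
                if all(
                    0 <= i + di < h and 0 <= j + dj < w
                    and image[i + di, j + dj] == 1
                    for di, dj in neighbours
                ):
                    counts[angle] += 1
    return max(counts, key=counts.get), counts
```

For a bar of length 5 along the first index, the 0-degree detector fires at the three interior bar pixels and the others stay silent, so the global judgement is 0 degrees.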

2.4. Artificial Visual System (AVS)

The visual system comprises the sensory organs (the eyes) and the connecting pathways through to the visual cortex and other parts of the central nervous system. As mentioned above, in the visual system, local visual feature-detective neurons such as the local planar orientation-detective neurons extract elementary local visual features such as local planar orientation information. These features are then combined by the subsequent layers in order to detect higher-order features, for example, the global planar orientation of an object. Based on this mechanism, we have developed a generalized artificial visual system (AVS), as shown in Figure 5. Neurons in layer 1 (also called the local feature-detective neuron (LFDN) layer), corresponding to neurons in the V1 cortical area, such as the local planar orientation-detective neurons, extract elementary local visual features, for example, the local planar orientation information. These features are then sent to the subsequent layers (also called the global feature-detective neuron (GFDN) layers), corresponding to the middle temporal (MT) area of the primate brain, in order to detect higher-order features, for example, the global planar orientation of an object. The GFDN part can be as simple as a summation of the outputs of the layer-1 neurons, for example, for planar orientation detection, motion direction detection, motion speed detection and the perception of binocular vision; it can comprise one layer, two layers corresponding to V4 and V6, or three layers corresponding to V2, V3 and V5; or it can even be a multi-layer network, for example, for pattern recognition. It is worth noting that the AVS is a feedforward neural network, and any feedforward neural network can be trained by means of the error backpropagation method.
The difference between the AVS and traditional multi-layer neural networks and convolutional neural networks is that the local feature-detective neurons (LFDNs) in layer 1 of the AVS can be designed in advance according to prior knowledge, for example, how many neurons and what kind of neurons are needed, and in most cases they do not need to undergo a learning process. Even when learning is needed, the AVS can start from a very good initial value, which can greatly improve the efficiency and speed of learning. Furthermore, hardware implementation of the AVS is much simpler and more effective than that of a CNN, and for most applications the AVS requires only simple logical calculations. Finally, the AVS is based entirely on the mechanism of the visual system, so it is more reasonable than black-box systems such as neural networks and convolutional neural networks.
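For concreteness, the two-stage AVS for planar orientation detection can be sketched as a layer of LFDN feature maps followed by a summation-based GFDN stage. The code below is a hypothetical sketch under our own naming and interior-only scanning assumptions, not the authors' implementation.

```python
import numpy as np

# Paired neighbour offsets per preferred orientation (paper's (i, j) convention).
OFFSETS = {
    0: ((-1, 0), (1, 0)), 45: ((-1, 1), (1, -1)),
    90: ((0, -1), (0, 1)), 135: ((-1, -1), (1, 1)),
}

def lfdn_layer(image):
    """Layer 1 (LFDN): one binary activation map per orientation,
    computed over interior pixels only."""
    h, w = image.shape
    maps = {a: np.zeros((h, w), dtype=int) for a in OFFSETS}
    for a, nbrs in OFFSETS.items():
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if image[i, j] and all(image[i + di, j + dj] for di, dj in nbrs):
                    maps[a][i, j] = 1
    return maps

def gfdn_layer(maps):
    """GFDN stage: sum each feature map; the maximal orientation wins."""
    sums = {a: int(m.sum()) for a, m in maps.items()}
    return max(sums, key=sums.get)
```

Because the LFDN layer needs no learning, only the GFDN stage would ever require training for more complex tasks such as pattern recognition.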

3. Results

In order to prove the effectiveness of our proposed mechanism and the AVS based on it, we randomly generated a large number of different 32 × 32 pixel test images. We scanned every pixel of the two-dimensional images with a 3 × 3 window, used the four planar orientation-detective neurons to extract the local planar orientation information at every pixel, and judged the global planar orientation based on the local planar orientation information. First, we chose four bars in three different planar orientations to test the proposed mechanism. The first two bars were set at 135° and had different length–width ratios, whereas the remaining two bars were horizontal and vertical, respectively. In all computer simulations, we scanned with a 3 × 3 window and a step size of 1. The data from each scanning window were transferred to the four planar orientation-detective neurons, and we counted whenever the corresponding neurons fired. The experimental results are shown in Figure 6, Figure 7, Figure 8 and Figure 9. We set the fired neurons to 1 and the unfired neurons to 0, used a simple function diagram to represent the output process of the four neurons, and labeled the types of detective neurons in each graph. Finally, we recorded the total number of activations and picked their maximum. The activations of the four kinds of neurons are represented in the graph: the horizontal coordinate indicates the serial number of the corresponding scanning window, and the vertical coordinate indicates whether the neurons were activated or not. The number of activations is given, and the orange box indicates the maximum value, which is our final judgment of the global orientation. From Figure 6, we can see that the neurons for 0°, 45°, 90° and 135° fired 38, 0, 39 and 74 times, respectively. Thus, a planar orientation of 135° can be inferred.
Similarly, the 135°, horizontal (0°) and vertical (90°) orientations can also be inferred from Figure 7, Figure 8 and Figure 9, respectively. Finally, we selected seven standard rectangles at 90° with different length–width ratios, and used a bar chart to show the activation rates for each rectangle. The bar charts of these experiments are presented in Figure 10, where the x-axis denotes the length–width ratios and the y-axis represents the activation rates of the four local planar orientation-detective neurons, with the length of the bar fixed at 30 pixels. In this experiment, we found that the activation rate decreased as the length–width ratio decreased: the closer the rectangle was to a square, the more difficult it was to identify its planar orientation. For a square, with a length–width ratio of 1:1, the firing rates of the neurons for 0° and 90° were the same, because even humans cannot distinguish whether a square is at 0° or 90°. When the length–width ratio became 1:2, the detective neuron for the 0° planar orientation fired the most, thus indicating a 0° orientation detection. This suggests that our proposed mechanism is very close to the orientation detection mechanism of the human visual system. Based on all of these computer experiments, we found that our proposed mechanism can accurately detect the planar orientations of objects with different positions, length–width ratios and sizes. Therefore, we can conclude that our proposed mechanism is highly accurate in detecting objects in a specific planar orientation, which also suggests that our hypothesis about local planar orientation-detective neurons and the global planar orientation inference system is possibly correct.
In order to compare the planar orientation detection performance of the AVS with other methods, we selected a CNN, because these networks are widely applied with great success in the detection, segmentation and recognition of objects in images. The CNN used in the experiments comprises seven layers: (1) a convolutional layer with 30 feature maps connected to a 3 × 3 neighborhood in the input; (2) a ReLU layer; (3) a pooling layer with 2 × 2 maximum pooling; (4) an affine layer fully connecting 1024 to 720 units; (5) a ReLU layer; (6) an affine layer fully connecting 720 to 4 units; and (7) a Softmax layer. The inputs were 32 × 32 pixel images. The training and test sets comprised 15,000 and 5000 images, respectively. The sizes of the objects ranged from 3 to 100 pixels. Learning was performed with backpropagation using the Adam optimizer. All computer experiments were conducted on a PC with an AMD Ryzen 5 3500 6-core processor, and the computational time was measured. The computational times of the CNN for learning and testing were 19.35 s (30 epochs) and 1.91 s, respectively. In contrast, because the AVS does not need learning, its computational times for learning and testing were only 0 s and 0.66 s, respectively, showing that it is faster than the CNN. The identification accuracy of both the CNN and the AVS is summarized in Table 1. As expected, the CNN learned the planar orientation detection task very well and reached 99.85% identification accuracy; however, it did not perform as well as the AVS, which showed 100% accuracy. In order to compare the noise resistance of the AVS and the CNN, we randomly added noise pixels to the non-object area; these noise pixels were independent of each other and not connected to the object. Figure 11 shows an example image with 0, 1, 5, 10, 25, 50, 100 and 150 noise pixels. We then used both the AVS and the CNN to detect the planar orientations of these noisy object images.
Table 1 also shows the identification accuracy of both systems under noise. Even if only one noise pixel is added, the CNN's identification accuracy drops immediately from 99.85% to 97.89%. As the number of noise pixels increased to 150, the CNN's identification accuracy dropped dramatically, to below 30%. In contrast, the AVS always maintained 100% identification accuracy, showing superior noise resistance.
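The noise-generation step can be approximated with a short sketch that flips randomly chosen background pixels to 1. One simplifying assumption: unlike the experiments described above, this sketch does not enforce that noise pixels are disconnected from the object or from each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, k):
    """Set k randomly chosen background (zero) pixels to 1,
    leaving the object pixels untouched."""
    noisy = image.copy()
    background = np.argwhere(noisy == 0)          # coordinates of zero pixels
    picks = rng.choice(len(background), size=k, replace=False)
    for i, j in background[picks]:
        noisy[i, j] = 1
    return noisy
```

Running the AVS and the CNN on `add_noise(image, k)` for increasing k reproduces the kind of robustness comparison reported in Table 1.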

4. Conclusions

This paper describes a mechanism for detecting global planar orientation by introducing local planar orientation-detective neurons to compute local planar orientation, together with a scheme to judge the global planar orientation based on the local planar orientation information. That is, within a local receptive field, the local planar orientation-detective neurons extract elementary visual features such as planar orientation. These features are then combined by the subsequent layers in order to detect higher-order features, for example, the global planar orientation. The proposed mechanism has many desirable properties that would be useful in any visual perception system and that seem to be an important part of the human visual system. The mechanism can be used as a framework for understanding many other basic phenomena in visual perception, including the perception of motion direction, the perception of motion speed and the perception of binocular vision. Furthermore, the mechanism provides a functional architecture for visual computation in the primary visual cortex and offers insights into how visual inputs are fragmented and reassembled at different stages of the visual system and how functions are divided across different elements of the visual circuit. This mechanism of the primary visual cortex as a sensory system might also help us to understand how other sensory systems, such as olfaction, taste and touch, are encoded at the level of cortical circuits. Although the mechanism is based upon a highly simplified model and ignores most of what is known about the detailed functioning of the visual system and our brain, it does provide a way to explain many known neurobiological visual phenomena in a quantitative manner, and might lead neuroanatomists and neurophysiologists to reexamine their observations, looking for corresponding structures and functions.
Conversely, advances in the biological sciences might also lead to a modified and elaborated mechanism.
Based on this mechanism, we developed an artificial visual system (AVS). In order to compare the performance of the AVS and the CNN, we applied the AVS without learning and the CNN with learning to planar orientation detection and found that the AVS performed much better than the CNN in terms of accuracy and noise resistance, as well as in all other aspects. The AVS can be easily applied to other visual perceptions, such as the perception of motion direction, the perception of motion speed and the perception of binocular vision, and even to other sensory systems, such as olfaction, taste and touch. Therefore, we believe that the AVS is very likely to replace the CNN in the near future.
The most important novelties of this paper are that (1) we first proposed a mechanism to explain planar orientation detection in a quantitative manner and successfully verified it through computer simulations; (2) we predicted that local planar orientation-detective neurons might have a γ cell-like morphological shape; and (3) based on the proposed mechanism of planar orientation detection, we developed a generalized artificial visual system (AVS) and showed its superiority to a traditional CNN. In this paper, for the sake of simplicity, we only discussed the detection of four planar orientations, but the mechanism can be extended to more planar orientations simply by increasing the size of the local receptive field. Similarly, although we have limited our discussion to binary images, we can easily extend the mechanism to grayscale and color images by introducing horizontal cells. Although our mechanism can explain most biological experimental results, the proposal has not yet been directly verified by biological experiments. In future work, our model needs to be confirmed through biological experiments, which may lead to a modified and elaborated model.

Author Contributions

Conceptualization, Z.T. and Y.T.; methodology, J.Y.; software, B.L. and Y.Z.; validation, J.Y.; writing—original draft preparation, J.Y.; writing—review and editing, J.Y. and Z.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by JSPS KAKENHI Grant No. 19K12136.

Acknowledgments

This work was supported by JSPS KAKENHI Grant No. 19K12136.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AVS	Artificial Visual System
CNN	Convolutional neural network
ANN	Artificial neural network
LFDN	Local feature-detective neuron
GFDN	Global feature-detective neuron

References

  1. Hubel, D.H.; Wiesel, T.N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 1962, 160, 106–154.
  2. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 1959, 148, 574–591.
  3. Gilbert, C.D.; Li, W. Adult visual cortical plasticity. Neuron 2012, 75, 250–264.
  4. Wilson, D.E.; Whitney, D.E.; Scholl, B.; Fitzpatrick, D. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex. Nat. Neurosci. 2016, 19, 1003–1009.
  5. Roth, Z.N.; Heeger, D.J.; Merriam, E.P. Stimulus vignetting and orientation selectivity in human visual cortex. eLife 2018, 7, e37241.
  6. Li, M.; Song, X.M.; Xu, T.; Hu, D.; Li, C.Y. Subdomains within orientation columns of primary visual cortex. Sci. Adv. 2019, 5, eaaw0807.
  7. Grossberg, S. Cortical Dynamics of Figure-Ground Separation in Response to 2D Pictures and 3D Scenes: How V2 Combines Border Ownership, Stereoscopic Cues, and Gestalt Grouping Rules. Front. Psychol. 2016, 6, 2054.
  8. Tang, S.; Lee, T.S.; Li, M.; Zhang, Y.; Jiang, H. Complex Pattern Selectivity in Macaque Primary Visual Cortex Revealed by Large-Scale Two-Photon Imaging. Curr. Biol. 2017, 28, 38–48.
  9. Tang, Z.; Tamura, H.; Kuratu, M.; Ishizuka, O.; Tanno, K. A model of the neuron based on dendrite mechanisms. Electron. Commun. Japan Part III Fundam. Electron. Sci. 2001, 84, 11–24.
  10. Todo, Y.; Tamura, H.; Yamashita, K.; Tang, Z. Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Netw. 2014, 60, 96–103.
  11. Todo, Y.; Tang, Z.; Todo, H.; Ji, J.; Yamashita, K. Neurons with multiplicative interactions of nonlinear synapses. Int. J. Neural Syst. 2019, 29, 1950012.
  12. Hassoun, M.H. Fundamentals of Artificial Neural Networks; MIT Press: Cambridge, MA, USA, 1995.
  13. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  14. Al-Shayea, Q.K. Artificial neural networks in medical diagnosis. Int. J. Comput. Sci. Issues 2011, 8, 150–154.
  15. Khashei, M.; Bijari, M. An artificial neural network (p, d, q) model for timeseries forecasting. Expert Syst. Appl. 2010, 37, 479–489.
  16. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
  17. London, M.; Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 2005, 28, 503–532.
  18. Agmon-Snir, H.; Carr, C.E.; Rinzel, J. The role of dendrites in auditory coincidence detection. Nature 1998, 393, 268–272.
  19. Anderson, J.; Binzegger, T.; Kahana, O.; Martin, K.; Segev, I. Dendritic asymmetry cannot account for directional responses of neurons in visual cortex. Nat. Neurosci. 1999, 2, 820–824.
  20. Artola, A.; Bröcher, S.; Singer, W. Different voltage-dependent thresholds for inducing long-term depression and long-term potentiation in slices of rat visual cortex. Nature 1990, 347, 69–72.
  21. Euler, T.; Detwiler, P.B.; Denk, W. Directionally selective calcium signals in dendrites of starburst amacrine cells. Nature 2002, 418, 845–852.
  22. Magee, J.C. Dendritic integration of excitatory synaptic input. Nat. Rev. Neurosci. 2000, 1, 181–190.
  23. Single, S.; Borst, A. Dendritic integration and its role in computing image velocity. Science 1998, 281, 1848–1850.
  24. Crandall, S.R. Dendritic Properties of Inhibitory Thalamic Neurons: Implications in Sub-Cortical Sensory Processing; University of Illinois at Urbana-Champaign: Champaign, IL, USA, 2012.
  25. Dringenberg, H.C.; Hamze, B.; Wilson, A.; Speechley, W.; Kuo, M.C. Heterosynaptic facilitation of in vivo thalamocortical long-term potentiation in the adult rat visual cortex by acetylcholine. Cereb. Cortex 2007, 17, 839–848.
  25. Dringenberg, H.C.; Hamze, B.; Wilson, A.; Speechley, W.; Kuo, M.C. Heterosynaptic facilitation of in vivo thalamocortical long-term potentiation in the adult rat visual cortex by acetylcholine. Cereb. Cortex 2007, 17, 839–848. [Google Scholar] [CrossRef]
  26. Koch, C.; Poggio, T.; Torre, V. Nonlinear interactions in a dendritic tree: Localization, timing, and role in information processing. Proc. Natl. Acad. Sci. USA 1983, 80, 2799–2802. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Koch, C.; Poggio, T.; Torre, V. Retinal ganglion cells: A functional interpretation of dendritic morphology. Philos. Trans. R. Soc. London. B Biol. Sci. 1982, 298, 227–263. [Google Scholar] [PubMed]
  28. Engelbrecht, A.P. Computational Intelligence: An Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  29. Bi, G.Q.; Poo, M.M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 1998, 18, 10464–10472. [Google Scholar] [CrossRef] [PubMed]
  30. Fortier, P.A.; Bray, C. Influence of asymmetric attenuation of single and paired dendritic inputs on summation of synaptic potentials and initiation of action potentials. Neuroscience 2013, 236, 195–209. [Google Scholar] [CrossRef]
  31. Holtmaat, A.; Svoboda, K. Experience-dependent structural synaptic plasticity in the mammalian brain. Nat. Rev. Neurosci. 2009, 10, 647–658. [Google Scholar] [CrossRef]
  32. Taylor, W.R.; He, S.; Levick, W.R.; Vaney, D.I. Dendritic computation of direction selectivity by retinal ganglion cells. Science 2000, 289, 2347–2350. [Google Scholar] [CrossRef]
  33. Segev, I.; Rall, W. Excitable dendrites and spines: Earlier theoretical insights elucidate recent direct observations. Trends Neurosci. 1998, 21, 453–460. [Google Scholar] [CrossRef]
  34. Ji, J.; Gao, S.; Cheng, J.; Tang, Z.; Todo, Y. An approximate logic neuron model with a dendritic structure. Neurocomputing 2016, 173, 1775–1783. [Google Scholar] [CrossRef]
  35. Jiang, T.; Gao, S.; Wang, D.; Ji, J.; Todo, Y.; Tang, Z. A neuron model with synaptic nonlinearities in a dendritic tree for liver disorders. IEEJ Trans. Electr. Electron. Eng. 2017, 12, 105–115. [Google Scholar] [CrossRef]
  36. Zhou, T.; Gao, S.; Wang, J.; Chu, C.; Todo, Y.; Tang, Z. Financial time series prediction using a dendritic neuron model. Knowl.-Based Syst. 2016, 105, 214–224. [Google Scholar] [CrossRef]
  37. Sekiya, Y.; Aoyama, T.; Hiroki, T.; Zheng, T. Learningpossibility that neuron model can recognize depth-rotation in three dimension. In Proceedings of the 1st International Conference on Control Automation and Systems, Tokyo, Japan, 25 July 2001; p. 149. [Google Scholar]
  38. Kepecs, A.; Wang, X.J.; Lisman, J. Bursting neurons signal input slope. J. Neurosci. 2002, 22, 9053–9062. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Structure of the dendritic neuron model with inhibitory input () and excitatory inputs (•). (a) δ cell and (b) γ cell.
Figure 2. A local planar orientation-detective neuron with a γ cell for 0°.
Figure 3. The local planar orientation-detective neurons. (a) 0°, (b) 45°, (c) 90° and (d) 135° neurons.
Figure 4. Diagram of the judgment of global planar orientation detection by the local planar orientation-detective neurons.
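The judgment that Figure 4 diagrams, in which the global planar orientation is inferred by counting how many local detectors of each orientation fire across the image and taking the orientation with the most activations, can be sketched in a few lines. The sketch below is a minimal illustration under assumptions, not the paper's implementation: the three-pixel receptive fields and the AND-style firing rule merely stand in for the dendritic γ-cell computation.

```python
# Neighbour offsets (dy, dx) for four local planar orientation
# detectors, one per angle in Figure 3. Assumption: a detector centred
# on an active pixel fires when both neighbours along its orientation
# are also active (an AND standing in for the gamma-cell dendrite).
OFFSETS = {
    0:   [(0, -1), (0, 1)],    # horizontal neighbours
    45:  [(-1, 1), (1, -1)],   # rising diagonal "/"
    90:  [(-1, 0), (1, 0)],    # vertical neighbours
    135: [(-1, -1), (1, 1)],   # falling diagonal "\"
}

def detect_orientation(img):
    """Count local activations per angle; the global orientation is the
    angle whose local detectors fired most often (Figure 4)."""
    h, w = len(img), len(img[0])
    counts = {angle: 0 for angle in OFFSETS}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not img[y][x]:
                continue
            for angle, nbrs in OFFSETS.items():
                if all(img[y + dy][x + dx] for dy, dx in nbrs):
                    counts[angle] += 1
    return max(counts, key=counts.get), counts

# A falling-diagonal (135-degree) bar in a 9x9 binary image.
img = [[0] * 9 for _ in range(9)]
for i in range(9):
    img[i][i] = 1

angle, counts = detect_orientation(img)
```

On this bar only the 135° detectors fire, so the winner-take-all vote over per-angle counts recovers the global orientation; noise pixels add few coincident three-pixel runs, which is consistent with the robustness reported in Table 1.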
Figure 5. A generalized artificial visual system (AVS) with a local feature-detective neuron (LFDN) layer and one or many global feature-detective neuron (GFDN) layers.
Figure 6. Computer experiment on the mechanism for detecting a 135° bar with a width of 3.
Figure 7. Computer experiment on the mechanism for detecting a 135° bar with a width of 7.
Figure 8. Computer experiment on the mechanism for detecting a horizontal (0°) bar.
Figure 9. Computer experiment on the mechanism for detecting a vertical (90°) bar.
Figure 10. The activation of the four neurons for a 90° bar with different length–width ratios.
Figure 11. The example images with 0 (a), 1 (b), 5 (c), 10 (d), 25 (e), 50 (f), 100 (g) and 150 (h) noise pixels.
Table 1. Comparison of identification accuracy between CNN and AVS.
| Noise pixels | 0      | 1      | 5      | 10     | 25     | 50     | 100    | 150    |
|--------------|--------|--------|--------|--------|--------|--------|--------|--------|
| CNN          | 99.85% | 97.89% | 59.28% | 51.42% | 38.04% | 35.42% | 30.68% | 29.38% |
| AVS          | 100%   | 100%   | 100%   | 100%   | 100%   | 100%   | 100%   | 100%   |
Ye, J.; Todo, Y.; Tang, Z.; Li, B.; Zhang, Y. Artificial Visual System for Orientation Detection. Electronics 2022, 11, 568. https://doi.org/10.3390/electronics11040568
