Table of Contents

Entropy, Volume 19, Issue 9 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: In order to understand the logical architecture of living systems, von Neumann introduced the idea [...]
Open Access Editorial: Nonequilibrium Phenomena in Confined Systems
Entropy 2017, 19(9), 507; https://doi.org/10.3390/e19090507
Received: 19 September 2017 / Revised: 19 September 2017 / Accepted: 19 September 2017 / Published: 20 September 2017
Cited by 1 | PDF Full-text (149 KB) | HTML Full-text | XML Full-text
Abstract
Confined systems exhibit a large variety of nonequilibrium phenomena. In this special issue, we have collected a limited number of papers that were presented during the XXV Sitges Conference on Statistical Mechanics, devoted to “Nonequilibrium phenomena in confined systems”. [...] Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open Access Article: An Efficient Advantage Distillation Scheme for Bidirectional Secret-Key Agreement
Entropy 2017, 19(9), 505; https://doi.org/10.3390/e19090505
Received: 30 July 2017 / Revised: 30 August 2017 / Accepted: 14 September 2017 / Published: 18 September 2017
PDF Full-text (579 KB) | HTML Full-text | XML Full-text
Abstract
The classical secret-key agreement (SKA) scheme includes three phases: (a) advantage distillation (AD), (b) reconciliation, and (c) privacy amplification. The transmission rate is defined as the ratio between the number of raw key bits obtained in the AD phase and the number of bits transmitted during AD. The unidirectional SKA, whose transmission rate is 0.5, can be realized by using the original two-way wiretap channel as the AD phase. In this paper, we establish an efficient bidirectional SKA whose transmission rate is nearly 1 by modifying the two-way wiretap channel and using the modified channel as the AD phase. The bidirectional SKA can be extended to multiple rounds of SKA with the same performance and transmission rate. For multiple rounds of bidirectional SKA, we provide the bit error rate (BER) performance of the main and eavesdropper's channels and the secret-key capacity. It is shown that the BER of the main channel is lower than that of the eavesdropper's channel, and we prove that the transmission rate is nearly 1 when the number of rounds is large. Moreover, the secret-key capacity C_s ranges from 0.04 to 0.1 as the channel error probability ranges from 0.01 to 0.15 in the binary symmetric channel (BSC). In the additive white Gaussian noise (AWGN) channel, the secret-key capacity approaches 0.3 as the signal-to-noise ratio increases. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
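
As background for the AD phase, here is a minimal simulation of one round of the classic repetition-based advantage distillation over binary symmetric channels (the textbook scheme, not the modified two-way wiretap protocol of the paper; the block length and error probabilities eps_b, eps_e are illustrative assumptions). It shows how Bob's post-selection drives his BER below Eve's:

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc(bits, eps):
    """Flip each bit independently with probability eps (binary symmetric channel)."""
    return bits ^ (rng.random(bits.shape) < eps)

def ad_round(n_blocks, block_len, eps_b, eps_e):
    """One repetition-based advantage-distillation round.

    Alice repeats a random bit block_len times and sends it through
    independent BSCs to Bob and Eve.  Bob accepts a block only when all
    received bits agree; Eve decodes every block by majority vote.
    Returns Bob's and Eve's bit error rates on the accepted blocks.
    """
    c = rng.integers(0, 2, n_blocks)                    # Alice's raw key bits
    blocks = np.repeat(c[:, None], block_len, axis=1)
    rx_b = bsc(blocks, eps_b)
    rx_e = bsc(blocks, eps_e)
    accept = (rx_b == rx_b[:, :1]).all(axis=1)          # Bob keeps unanimous blocks
    bob = rx_b[accept, 0]
    eve = rx_e[accept].sum(axis=1) * 2 > block_len      # majority vote
    return np.mean(bob != c[accept]), np.mean(eve != c[accept])

ber_b, ber_e = ad_round(100_000, 5, eps_b=0.1, eps_e=0.1)
print(f"Bob BER {ber_b:.5f}  vs  Eve BER {ber_e:.5f}")  # Bob gains an advantage
```

Note that this classic scheme transmits block_len bits per raw key bit, i.e., its transmission rate is at most 1/block_len; that inefficiency is exactly what the paper's modified two-way protocol, with rate near 1, is designed to avoid.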
Open Access Article: Second Law Analysis for Couple Stress Fluid Flow through a Porous Medium with Constant Heat Flux
Entropy 2017, 19(9), 498; https://doi.org/10.3390/e19090498
Received: 16 August 2017 / Revised: 4 September 2017 / Accepted: 11 September 2017 / Published: 18 September 2017
Cited by 1 | PDF Full-text (1758 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, entropy generation in the flow and heat transfer of a couple stress fluid through an infinite inclined channel embedded in a saturated porous medium is presented. Due to the channel geometry, asymmetrical slip conditions are imposed on the channel walls. The upper wall of the channel is subjected to a constant heat flux while the lower wall is insulated. The equations governing the fluid flow are formulated, non-dimensionalized, and solved using the Adomian decomposition method. The Adomian series solutions for the velocity and temperature fields are then used to compute the entropy generation rate and the inherent heat irreversibility in the flow domain. The effects of various fluid parameters are presented graphically and discussed extensively. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
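
As a reminder of how the Adomian decomposition method generates a series solution, here is a minimal symbolic sketch for the toy problem u' = -u^2, u(0) = 1 (a stand-in; the paper's coupled couple stress momentum and energy equations are not reproduced here):

```python
import sympy as sp

x, lam = sp.symbols("x lambda_")

def adomian_series(n_terms):
    """Solve u' = -u**2, u(0) = 1 by Adomian decomposition.

    u = sum_k u_k, where u_0 carries the initial condition and
    u_{k+1} = -Integral(A_k), with A_k the Adomian polynomial of the
    nonlinearity N(u) = u**2:
    A_k = (1/k!) d^k/dlam^k N(sum_j lam**j u_j) evaluated at lam = 0.
    """
    u = [sp.Integer(1)]                                   # u_0 = u(0)
    for k in range(n_terms - 1):
        s = sum(lam**j * u[j] for j in range(len(u)))
        A_k = sp.diff(s**2, lam, k).subs(lam, 0) / sp.factorial(k)
        u.append(sp.integrate(-A_k, (x, 0, x)))
    return sp.expand(sum(u))

print(adomian_series(5))   # 1 - x + x**2 - x**3 + x**4: partial sums of 1/(1+x)
```

The partial sums reproduce the exact solution u = 1/(1+x) term by term, which is the same mechanism the paper uses on its velocity and temperature fields.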
Open Access Article: Traction Inverter Open Switch Fault Diagnosis Based on Choi–Williams Distribution Spectral Kurtosis and Wavelet-Packet Energy Shannon Entropy
Entropy 2017, 19(9), 504; https://doi.org/10.3390/e19090504
Received: 4 September 2017 / Revised: 10 September 2017 / Accepted: 11 September 2017 / Published: 16 September 2017
PDF Full-text (5347 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new approach for the detection and location of open switch faults in the closed-loop, inverter-fed, vector-controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on the Choi–Williams distribution (CWD) is a statistical tool that can effectively indicate the presence of transients and their locations in the frequency domain. Wavelet-packet energy Shannon entropy (WPESE) is well suited to detecting transient changes in complex, non-linear, non-stationary signals. Based on analyses of the currents under normal and fault conditions, CWD-based SK and WPESE are combined with the DC component method: the former two are used for fault detection, and the DC component method is used for fault localization. This approach can identify the specific locations of faulty Insulated Gate Bipolar Transistors (IGBTs) with high accuracy, and it requires no additional devices. Experiments on the RT-LAB platform are carried out, and the experimental results verify the feasibility and effectiveness of the diagnosis method. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
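
The WPESE feature is straightforward to reproduce; a sketch using PyWavelets is shown below (the wavelet choice db4, decomposition depth 3, and the injected transient are assumptions for illustration, not the paper's settings):

```python
import numpy as np
import pywt

def wpese(signal, wavelet="db4", level=3):
    """Wavelet-packet energy Shannon entropy of a 1-D signal.

    Decompose into 2**level terminal wavelet-packet nodes, form the
    normalized energy distribution over the nodes, and return its
    Shannon entropy.  A transient (e.g., from an open-switch fault)
    changes how energy spreads over the nodes and hence the entropy.
    """
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    p = energies / energies.sum()
    p = p[p > 0]                           # avoid log(0)
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 1024)
healthy = np.sin(2 * np.pi * 50 * t)       # synthetic phase current
faulty = healthy.copy()
faulty[500:520] += 2.0                     # injected transient
print(wpese(healthy), wpese(faulty))       # entropy rises with the transient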
Open Access Article: Attribute Value Weighted Average of One-Dependence Estimators
Entropy 2017, 19(9), 501; https://doi.org/10.3390/e19090501
Received: 8 July 2017 / Revised: 16 August 2017 / Accepted: 11 September 2017 / Published: 16 September 2017
Cited by 1 | PDF Full-text (435 KB) | HTML Full-text | XML Full-text
Abstract
Among the numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers that utilize one-dependence estimators (ODEs) have been shown to approximate the ground-truth attribute dependencies well; moreover, probability estimation in ODEs is effective, leading to excellent performance. In previous studies, ODEs were exploited directly in a simple way. For example, averaged one-dependence estimators (AODE) weaken the attribute independence assumption by directly averaging all members of a constrained class of classifiers; however, all one-dependence estimators in AODE have the same weight and are treated equally. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). AVWAODE assigns discriminative weights to different ODEs by computing the correlation between each root attribute value and the class. Our approach uses two different attribute value weighting measures, the Kullback–Leibler (KL) measure and the information gain (IG) measure, yielding two versions, denoted AVWAODE-KL and AVWAODE-IG, respectively. We tested both experimentally on a collection of 36 University of California at Irvine (UCI) datasets and found that they achieved better performance than the other state-of-the-art Bayesian classifiers used for comparison. Full article
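
The shape of the model is easy to state in standard AODE notation (the exact weighting formulas are in the paper; the KL weight shown here is the usual attribute-value-weighting form and is given only as an illustration):

$$
\hat{P}(y \mid x_1,\dots,x_n) \;\propto\; \sum_{i=1}^{n} w_{i,x_i}\,\hat{P}(y, x_i)\prod_{j=1}^{n}\hat{P}(x_j \mid y, x_i),
\qquad
w_{i,x_i} \;=\; \sum_{y} P(y \mid x_i)\,\log\frac{P(y \mid x_i)}{P(y)} \quad \text{(KL version)}.
$$

Plain AODE is recovered by setting every $w_{i,x_i} = 1$; AVWAODE-IG replaces the KL weight with an information-gain-based one.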
Open Access Article: Modeling NDVI Using Joint Entropy Method Considering Hydro-Meteorological Driving Factors in the Middle Reaches of Hei River Basin
Entropy 2017, 19(9), 502; https://doi.org/10.3390/e19090502
Received: 4 August 2017 / Revised: 8 September 2017 / Accepted: 13 September 2017 / Published: 15 September 2017
Cited by 1 | PDF Full-text (4562 KB) | HTML Full-text | XML Full-text
Abstract
Terrestrial vegetation dynamics are closely influenced by both hydrological processes and climate change. This study investigated the relationships between vegetation pattern and hydro-meteorological elements. The joint entropy method was employed to evaluate the dependence between the normalized difference vegetation index (NDVI) and coupled variables in the middle reaches of the Hei River basin. Based on the spatial distribution of mutual information, the whole study area was divided into five sub-regions. In each sub-region, nested statistical models were applied to model the NDVI on the grid and regional scales, respectively. Results showed that the annual average NDVI increased at a rate of 0.005/a over the past 11 years. In the desert regions, the NDVI increased significantly with increasing precipitation and temperature, and a high-accuracy NDVI model was obtained by coupling precipitation and temperature, especially in sub-region I. In the oasis regions, groundwater was also an important factor driving vegetation growth, and a rise in the groundwater level contributed to vegetation growth. However, the relationship was weaker in the artificial oasis regions (sub-regions III and V) due to the influence of human activities such as irrigation. The overall correlation coefficient between the observed and modeled NDVI was 0.97. The outcomes of this study are suitable for ecosystem monitoring, especially in the context of climate change. Further studies are necessary and should consider more factors, such as runoff and irrigation. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
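
The screening step rests on estimating mutual information between NDVI and each candidate driver; a minimal histogram-based sketch on synthetic data (the bin count and the synthetic precipitation–NDVI relation are assumptions):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of X
    py = pxy.sum(axis=0, keepdims=True)        # marginal of Y
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 10.0, 5000)                   # synthetic precipitation
ndvi = 0.02 * precip + rng.normal(0, 0.2, 5000)       # synthetic NDVI response
print(mutual_information(ndvi, precip))               # > 0: usable driver
```

Mapping such estimates over the basin grid is what produces the spatial mutual-information field used to delineate the five sub-regions.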
Open Access Article: Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization
Entropy 2017, 19(9), 500; https://doi.org/10.3390/e19090500
Received: 24 May 2017 / Revised: 10 September 2017 / Accepted: 11 September 2017 / Published: 15 September 2017
PDF Full-text (1152 KB) | HTML Full-text | XML Full-text
Abstract
A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
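
The Ising connection the paper relies on is classical: a Hopfield-style CAM with Hebbian couplings is an Ising energy minimization, which is what lets the problem be handed to an annealer. A purely classical sketch (greedy zero-temperature recall standing in for the quantum hardware; sizes and the learning rule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_patterns = 64, 3

patterns = rng.choice([-1, 1], size=(n_patterns, n))   # stored associations
J = (patterns.T @ patterns) / n                        # Hebbian couplings
np.fill_diagonal(J, 0)

def recall(probe, sweeps=20):
    """Greedy descent on the Ising energy E(s) = -1/2 s^T J s.

    On an annealer, the same couplings J (plus biases clamping the known
    bits) would be minimized by the hardware instead of this loop.
    """
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            s[i] = 1 if J[i] @ s >= 0 else -1          # align with local field
    return s

noisy = patterns[0] * rng.choice([1, 1, 1, 1, -1], n)  # ~20% corrupted probe
print(np.mean(recall(noisy) == patterns[0]))           # fraction recovered, ~1.0
```

Quantifying how often this recovery succeeds across an ensemble of stored patterns and probes is the recall-performance question the paper asks of the quantum processor.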
Open Access Article: Thermodynamics of Small Magnetic Particles
Entropy 2017, 19(9), 499; https://doi.org/10.3390/e19090499
Received: 2 August 2017 / Revised: 1 September 2017 / Accepted: 13 September 2017 / Published: 15 September 2017
Cited by 2 | PDF Full-text (568 KB) | HTML Full-text | XML Full-text
Abstract
In the present paper, we discuss the interpretation of some thermodynamic results in the case of very small systems. Most of the usual statistical physics is done for systems with a huge number of elements, in what is called the thermodynamic limit, but not all of the approximations valid under those conditions can be extended to all properties of objects with fewer than a thousand elements. The starting point is the Ising model in two dimensions (2D), for which an analytic solution exists, allowing us to validate the numerical techniques used in the present article. From there on, we introduce several variations bearing in mind small systems, such as the nanoscopic or even subnanoscopic particles that are nowadays produced for several applications. Magnetization is the main property investigated, with two possible devices in mind. The size of the systems (number of magnetic sites) is decreased so as to appreciate the departure from the results valid in the thermodynamic limit; periodic boundary conditions are eliminated to approach the reality of small particles; 1D, 2D and 3D systems are examined to appreciate the differences established by dimensionality in this small world; upon diluting the lattices, the effect of coordination number (bonding) is also explored; and, since the 2D Ising model is equivalent to the clock model with q = 2 degrees of freedom, we combine the previous results with the supplementary degrees of freedom coming from the variation of q up to q = 20. Most of the previous results are numerical; however, for the case of a very small system, we obtain the exact partition function to compare with the conclusions coming from our numerical results. Conclusions can be summarized in the following way: the laws of thermodynamics remain the same, but the interpretation of the results, averages and numerical treatments needs special care for systems with fewer than about a thousand constituents, and this might need to be adapted for different properties or devices. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
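
A minimal Metropolis sketch for a small 2D Ising lattice with open (non-periodic) boundaries, the regime the paper probes (lattice sizes, temperature, and sweep count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_ising(L=10, T=2.0, sweeps=1000):
    """|magnetization| per site of an L x L Ising lattice, J = k_B = 1.

    Open boundaries: edge and corner sites simply have fewer neighbours,
    which is where small systems depart from the thermodynamic limit.
    Returns a single end-of-run sample; a real study would average over
    many configurations.
    """
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, 2)
            nb = 0                                  # sum over existing neighbours
            if i > 0:     nb += s[i - 1, j]
            if i < L - 1: nb += s[i + 1, j]
            if j > 0:     nb += s[i, j - 1]
            if j < L - 1: nb += s[i, j + 1]
            dE = 2 * s[i, j] * nb                   # energy cost of flipping s[i,j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
    return abs(s.mean())

for L in (4, 8, 16):
    print(L, metropolis_ising(L))   # finite-size rounding of the transition
```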
Open Access Article: Sum Capacity for Single-Cell Multi-User Systems with M-Ary Inputs
Entropy 2017, 19(9), 497; https://doi.org/10.3390/e19090497
Received: 30 June 2017 / Revised: 30 August 2017 / Accepted: 12 September 2017 / Published: 15 September 2017
PDF Full-text (1699 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the sum capacity of a single-cell multi-user system under the constraint that the transmitted signal is drawn from an M-ary two-dimensional constellation with equal probability, for both the uplink, i.e., the multiple access channel (MAC), and the downlink, i.e., the broadcast channel (BC). Based on successive interference cancellation (SIC) and the entropy power Gaussian approximation, it is shown that both the multi-user MAC and the BC can be approximated by a bank of parallel channels, with the channel gains modified by an extra attenuation factor equal to the negative exponential of the capacity of the interfering users. With this result, the capacity of the MAC and the BC with an arbitrary number of users and arbitrary constellations can be easily calculated, in sharp contrast with traditional Monte Carlo simulation, whose computational cost increases exponentially with the number of users. Further, the sum capacity under different power allocation strategies, including equal power allocation, equal capacity power allocation and maximum capacity power allocation, is also investigated. For equal capacity power allocation, a recursive relation for the power allocation solution is derived. For maximum capacity power allocation, the necessary condition for optimal power allocation is obtained, and an optimal algorithm for the power allocation optimization problem is proposed based on this necessary condition. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
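
For context, the Monte Carlo baseline against which the authors' Gaussian-approximation shortcut is contrasted can be written in a few lines for a single user (QPSK and the SNR grid are illustrative; the multi-user SIC computation in the paper builds on quantities like this):

```python
import numpy as np

rng = np.random.default_rng(4)

def mary_awgn_capacity(constellation, snr_db, n=100_000):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for equiprobable
    M-ary signalling over a complex AWGN channel, Y = X + W."""
    c = np.asarray(constellation, dtype=complex)
    c = c / np.sqrt(np.mean(np.abs(c) ** 2))         # unit average symbol energy
    n0 = 10 ** (-snr_db / 10)                        # noise power
    x = c[rng.integers(0, len(c), n)]
    y = x + np.sqrt(n0 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    d2 = np.abs(y[:, None] - c[None, :]) ** 2        # distances to every symbol
    dx2 = np.abs(y - x) ** 2                         # distance to the sent symbol
    # I = log2(M) - E[ log2 sum_x' exp(-(|y-x'|^2 - |y-x|^2)/N0) ]
    return np.log2(len(c)) - np.mean(
        np.log2(np.sum(np.exp(-(d2 - dx2[:, None]) / n0), axis=1)))

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
for snr in (0, 5, 10, 15):
    print(snr, round(mary_awgn_capacity(qpsk, snr), 3))   # saturates at 2 bits
```

For K users, repeating such a simulation over the joint constellation grows as M**K, which is the exponential blow-up the paper's parallel-channel approximation avoids.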
Open Access Article: Entropic Data Envelopment Analysis: A Diversification Approach for Portfolio Optimization
Entropy 2017, 19(9), 352; https://doi.org/10.3390/e19090352
Received: 16 May 2017 / Revised: 7 July 2017 / Accepted: 7 July 2017 / Published: 15 September 2017
PDF Full-text (453 KB) | HTML Full-text | XML Full-text
Abstract
Recently, different methods have been proposed for portfolio optimization and decision making on investment issues. This article presents a new method for portfolio formation based on Data Envelopment Analysis (DEA) and the entropy function. The method applies DEA in association with a model obtained by inserting the entropy function directly into the optimization procedure. First, the DEA model was applied to pre-select the assets. Then, the assets rated as efficient were submitted to the proposed model, which results from inserting the entropy function into Sharpe's simplified portfolio optimization model. As a result, improved asset participation in the portfolio was obtained. In the DEA model, several variables were evaluated, and a low value of beta was achieved, guaranteeing greater robustness of the portfolio. The entropy function provided not only greater diversity but also a more feasible asset allocation. Additionally, the proposed method obtained better portfolio performance, as measured by the Sharpe ratio, than the comparative methods. Full article
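
A sketch of the spirit of the second stage, under stated assumptions: synthetic returns, and a Shannon-entropy diversification term simply added to the Sharpe objective with weight lam (the paper's exact way of inserting the entropy function into the simplified Sharpe model may differ):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_assets = 5
mu = rng.normal(0.08, 0.03, n_assets)                # synthetic expected returns
A = rng.normal(size=(n_assets, n_assets))
cov = A @ A.T / n_assets + np.eye(n_assets) * 0.01   # synthetic covariance

def objective(w, lam=0.05, rf=0.02):
    sharpe = (w @ mu - rf) / np.sqrt(w @ cov @ w)
    entropy = -np.sum(w * np.log(w + 1e-12))         # diversification term
    return -(sharpe + lam * entropy)                 # minimize the negative

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(objective, np.ones(n_assets) / n_assets,
               bounds=[(0, 1)] * n_assets, constraints=cons)
print(res.x.round(3))                                # long-only, fully invested weights
```

Raising lam pushes the weights toward the uniform (maximum-entropy) portfolio, which is the diversification effect the abstract describes.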
Open Access Article: Log Likelihood Spectral Distance, Entropy Rate Power, and Mutual Information with Applications to Speech Coding
Entropy 2017, 19(9), 496; https://doi.org/10.3390/e19090496
Received: 22 August 2017 / Revised: 9 September 2017 / Accepted: 10 September 2017 / Published: 14 September 2017
PDF Full-text (1194 KB) | HTML Full-text | XML Full-text
Abstract
We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies, and further that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these new results. These new expressions allow the log likelihood ratio to be of interest in applications beyond those of just spectral matching for speech. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
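
The pivot is compact enough to state. With the standard entropy power $Q_i = \frac{1}{2\pi e}\,e^{2h_i}$ of a process with differential entropy (rate) $h_i$, the logarithm of a ratio of entropy rate powers is automatically a difference of differential entropies (a sketch of the connection; the paper's normalization may differ):

$$
\log\frac{Q_1}{Q_2} \;=\; \log\frac{\tfrac{1}{2\pi e}\,e^{2h_1}}{\tfrac{1}{2\pi e}\,e^{2h_2}} \;=\; 2\,(h_1 - h_2).
$$

The abstract's further step, rewriting this difference as a difference of two mutual informations, is developed in the paper itself.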
Open Access Article: On the Capacity and the Optimal Sum-Rate of a Class of Dual-Band Interference Channels
Entropy 2017, 19(9), 495; https://doi.org/10.3390/e19090495
Received: 29 June 2017 / Revised: 8 September 2017 / Accepted: 11 September 2017 / Published: 14 September 2017
PDF Full-text (632 KB) | HTML Full-text | XML Full-text
Abstract
We study a class of two-transmitter two-receiver dual-band Gaussian interference channels (GIC) that operate over both the conventional microwave band and the unconventional millimeter-wave (mm-wave) band. This study is motivated by future 5G networks, where additional spectrum in the mm-wave band complements transmission in the incumbent microwave band. The mm-wave band has a key modeling feature: due to severe path loss and the relatively small wavelength, a transmitter must employ highly directional antenna arrays to reach its desired receiver. The mm-wave channels are therefore highly directional, and each transmitter can aim its mm-wave transmission either at its designated receiver or at the other receiver. We consider two classes of such channels, in which the underlying GIC in the microwave band has weak or strong interference, and obtain sufficient channel conditions under which the capacity is characterized. Moreover, we assess the impact of the additional mm-wave spectrum on performance by characterizing the transmit power allocation for the direct and cross channels that maximizes the sum-rate of this dual-band channel. The solution reveals the conditions under which different power allocations become optimal: allocating the power budget only to the direct channels, only to the cross channels, or sharing it among them. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
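
A toy version of the final power-allocation question, under strong simplifying assumptions (two parallel point-to-point Gaussian links standing in for the direct and cross mm-wave channels, ignoring the interference coupling treated in the paper): grid-search the split of a power budget that maximizes the sum rate.

```python
import numpy as np

def best_split(g_direct, g_cross, p_total, grid=1001):
    """Sum rate of two parallel AWGN links sharing one power budget.

    rate(p) = log2(1 + g * p); we sweep the fraction alpha of the budget
    sent on the direct link and keep the best split.
    """
    alpha = np.linspace(0, 1, grid)
    rate = (np.log2(1 + g_direct * alpha * p_total)
            + np.log2(1 + g_cross * (1 - alpha) * p_total))
    k = rate.argmax()
    return alpha[k], rate[k]

for g_c in (0.05, 0.5, 5.0):
    print(g_c, best_split(g_direct=1.0, g_cross=g_c, p_total=10))
# weak cross channel -> all power to the direct link; comparable gains -> share
```

The same qualitative regimes (direct-only, cross-only, or shared allocation) are the ones the paper characterizes rigorously for the coupled dual-band channel.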
Open Access Feature Paper Article: Quantifying Information Modification in Developing Neural Networks via Partial Information Decomposition
Entropy 2017, 19(9), 494; https://doi.org/10.3390/e19090494
Received: 13 July 2017 / Revised: 12 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
Cited by 2 | PDF Full-text (4808 KB) | HTML Full-text | XML Full-text
Abstract
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, until very recently, formal information-theoretic definitions were only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between the inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. Here, we review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems. Full article
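
The decomposition in question, for a target output $Y$ and two source inputs $X_1$ and $X_2$, reads (this is the generic PID form; the specific synergy estimator the authors apply is defined in the paper):

$$
I(Y; X_1, X_2) \;=\; \mathrm{Unq}(Y{:}X_1) \,+\, \mathrm{Unq}(Y{:}X_2) \,+\, \mathrm{Shd}(Y{:}X_1, X_2) \,+\, \mathrm{Syn}(Y{:}X_1, X_2),
$$

with information modification identified with the synergistic term $\mathrm{Syn}$, and the observed developmental collapse corresponding to the shared (redundant) term taking over.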
Open Access Feature Paper Article: On Generalized Stam Inequalities and Fisher–Rényi Complexity Measures
Entropy 2017, 19(9), 493; https://doi.org/10.3390/e19090493
Received: 21 August 2017 / Revised: 8 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
PDF Full-text (489 KB) | HTML Full-text | XML Full-text
Abstract
Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …), as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named the (p, β, λ)-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal lower bound. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality was already proposed by Bercher and Lutwak; it is a particular case of the general one, in which the three parameters are linked, allowing the sharp lower bound and the associated minimal-complexity probability density to be determined. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We also determine the distribution that saturates the inequality: the (p, β, λ)-Gaussian distribution, which involves an inverse incomplete beta function. Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
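
For orientation, the classical Stam inequality being generalized: for a density $\rho$ on $\mathbb{R}^n$ with entropy power $N(\rho) = \frac{1}{2\pi e}\, e^{2h(\rho)/n}$ and Fisher information $J(\rho)$,

$$
N(\rho)\, J(\rho) \;\ge\; n,
$$

with equality exactly for Gaussian densities. The (p, β, λ)-Fisher–Rényi complexity replaces $N$ and $J$ with Rényi-entropy and generalized-Fisher analogues, and the paper's contribution is the sharp bound and the minimizing densities in that generalized setting.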
Open Access Article: Eigentimes and Very Slow Processes
Entropy 2017, 19(9), 492; https://doi.org/10.3390/e19090492
Received: 1 July 2017 / Revised: 28 August 2017 / Accepted: 8 September 2017 / Published: 14 September 2017
PDF Full-text (1712 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the importance of the time and length scales at play in our descriptions of Nature. What can we observe at the atomic scale, at the laboratory (human) scale, and at the galactic scale? Which variables make sense? For every scale we wish to understand, we need a set of variables that are linked through closed equations, i.e., everything can meaningfully be described in terms of those variables without the need to investigate other scales. Examples from physics, chemistry, and evolution are presented. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)