Table of Contents

Entropy, Volume 19, Issue 8 (August 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: How much uncertainty do we have about a given issue? And how relevant is it to another? Information [...]
Open Access Article Entropy Generation Analysis of Wildfire Propagation
Entropy 2017, 19(8), 433; https://doi.org/10.3390/e19080433
Received: 30 June 2017 / Revised: 4 August 2017 / Accepted: 11 August 2017 / Published: 22 August 2017
Cited by 1 | PDF Full-text (2215 KB) | HTML Full-text | XML Full-text
Abstract
Entropy generation is commonly applied to describe the evolution of irreversible processes, such as heat transfer and turbulence, both of which are dominant phenomena in fire propagation. In this paper, entropy generation analysis is applied to a grassland fire event, with the aim of finding possible links between entropy generation and propagation directions. The ultimate goal of such analysis is to help overcome the limitations of the models usually applied to the prediction of wildfire propagation. These models are based on superimposing the effects of wind and slope, an approach which has proven to fail in various cases. The analysis presented here shows that entropy generation allows a detailed analysis of the landscape propagation of a fire and can thus be applied to its quantitative description. Full article
(This article belongs to the Section Thermodynamics)
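As a pointer to the quantity being mapped, a standard form of the local volumetric entropy generation rate for a flow with heat conduction and viscous dissipation is given below; the wildfire model's exact source terms may differ:

$$\dot{S}^{\prime\prime\prime}_{\mathrm{gen}} = \frac{k\,(\nabla T)^2}{T^2} + \frac{\mu\,\Phi}{T}$$

where $k$ is the thermal conductivity, $T$ the absolute temperature, $\mu$ the dynamic viscosity, and $\Phi$ the viscous dissipation function; the first term captures heat-transfer irreversibility and the second captures friction/turbulence irreversibility.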
Open Access Article Group-Constrained Maximum Correntropy Criterion Algorithms for Estimating Sparse Mix-Noised Channels
Entropy 2017, 19(8), 432; https://doi.org/10.3390/e19080432
Received: 6 July 2017 / Revised: 16 August 2017 / Accepted: 18 August 2017 / Published: 22 August 2017
Cited by 2 | PDF Full-text (2379 KB) | HTML Full-text | XML Full-text
Abstract
A group-constrained maximum correntropy criterion (GC-MCC) algorithm is proposed on the basis of the compressive sensing (CS) concept and zero attracting (ZA) techniques, and its estimation behavior is verified over sparse multi-path channels. The proposed algorithm is implemented by exerting different norm penalties on the two grouped channel coefficients to improve the channel estimation performance in a mixed-noise environment. As a result, a zero-attraction term is obtained from the expected l0 and l1 penalty techniques. Furthermore, a reweighting factor is adopted and incorporated into the zero-attraction term of the GC-MCC algorithm, denoted as the reweighted GC-MCC (RGC-MCC) algorithm, to enhance the estimation performance. Both the GC-MCC and RGC-MCC algorithms are developed to fully exploit the inherent sparseness of sparse multi-path channels through the zero-attraction terms in their iterations. The channel estimation behaviors are discussed and analyzed over sparse channels in mixed Gaussian noise environments. The computer simulation results show that the estimated steady-state error is smaller and the convergence is faster than those of the previously reported MCC and sparse MCC algorithms. Full article
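For readers unfamiliar with zero-attracting correntropy filters, a minimal single-group sketch of one update is given below; the function name and step parameters are illustrative, and the paper's GC-MCC additionally splits the coefficients into two groups with different (l0- and l1-derived) penalties.

```python
import numpy as np

def za_mcc_update(w, x, d, mu=0.01, sigma=1.0, rho=1e-4):
    """One iteration of a zero-attracting MCC adaptive filter.

    w: current channel estimate, x: input (regressor) vector,
    d: desired output sample. A single-group simplification of
    the grouped scheme described in the paper.
    """
    e = d - w @ x                             # a priori estimation error
    kernel = np.exp(-e**2 / (2 * sigma**2))   # Gaussian correntropy kernel
    w = w + mu * kernel * e * x               # correntropy-maximizing step
    return w - rho * np.sign(w)               # l1-style zero-attraction term
```

The Gaussian kernel factor shrinks the step size for large errors, which is what makes correntropy-based updates robust to the impulsive component of mixed noise, while the zero-attraction term drives small taps toward zero to match the channel's sparsity.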
Open Access Article Influence of Failure Probability Due to Parameter and Anchor Variance of a Freeway Dip Slope Slide—A Case Study in Taiwan
Entropy 2017, 19(8), 431; https://doi.org/10.3390/e19080431
Received: 19 June 2017 / Revised: 10 August 2017 / Accepted: 18 August 2017 / Published: 22 August 2017
PDF Full-text (5968 KB) | HTML Full-text | XML Full-text
Abstract
Traditional slope stability analysis uses the Factor of Safety (FS) from Limit Equilibrium Theory as the determinant: if the FS is greater than 1, the slope is considered "safe", and the uncertainty of variables or parameters in the analysis model is not considered. The objective of this research was to analyze the stability of a natural slope, taking into account the characteristics of the rock layers and the variability of the pre-stressing force. Sensitivity and uncertainty analysis showed that the sensitivity to the pre-stressing force of the rock anchor was significantly smaller than to the cohesion (c) of the rock layer and the varying influence of the friction angle (ϕ) in the rock layers. In addition, immersion in water would weaken the rock layers of the natural slope: when the cohesion c was reduced to 6 kPa and the friction angle ϕ decreased below 14°, the slope began to show instability and failure as the FS fell below 1. The failure rate of the slope could be as high as 50%. By stabilizing the slope with a rock anchor, the failure rate could be reduced to below 3%, greatly improving the stability and reliability of the slope. Full article
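For reference, one common limit-equilibrium form of the Factor of Safety for a planar (dip-slope) slide stabilized by an anchor is sketched below; the paper's model includes additional terms such as water pressure:

$$FS = \frac{c\,L + (W\cos\alpha + T_n)\tan\phi + T_t}{W\sin\alpha}$$

where $c$ is the cohesion, $L$ the length of the sliding plane, $W$ the weight of the sliding block, $\alpha$ the dip angle, and $T_n$, $T_t$ the components of the anchor force normal and tangential (up-slope) to the sliding plane. FS > 1 means the resisting forces exceed the driving forces.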
Open Access Article Logical Entropy and Logical Mutual Information of Experiments in the Intuitionistic Fuzzy Case
Entropy 2017, 19(8), 429; https://doi.org/10.3390/e19080429
Received: 4 July 2017 / Revised: 6 August 2017 / Accepted: 17 August 2017 / Published: 21 August 2017
Cited by 3 | PDF Full-text (304 KB) | HTML Full-text | XML Full-text
Abstract
In this contribution, we introduce the concepts of logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case, and study the basic properties of the suggested measures. Subsequently, by means of the suggested notion of logical entropy of an IF-partition, we define the logical entropy of an IF-dynamical system. It is shown that the logical entropy of IF-dynamical systems is invariant under isomorphism. Finally, an analogy of the Kolmogorov–Sinai theorem on generators for IF-dynamical systems is proved. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
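For orientation, in the crisp (non-fuzzy) case the logical entropy of a partition with block probabilities $p_1, \dots, p_n$, and the logical mutual information of two partitions, take the following forms, which the paper generalizes to IF-partitions:

$$h(P) = \sum_{i} p_i (1 - p_i) = 1 - \sum_{i} p_i^2, \qquad I(P; Q) = h(P) + h(Q) - h(P \vee Q)$$

where $P \vee Q$ denotes the common refinement (join) of the two partitions.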
Open Access Article The Potential Application of Multiscale Entropy Analysis of Electroencephalography in Children with Neurological and Neuropsychiatric Disorders
Entropy 2017, 19(8), 428; https://doi.org/10.3390/e19080428
Received: 29 June 2017 / Revised: 11 August 2017 / Accepted: 16 August 2017 / Published: 21 August 2017
Cited by 1 | PDF Full-text (227 KB) | HTML Full-text | XML Full-text
Abstract
Electroencephalography (EEG) is frequently used in the functional neurological assessment of children with neurological and neuropsychiatric disorders. Multiscale entropy (MSE) can reveal complexity on both short and long time scales and is well suited to the analysis of EEG. Entropy-based estimation of EEG complexity is a powerful tool for investigating the underlying disturbances of the neural networks of the brain. Most neurological and neuropsychiatric disorders in childhood affect the early stage of brain development. The analysis of EEG complexity may show the influences of different neurological and neuropsychiatric disorders on different regions of the brain during development. This article aims to give a brief summary of current concepts of MSE analysis in pediatric neurological and neuropsychiatric disorders. Studies utilizing MSE or its modifications for investigating neurological and neuropsychiatric disorders in children were reviewed. Abnormal EEG complexity was shown in a variety of childhood neurological and neuropsychiatric diseases, including autism, attention deficit/hyperactivity disorder, Tourette syndrome, and epilepsy in infancy and childhood. MSE has been shown to be a powerful method for analyzing the non-linear anomalies of EEG in childhood neurological diseases. Further studies are needed to establish its clinical implications for diagnosis, treatment, and outcome prediction. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
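As a concrete reference for the MSE procedure reviewed here, a minimal sketch follows: the signal is coarse-grained at each scale and the sample entropy of each coarse-grained series is computed. The function names and parameter defaults (m = 2, r = 0.15·SD, 20 scales) are common conventions, not prescriptions from the article.

```python
import numpy as np

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """SampEn = -ln(A/B): B counts template pairs matching for length m,
    A for length m + 1, under a Chebyshev tolerance of r times the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def pair_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.abs(templates[:, None] - templates[None, :]).max(axis=2)
        return (np.sum(dist <= tol) - len(templates)) / 2  # drop self-matches
    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 21)):
    """Sample entropy of the coarse-grained signal at each time scale."""
    return [sample_entropy(coarse_grain(x, tau)) for tau in scales]
```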
Open Access Article Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations
Entropy 2017, 19(8), 427; https://doi.org/10.3390/e19080427
Received: 27 June 2017 / Revised: 8 August 2017 / Accepted: 18 August 2017 / Published: 21 August 2017
PDF Full-text (3212 KB) | HTML Full-text | XML Full-text
Abstract
Maximum entropy models are increasingly being used to describe the collective activity of neural populations with measured mean neural activities and pairwise correlations, but the full space of probability distributions consistent with these constraints has not been explored. We provide upper and lower bounds on the entropy for the minimum entropy distribution over arbitrarily large collections of binary units with any fixed set of mean values and pairwise correlations. We also construct specific low-entropy distributions for several relevant cases. Surprisingly, the minimum entropy solution has entropy scaling logarithmically with system size for any set of first- and second-order statistics consistent with arbitrarily large systems. We further demonstrate that some sets of these low-order statistics can only be realized by small systems. Our results show how only small amounts of randomness are needed to mimic low-order statistical properties of highly entropic distributions, and we discuss some applications for engineered and biological information transmission systems. Full article
(This article belongs to the Special Issue Thermodynamics of Information Processing)
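For context, the maximum entropy distribution over binary units $x_i \in \{0, 1\}$ with given means and pairwise correlations is the familiar pairwise (Ising-type) model

$$P(\mathbf{x}) = \frac{1}{Z} \exp\Big( \sum_i h_i x_i + \sum_{i<j} J_{ij} x_i x_j \Big)$$

with $h_i$ and $J_{ij}$ fitted to reproduce the measured first- and second-order statistics; the paper characterizes the opposite extreme, the minimum entropy distributions consistent with the same constraints.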
Open Access Article Iterative QR-Based SFSIC Detection and Decoder Algorithm for a Reed–Muller Space-Time Turbo System
Entropy 2017, 19(8), 426; https://doi.org/10.3390/e19080426
Received: 30 June 2017 / Revised: 24 July 2017 / Accepted: 15 August 2017 / Published: 20 August 2017
PDF Full-text (1038 KB) | HTML Full-text | XML Full-text
Abstract
An iterative QR-based soft feedback segment interference cancellation (QRSFSIC) detection and decoding algorithm for a Reed–Muller (RM) space-time turbo system is proposed in this paper. It forms the sufficient statistic for the minimum-mean-square-error (MMSE) estimate according to QR-decomposition-based soft feedback successive interference cancellation, stemming from the a priori log-likelihood ratios (LLRs) of the encoded bits. The signal originating from the symbols of the reliable segment (those whose reliability metric, an a posteriori LLR of the encoded bits, exceeds a certain threshold) is then iteratively cancelled by the QRSFSIC in order to obtain the residual signal for evaluating the symbols in the unreliable segment. This continues until the unreliable segment is empty, yielding the extrinsic information for each RM turbo-coded bit with the greatest likelihood. Bridged by de-multiplexing and multiplexing, the iterative QRSFSIC detector is concatenated with an iterative trellis-based maximum a posteriori probability RM turbo decoder, as if a principal turbo detector and decoder were embedded with a subordinate iterative QRSFSIC detector and RM turbo decoder, the two exchanging soft-decision detection and decoding information iteratively. These three stages let the proposed algorithm approach the upper bound of the diversity. The simulation results also show that the proposed scheme outperforms the other suboptimum detectors considered in this paper. Full article
(This article belongs to the Section Information Theory)
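To make the detection core concrete, a minimal hard-decision sketch of QR-based successive interference cancellation is given below; the paper's QRSFSIC replaces the hard decisions with soft feedback formed from LLRs and iterates jointly with the RM turbo decoder. The function name and BPSK constellation are illustrative.

```python
import numpy as np

def qr_sic_detect(H, y, constellation=np.array([-1.0, 1.0])):
    """Detect x from y = H x + n via QR decomposition and layer-by-layer
    cancellation: rotate by Q^H, then back-substitute through R, slicing
    each symbol and cancelling it from the layers above."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y                               # rotated received vector
    n = H.shape[1]
    x_hat = np.zeros(n)
    for k in range(n - 1, -1, -1):                   # last layer first
        resid = z[k] - R[k, k + 1:] @ x_hat[k + 1:]  # cancel detected symbols
        s = resid / R[k, k]                          # soft symbol estimate
        x_hat[k] = constellation[np.argmin(np.abs(constellation - s))]
    return x_hat
```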
Open Access Article A Sparse Multiwavelet-Based Generalized Laguerre–Volterra Model for Identifying Time-Varying Neural Dynamics from Spiking Activities
Entropy 2017, 19(8), 425; https://doi.org/10.3390/e19080425
Received: 13 July 2017 / Revised: 7 August 2017 / Accepted: 18 August 2017 / Published: 20 August 2017
PDF Full-text (6929 KB) | HTML Full-text | XML Full-text
Abstract
Modeling of a time-varying dynamical system provides insights into the functions of biological neural networks and contributes to the development of next-generation neural prostheses. In this paper, we have formulated a novel sparse multiwavelet-based generalized Laguerre–Volterra (sMGLV) modeling framework to identify the time-varying neural dynamics from multiple spike train data. First, the significant inputs are selected by using a group least absolute shrinkage and selection operator (LASSO) method, which can capture the sparsity within the neural system. Second, the multiwavelet-based basis function expansion scheme with an efficient forward orthogonal regression (FOR) algorithm aided by mutual information is utilized to rapidly capture the time-varying characteristics from the sparse model. Quantitative simulation results demonstrate that the proposed sMGLV model in this paper outperforms the initial full model and the state-of-the-art modeling methods in tracking performance for various time-varying kernels. Analyses of experimental data show that the proposed sMGLV model can capture the timing of transient changes accurately. The proposed framework will be useful to the study of how, when, and where information transmission processes across brain regions evolve in behavior. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
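The input-selection step rests on the group LASSO; its proximal (group soft-thresholding) operator, which zeroes out entire weak coefficient groups, can be sketched as follows. The function name and the flat index-array interface are illustrative, not the paper's implementation.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group-LASSO penalty: shrinks each
    coefficient group toward zero and zeroes out weak groups entirely,
    which is how whole inputs are selected or discarded."""
    w = w.copy()
    for g in groups:                      # g is an index array for one input
        norm = np.linalg.norm(w[g])
        w[g] = 0.0 if norm <= lam else (1 - lam / norm) * w[g]
    return w

# usage: coefficients of input 0 live at indices 0..4, input 1 at 5..9, ...
# w = group_soft_threshold(w, [np.arange(0, 5), np.arange(5, 10)], lam=0.1)
```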
Open Access Article Survey on Probabilistic Models of Low-Rank Matrix Factorizations
Entropy 2017, 19(8), 424; https://doi.org/10.3390/e19080424
Received: 12 June 2017 / Revised: 11 August 2017 / Accepted: 16 August 2017 / Published: 19 August 2017
PDF Full-text (589 KB) | HTML Full-text | XML Full-text
Abstract
Low-rank matrix factorizations such as Principal Component Analysis (PCA), Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF) are a large class of methods for pursuing the low-rank approximation of a given data matrix. The conventional factorization models are based on the assumption that the data matrices are contaminated stochastically by some type of noise, so point estimates of the low-rank components can be obtained by Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimation. In the past decade, a variety of probabilistic models of low-rank matrix factorizations have emerged. The most significant difference between low-rank matrix factorizations and their corresponding probabilistic models is that the latter treat the low-rank components as random variables. This paper surveys the probabilistic models of low-rank matrix factorizations. Firstly, we review some probability distributions commonly used in these models and introduce the conjugate priors of some distributions to simplify Bayesian inference. Then we present the two main inference methods for probabilistic low-rank matrix factorizations, i.e., Gibbs sampling and variational Bayesian inference. Next, we roughly classify the important probabilistic models into several categories and review each in turn; the categories are organized by factorization formulation, mainly PCA, matrix factorizations, robust PCA, NMF and tensor factorizations. Finally, we discuss research issues that remain to be studied in the future. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
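As a baseline for the survey's distinction between point estimation and fully probabilistic treatment, the ML point estimate under i.i.d. Gaussian noise reduces to the truncated SVD (Eckart–Young); a minimal sketch:

```python
import numpy as np

def ml_low_rank(X, k):
    """Best rank-k approximation in the least-squares sense, which is the
    ML point estimate when X = L + Gaussian noise. Probabilistic models
    instead place priors on the factors and infer posteriors (e.g., by
    Gibbs sampling or variational Bayes) rather than a single point."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```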
Open Access Article An Improved System for Utilizing Low-Temperature Waste Heat of Flue Gas from Coal-Fired Power Plants
Entropy 2017, 19(8), 423; https://doi.org/10.3390/e19080423
Received: 10 July 2017 / Revised: 6 August 2017 / Accepted: 15 August 2017 / Published: 19 August 2017
Cited by 1 | PDF Full-text (1463 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an improved system to efficiently utilize the low-temperature waste heat from the flue gas of coal-fired power plants is proposed based on heat cascade theory. The essence of the proposed system is that the waste heat of the exhausted flue gas is not only used to preheat air for assisting coal combustion, as usual, but also to heat up feedwater and substitute for low-pressure steam extraction. Air preheating is performed both by the exhaust flue gas in the boiler island and by the low-pressure steam extraction in the turbine island; thereby, part of the flue gas heat originally exchanged in the air preheater can be saved and used instead to heat the feedwater and the high-temperature condensed water. Consequently, part of the high-pressure steam is saved for further expansion in the steam turbine, which results in additional net power output. Based on the design data of a typical 1000 MW ultra-supercritical coal-fired power plant in China, an in-depth analysis of the energy-saving characteristics of the improved waste heat utilization system (WHUS) and the conventional WHUS is conducted. When the improved WHUS is adopted in a typical 1000 MW unit, the net power output increases by 19.51 MW, the exergy efficiency improves to 45.46%, and the net annual revenue reaches USD 4.741 million, while for the conventional WHUS these performance parameters are 5.83 MW, 44.80% and USD 1.244 million, respectively. The research described in this paper provides a feasible energy-saving option for coal-fired power plants. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
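As background for the exergy efficiencies quoted above, the exergy (maximum extractable work) of heat $Q$ recovered at absolute temperature $T$ in an environment at $T_0$ is given by the Carnot factor:

$$E_Q = Q \left( 1 - \frac{T_0}{T} \right)$$

which is why low-temperature flue-gas heat carries little work potential on its own, and why the proposed cascade instead uses it to displace steam extractions whose heat carries a higher work potential.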
Open Access Article Computing Entropies with Nested Sampling
Entropy 2017, 19(8), 422; https://doi.org/10.3390/e19080422
Received: 12 July 2017 / Revised: 14 August 2017 / Accepted: 16 August 2017 / Published: 18 August 2017
PDF Full-text (782 KB) | HTML Full-text | XML Full-text
Abstract
The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
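To see what becomes hard, here is the naive Monte Carlo estimator $H \approx -\frac{1}{N}\sum_i \log p(x_i)$, which requires evaluating the density; Nested Sampling is introduced precisely for the case where $\log p$ cannot be evaluated. A minimal sketch with an illustrative Gaussian check:

```python
import numpy as np

def mc_entropy(samples, logpdf):
    """Monte Carlo estimate H = -E[log p(x)]. Only usable when the
    density can be evaluated pointwise; Nested Sampling removes
    exactly this requirement."""
    return -np.mean(logpdf(samples))

# for a unit Gaussian the analytic value is 0.5*log(2*pi*e) ~ 1.4189
rng = np.random.default_rng(0)
xs = rng.normal(size=100_000)
logp = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
print(mc_entropy(xs, logp))
```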
Open Access Article Incipient Fault Diagnosis of Rolling Bearings Based on Impulse-Step Impact Dictionary and Re-Weighted Minimizing Nonconvex Penalty Lq Regular Technique
Entropy 2017, 19(8), 421; https://doi.org/10.3390/e19080421
Received: 31 July 2017 / Revised: 15 August 2017 / Accepted: 16 August 2017 / Published: 18 August 2017
Cited by 6 | PDF Full-text (2928 KB) | HTML Full-text | XML Full-text
Abstract
The periodic transient impulses caused by localized faults are sensitive and important characteristic information for rotating machinery fault diagnosis. However, it is very difficult to accurately extract transient impulses at the incipient fault stage because the fault impulse features are rather weak and always corrupted by heavy background noise. In this paper, a new transient impulse extraction methodology is proposed based on an impulse-step dictionary and re-weighted minimizing nonconvex penalty Lq regularization (R-WMNPLq, q = 0.5) for the incipient fault diagnosis of rolling bearings. Prior to the sparse representation, the original vibration signal is preprocessed by the variational mode decomposition (VMD) technique. Owing to the physical mechanism of periodic double impacts, comprising step-like and impulse-like impacts, an impulse-step impact dictionary atom can be designed to match the natural waveform structure of vibration signals. On the other hand, traditional sparse reconstruction approaches such as orthogonal matching pursuit (OMP) and L1-norm regularization treat all vibration signal values equally and thus ignore the fact that the vibration peak values may carry more useful information about the periodic transient impulses and should be preserved with a larger weight. Therefore, penalty and smoothing parameters are introduced into the reconstruction model to guarantee a reasonable distribution consistence of the peak vibration values. Lastly, the proposed technique is applied to accelerated lifetime testing of rolling bearings, where it achieves noticeably higher diagnostic accuracy compared with OMP, L1-norm regularization and the traditional spectral kurtogram (SK) method. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
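To illustrate the re-weighting idea in the reconstruction step, here is a simplified stand-in that uses iteratively re-weighted soft thresholding with an l1 penalty; the paper's R-WMNPLq uses a nonconvex Lq (q = 0.5) penalty instead, and all names and parameters below are illustrative.

```python
import numpy as np

def reweighted_ista(D, y, lam=0.1, eps=1e-3, outer=5, inner=100):
    """Minimize 0.5*||y - D a||^2 + lam * sum(w_i * |a_i|), re-estimating
    the weights w_i = 1/(|a_i| + eps) after each inner run so that large
    (peak-related) coefficients are penalized less and preserved."""
    a = np.zeros(D.shape[1])
    w = np.ones_like(a)
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    for _ in range(outer):
        for _ in range(inner):          # proximal-gradient (ISTA) steps
            g = a + D.T @ (y - D @ a) / L
            a = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
        w = 1.0 / (np.abs(a) + eps)     # re-weight: small coeffs penalized more
    return a
```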
Open Access Review Securing Wireless Communications of the Internet of Things from the Physical Layer, An Overview
Entropy 2017, 19(8), 420; https://doi.org/10.3390/e19080420
Received: 3 July 2017 / Revised: 14 August 2017 / Accepted: 16 August 2017 / Published: 18 August 2017
Cited by 2 | PDF Full-text (1109 KB) | HTML Full-text | XML Full-text
Abstract
The security of the Internet of Things (IoT) is receiving considerable interest as the low power and complexity constraints of many IoT devices limit the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically the key topics of key generation and physical layer encryption. These schemes are lightweight and readily implementable, and thus offer practical solutions for providing effective IoT wireless security. Future research directions for making IoT-based physical layer security more robust and pervasive are also covered. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
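As a flavor of the key generation schemes surveyed, below is a minimal sketch of quantizing reciprocal channel measurements (e.g., RSSI) into key bits with a guard band; the scheme and names are illustrative of the literature, not a specific protocol from the article.

```python
import numpy as np

def rssi_to_key_bits(rssi, guard=0.5):
    """Map channel measurements to bits: above mean + guard*SD -> 1,
    below mean - guard*SD -> 0; samples inside the guard band are
    discarded to reduce bit mismatches between the two endpoints
    (remaining mismatches are fixed by information reconciliation)."""
    rssi = np.asarray(rssi, dtype=float)
    mu, sd = rssi.mean(), rssi.std()
    hi, lo = mu + guard * sd, mu - guard * sd
    return np.array([1 if v > hi else 0 for v in rssi if v > hi or v < lo])
```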
Open Access Article Deformed Jarzynski Equality
Entropy 2017, 19(8), 419; https://doi.org/10.3390/e19080419
Received: 24 July 2017 / Revised: 15 August 2017 / Accepted: 15 August 2017 / Published: 18 August 2017
Cited by 2 | PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
The well-known Jarzynski equality, often written in the form $e^{-\beta \Delta F} = \langle e^{-\beta W} \rangle$, provides a non-equilibrium means to measure the free energy difference $\Delta F$ of a system at inverse temperature $\beta$ based on an ensemble average of the non-equilibrium work $W$. The accuracy of Jarzynski's measurement scheme is determined by the variance of the exponential work, denoted $\mathrm{var}(e^{-\beta W})$. However, it was recently found that $\mathrm{var}(e^{-\beta W})$ can systematically diverge in both classical and quantum cases. Such divergence poses a challenge for applications of the Jarzynski equality because it may dramatically reduce the efficiency in determining $\Delta F$. In this work, we present a deformed Jarzynski equality for both classical and quantum non-equilibrium statistics, in an effort to reuse experimental data that already suffer from a diverging $\mathrm{var}(e^{-\beta W})$. The main feature of our deformed Jarzynski equality is that it connects free energies at different temperatures and may still work efficiently subject to a diverging $\mathrm{var}(e^{-\beta W})$. The conditions for applying our deformed Jarzynski equality may be met in experimental and computational situations. If so, there is no need to redesign experimental or simulation methods. Furthermore, using the deformed Jarzynski equality, we exemplify the distinct behaviors of classical and quantum work fluctuations for the case of time-dependent driven harmonic oscillator dynamics and provide insights into the essential performance differences between classical and quantum Jarzynski equalities. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
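To see why this variance controls the accuracy, note the standard free energy estimator from $N$ work samples and its propagated statistical error (textbook error propagation, not a result of this paper):

$$\widehat{\Delta F} = -\frac{1}{\beta} \ln\!\Big( \frac{1}{N} \sum_{k=1}^{N} e^{-\beta W_k} \Big), \qquad \delta\big(\widehat{\Delta F}\big) \approx \frac{e^{\beta \Delta F}}{\beta} \sqrt{\frac{\mathrm{var}\big(e^{-\beta W}\big)}{N}}$$

so a diverging $\mathrm{var}(e^{-\beta W})$ forces the number of samples $N$ needed for a given accuracy to blow up.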
Open Access Article Spatio-Temporal Variability of Soil Water Content under Different Crop Covers in Irrigation Districts of Northwest China
Entropy 2017, 19(8), 410; https://doi.org/10.3390/e19080410
Received: 25 April 2017 / Revised: 3 August 2017 / Accepted: 4 August 2017 / Published: 18 August 2017
PDF Full-text (6516 KB) | HTML Full-text | XML Full-text
Abstract
The relationship between soil water content (SWC) and vegetation, topography, and climatic conditions is critical for developing effective agricultural water management practices and improving agricultural water use efficiency in arid areas. The purpose of this study was to determine how crop cover influences the spatial and temporal variation of soil water. SWC was measured under maize and wheat for two years in northwest China. Statistical methods and entropy analysis were applied to investigate the spatio-temporal variability of SWC and the interaction between SWC and its influencing factors. The SWC variability changed within the field plot, with the standard deviation reaching a maximum at intermediate mean SWC in different layers under various conditions (climatic conditions, soil conditions, and crop types). The spatio-temporal distribution of the SWC reflects the variability of precipitation and potential evapotranspiration (ET0) under different crop covers. The mutual entropy values between SWC and precipitation were similar in the two years under wheat cover but differed under maize cover, and the mutual entropy values at different depths differed under different crop covers. The entropy values changed with SWC following an exponential trend. The informational correlation coefficient (R0) between SWC and precipitation was higher than that between SWC and the other factors at different soil depths. Precipitation was the dominant factor controlling SWC variability, and the crop coefficient was the second most dominant factor. This study highlights that precipitation is a paramount factor for investigating the spatio-temporal variability of soil water content in Northwest China. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
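For readers who want to reproduce the entropy analysis, a minimal histogram-based estimate of the mutual information (mutual entropy) between two series, such as SWC and precipitation, is sketched below; the bin count and function name are illustrative.

```python
import numpy as np

def mutual_entropy(x, y, bins=10):
    """I(X;Y) in nats from a joint histogram:
    I = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                          # skip empty cells (0 log 0 = 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))
```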