Table of Contents

Entropy, Volume 20, Issue 3 (March 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: The Integrated Information Theory (IIT) proposes that in order to quantify information integration [...]
Open Access Article: Some Inequalities Combining Rough and Random Information
Entropy 2018, 20(3), 211; https://doi.org/10.3390/e20030211
Received: 1 February 2018 / Revised: 18 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
PDF Full-text (744 KB) | HTML Full-text | XML Full-text
Abstract
Rough random theory, generally applied to statistics, decision-making, and so on, is an extension of rough set theory and probability theory, in which a rough random variable is described as a random variable taking “rough variable” values. In order to extend and enrich the research area of rough random theory, in this paper the well-known probabilistic inequalities (the Markov inequality, Chebyshev inequality, Hölder's inequality, Minkowski inequality and Jensen's inequality) are proven for rough random variables, which gives firm theoretical support to the further development of rough random theory. Moreover, considering that critical values always act as a vital tool in engineering, science and other application fields, some significant properties of the critical values of rough random variables, involving continuity and monotonicity, are investigated in depth to provide a novel analytical approach for dealing with rough random optimization problems. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
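For reference, the classical probabilistic forms of the inequalities named in the abstract are recalled below; the paper's contribution is proving rough random analogues of these statements in its own notation, which is not reproduced here.

```latex
% Classical forms (standard probability notation, not the paper's rough random versions).
\begin{align*}
  &\text{Markov:}    && \Pr\{|X| \ge t\} \le \frac{\mathrm{E}|X|}{t}, \quad t > 0,\\
  &\text{Chebyshev:} && \Pr\{|X - \mathrm{E}X| \ge t\} \le \frac{\operatorname{Var}(X)}{t^{2}}, \quad t > 0,\\
  &\text{H\"older:}  && \mathrm{E}|XY| \le \big(\mathrm{E}|X|^{p}\big)^{1/p}\big(\mathrm{E}|Y|^{q}\big)^{1/q}, \quad \tfrac{1}{p} + \tfrac{1}{q} = 1,\ p, q > 1,\\
  &\text{Minkowski:} && \big(\mathrm{E}|X+Y|^{p}\big)^{1/p} \le \big(\mathrm{E}|X|^{p}\big)^{1/p} + \big(\mathrm{E}|Y|^{p}\big)^{1/p}, \quad p \ge 1,\\
  &\text{Jensen:}    && f(\mathrm{E}X) \le \mathrm{E}f(X) \quad \text{for convex } f.
\end{align*}
```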
Open Access Article: Amplitude- and Fluctuation-Based Dispersion Entropy
Entropy 2018, 20(3), 210; https://doi.org/10.3390/e20030210
Received: 2 November 2017 / Revised: 5 February 2018 / Accepted: 13 March 2018 / Published: 20 March 2018
PDF Full-text (956 KB) | HTML Full-text | XML Full-text
Abstract
Dispersion entropy (DispEn) is a recently introduced entropy metric to quantify the uncertainty of time series. It is fast and, so far, it has demonstrated very good performance in the characterisation of time series. It includes a mapping step, but the effect of different mappings has not been studied yet. Here, we investigate the effect of linear and nonlinear mapping approaches in DispEn. We also inspect the sensitivity of different parameters of DispEn to noise. Moreover, we develop fluctuation-based DispEn (FDispEn) as a measure to deal with only the fluctuations of time series. Furthermore, the original and fluctuation-based forbidden dispersion patterns are introduced to discriminate deterministic from stochastic time series. Finally, we compare the performance of DispEn, FDispEn, permutation entropy, sample entropy, and Lempel–Ziv complexity on two physiological datasets. The results show that DispEn is the most consistent technique to distinguish various dynamics of the biomedical signals. Due to their advantages over existing entropy methods, DispEn and FDispEn are expected to be broadly used for the characterization of a wide variety of real-world time series. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/2326. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
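As context for the mapping step discussed in the abstract, the sketch below implements dispersion entropy with the common normal-CDF (NCDF) mapping; it is a minimal illustration, not the authors' released MATLAB code (available at the DOI above), and the parameters m, c and d denote the usual embedding dimension, number of classes and delay.

```python
import numpy as np
from scipy.stats import norm
from collections import Counter

def dispersion_entropy(x, m=2, c=6, d=1):
    """Minimal DispEn sketch: NCDF mapping to c classes, then normalised
    Shannon entropy of the embedded dispersion patterns."""
    x = np.asarray(x, dtype=float)
    # 1. Map samples to (0, 1) via the normal CDF, then to integer classes 1..c.
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 2. Build dispersion patterns of dimension m with delay d.
    n = len(z) - (m - 1) * d
    patterns = [tuple(z[i:i + (m - 1) * d + 1:d]) for i in range(n)]
    # 3. Shannon entropy of the pattern frequencies, normalised by ln(c**m).
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(float(c) ** m)
```

FDispEn, as described in the abstract, works on the differences between adjacent elements of each pattern rather than on the patterns themselves.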
Open Access Article: Prior and Posterior Linear Pooling for Combining Expert Opinions: Uses and Impact on Bayesian Networks—The Case of the Wayfinding Model
Entropy 2018, 20(3), 209; https://doi.org/10.3390/e20030209
Received: 15 December 2017 / Revised: 17 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
PDF Full-text (1924 KB) | HTML Full-text | XML Full-text
Abstract
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data are not available. This, however, raises questions regarding how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability assessments from multiple experts. In particular, Prior Linear Pooling (PrLP), which pools opinions and then places them into the BN, is a common method. This paper considers this approach and an alternative pooling method, Posterior Linear Pooling (PoLP). The PoLP method constructs a BN for each expert, and then pools the resulting probabilities at the nodes of interest. The advantages and disadvantages of these two methods are identified and compared, and the methods are applied to an existing BN, the Wayfinding Bayesian Network Model, to investigate the behavior of different groups of people and how these different methods may be able to capture such differences. The paper focuses on six nodes (Human Factors, Environmental Factors, Wayfinding, Communication, Visual Elements of Communication, and Navigation Pathway) and three subgroups (Gender (Female, Male), Travel Experience (Experienced, Inexperienced), and Travel Purpose (Business, Personal)), and finds that different behaviors can indeed be captured by the different methods. Full article
(This article belongs to the Special Issue Foundations of Statistics)
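To make the two pooling strategies concrete, here is a minimal sketch of linear opinion pooling with equal weights; the CPT entries, posteriors and weights below are purely illustrative, and the actual Wayfinding BN structure is the authors' own.

```python
import numpy as np

def linear_pool(expert_probs, weights=None):
    """Linear opinion pool: weighted average of expert probability vectors."""
    P = np.asarray(expert_probs, dtype=float)            # shape (n_experts, n_states)
    w = np.full(len(P), 1.0 / len(P)) if weights is None else np.asarray(weights, dtype=float)
    return w @ P

# Prior Linear Pooling (PrLP): pool the experts' CPT entries first,
# then run BN inference once with the pooled tables.
pooled_cpt_row = linear_pool([[0.7, 0.3], [0.5, 0.5], [0.6, 0.4]])    # hypothetical CPT row

# Posterior Linear Pooling (PoLP): run inference in each expert's BN,
# then pool the resulting posteriors at the node of interest.
expert_posteriors = [[0.82, 0.18], [0.64, 0.36], [0.71, 0.29]]        # hypothetical outputs
pooled_posterior = linear_pool(expert_posteriors)
```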
Open Access Article: Deconstructing Cross-Entropy for Probabilistic Binary Classifiers
Entropy 2018, 20(3), 208; https://doi.org/10.3390/e20030208
Received: 22 February 2018 / Revised: 16 March 2018 / Accepted: 18 March 2018 / Published: 20 March 2018
PDF Full-text (1789 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze its motivation, meaning and interpretation from an information-theoretical point of view. In this sense, this article presents several contributions: First, we explicitly analyze the contribution to cross-entropy of (i) prior knowledge; and (ii) the value of the features in the form of a likelihood ratio. Second, we introduce a decomposition of cross-entropy into two components: discrimination and calibration. This decomposition enables the measurement of different performance aspects of a classifier in a more precise way; and justifies previously reported strategies to obtain reliable probabilities by means of the calibration of the output of a discriminating classifier. Third, we give different information-theoretical interpretations of cross-entropy, which can be useful in different application scenarios, and which are related to the concept of reference probabilities. Fourth, we present an analysis tool, the Empirical Cross-Entropy (ECE) plot, a compact representation of cross-entropy and its aforementioned decomposition. We show the power of ECE plots, as compared to other classical performance representations, in two diverse experimental examples: a speaker verification system, and a forensic case where some glass findings are present. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
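As a small illustration of the prior/likelihood-ratio split analysed in the paper, the sketch below computes the binary cross-entropy (in bits) of posteriors obtained by Bayes' rule from a prior and per-sample log-likelihood ratios; the ECE plot additionally sweeps the prior and separates calibration from discrimination, which is not reproduced here, and all names are illustrative.

```python
import numpy as np

def cross_entropy_from_llr(llrs, labels, prior=0.5):
    """Binary cross-entropy (bits) of posteriors formed from a prior and
    natural-log likelihood ratios via Bayes' rule."""
    llrs = np.asarray(llrs, dtype=float)
    labels = np.asarray(labels, dtype=int)              # 1 = target class, 0 = non-target
    log_odds = llrs + np.log(prior / (1.0 - prior))     # posterior log-odds
    p = 1.0 / (1.0 + np.exp(-log_odds))                 # posterior P(target | features)
    eps = 1e-12
    return -np.mean(labels * np.log2(p + eps) + (1 - labels) * np.log2(1 - p + eps))
```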
Open Access Article: Application of Entropy Ensemble Filter in Neural Network Forecasts of Tropical Pacific Sea Surface Temperatures
Entropy 2018, 20(3), 207; https://doi.org/10.3390/e20030207
Received: 1 February 2018 / Revised: 13 March 2018 / Accepted: 15 March 2018 / Published: 20 March 2018
PDF Full-text (5543 KB) | HTML Full-text | XML Full-text
Abstract
Recently, the Entropy Ensemble Filter (EEF) method was proposed to mitigate the computational cost of the Bootstrap AGGregatING (bagging) method. This method uses the most informative training data sets in the model ensemble rather than all ensemble members created by conventional bagging. In this study, we evaluate, for the first time, the application of the EEF method in Neural Network (NN) modeling of the El Niño–Southern Oscillation. Specifically, we forecast the first five principal components (PCs) of sea surface temperature monthly anomaly fields over the tropical Pacific, at different lead times (from 3 to 15 months, with a three-month increment) for the period 1979–2017. We apply the EEF method in a multiple-linear regression (MLR) model and two NN models, one using Bayesian regularization and one using the Levenberg-Marquardt algorithm for training, and evaluate their performance and computational efficiency relative to the same models with conventional bagging. All models perform equally well at lead times of 3 and 6 months, while at longer lead times, the MLR model’s skill deteriorates faster than that of the nonlinear models. The neural network models with both bagging methods produce equally successful forecasts with the same computational efficiency. It remains to be shown whether this finding is sensitive to the dataset size. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Open Access Article: Irreversibility and Action of the Heat Conduction Process
Entropy 2018, 20(3), 206; https://doi.org/10.3390/e20030206
Received: 24 January 2018 / Revised: 21 February 2018 / Accepted: 14 March 2018 / Published: 20 March 2018
Cited by 2 | PDF Full-text (604 KB) | HTML Full-text | XML Full-text
Abstract
Irreversibility (that is, the “one-sidedness” of time) of a physical process can be characterized by using Lyapunov functions in the modern theory of stability. In this theoretical framework, entropy and its production rate have been generally regarded as Lyapunov functions in order to measure the irreversibility of various physical processes. In fact, the Lyapunov function is not always unique. In the present work, a rigorous proof is given that the entransy and its dissipation rate can also serve as Lyapunov functions associated with the irreversibility of the heat conduction process without the conversion between heat and work. In addition, the variation of the entransy dissipation rate can lead to Fourier’s heat conduction law, while the entropy production rate cannot. This shows that the entransy dissipation rate, rather than the entropy production rate, is the unique action for the heat conduction process, and can be used to establish the finite element method for the approximate solution of heat conduction problems and the optimization of heat transfer processes. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
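For orientation only, two quantities that usually appear in this line of work are recalled below in their commonly used forms (assumed here, not taken from the paper): Fourier's law of heat conduction and the entransy dissipation rate over a conducting volume V with thermal conductivity k.

```latex
% Commonly used forms (assumptions for orientation, not the paper's derivation).
\begin{align*}
  \mathbf{q} &= -k\,\nabla T,
  &
  \dot{\Phi}_{\mathrm{diss}} &= \int_{V} k\,|\nabla T|^{2}\,\mathrm{d}V .
\end{align*}
```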
Open Access Article: Generalized Lagrangian Path Approach to Manifestly-Covariant Quantum Gravity Theory
Entropy 2018, 20(3), 205; https://doi.org/10.3390/e20030205
Received: 10 January 2018 / Revised: 25 February 2018 / Accepted: 8 March 2018 / Published: 19 March 2018
Cited by 1 | PDF Full-text (410 KB) | HTML Full-text | XML Full-text
Abstract
A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as GLP-representation. The result is established in the framework of the manifestly-covariant quantum gravity theory (CQG-theory) proposed recently and the related CQG-wave equation advancing in proper-time the quantum state associated with massive gravitons. Generally non-stationary analytical solutions for the CQG-wave equation with non-vanishing cosmological constant are determined in such a framework, which exhibit Gaussian-like probability densities that are non-dispersive in proper-time. As a remarkable outcome of the theory achieved by implementing these analytical solutions, the existence of an emergent gravity phenomenon is proven to hold. Accordingly, it is shown that a mean-field background space-time metric tensor can be expressed in terms of a suitable statistical average of stochastic fluctuations of the quantum gravitational field whose quantum-wave dynamics is described by GLP trajectories. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open Access Article: Output-Feedback Control for Discrete-Time Spreading Models in Complex Networks
Entropy 2018, 20(3), 204; https://doi.org/10.3390/e20030204
Received: 3 February 2018 / Revised: 6 March 2018 / Accepted: 7 March 2018 / Published: 19 March 2018
PDF Full-text (820 KB) | HTML Full-text | XML Full-text
Abstract
The problem of stabilizing the spreading process to a prescribed probability distribution over a complex network is considered, where the dynamics of the nodes in the network are given by discrete-time Markov-chain processes. Conditions for the positioning and identification of actuators and sensors are provided, and sufficient conditions for the exponential stability of the desired distribution are derived. Simulation results for a network of $N = 10^6$ nodes corroborate our theoretical findings. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open Access Feature Paper Article: Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors
Entropy 2018, 20(3), 203; https://doi.org/10.3390/e20030203
Received: 25 January 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 17 March 2018
PDF Full-text (520 KB) | HTML Full-text | XML Full-text
Abstract
Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental problem, usually difficult and under-studied. In this work, we consider two Signal to Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-$R$ tensor admitting a $Q$-order Canonical Polyadic Decomposition (CPD) with large factors of size $N_q \times R$, i.e., for $1 \le q \le Q$, where $R, N_q \to \infty$ with $R^{1/q}/N_q$ converging towards a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear $(M_1, \ldots, M_Q)$-rank with large factors of size $N_q \times M_q$, i.e., for $1 \le q \le Q$, where $N_q, M_q \to \infty$ with $M_q/N_q$ converging towards a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study since the exact derivation of the minimal Bayes' error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNR and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached for the value minimizing the error exponent, denoted by $s^\star$. In general, due to the asymmetry of the $s$-divergence, the Bhattacharyya Upper Bound (BUB), that is, the Chernoff information calculated at $s = 1/2$, cannot solve this problem effectively. As a consequence, we rely on a costly numerical optimization strategy to find $s^\star$. However, thanks to powerful random matrix theory tools, a simple analytical expression of $s^\star$ is provided with respect to the Signal to Noise Ratio (SNR) in the two schemes considered. This work shows that the BUB is the tightest bound at low SNRs; however, for higher SNRs, this latter property no longer holds. Full article
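For orientation, the standard Chernoff bound on the minimal Bayes error probability for a binary test between densities $p_0$ and $p_1$ with priors $\pi_0, \pi_1$ is recalled below; the paper's contribution lies in evaluating it for the CPD/TKD tensor models via random matrix theory, which is not reproduced here.

```latex
% Standard Chernoff upper bound; the Bhattacharyya bound is the case s = 1/2,
% and the tightest bound corresponds to the s* optimising the right-hand side.
\begin{equation*}
  P_e \;\le\; \pi_0^{\,s}\,\pi_1^{\,1-s}\,e^{-\mu(s)},
  \qquad
  \mu(s) = -\log \int p_0(x)^{s}\, p_1(x)^{1-s}\,\mathrm{d}x,
  \qquad 0 \le s \le 1 .
\end{equation*}
```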
Open Access Article: Global Reliability Sensitivity Analysis Based on Maximum Entropy and 2-Layer Polynomial Chaos Expansion
Entropy 2018, 20(3), 202; https://doi.org/10.3390/e20030202
Received: 5 February 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
PDF Full-text (3431 KB) | HTML Full-text | XML Full-text
Abstract
To quantify the contributions of uncertain input variables to a statistical parameter of a given model, e.g., reliability, global reliability sensitivity analysis (GRSA) provides an appropriate tool. However, it may be difficult to calculate global reliability sensitivity indices compared with the traditional global sensitivity indices of the model output, because statistical parameters are more difficult to obtain; Monte Carlo simulation (MCS)-related methods seem to be the only way to perform GRSA, but they are usually computationally demanding. This paper presents a new non-MCS calculation to evaluate global reliability sensitivity indices. This method proposes: (i) a 2-layer polynomial chaos expansion (PCE) framework to solve for the global reliability sensitivity indices; and (ii) an efficient method to build a surrogate model of the statistical parameter using the maximum entropy (ME) method with the moments provided by the PCE. This method has a dramatically reduced computational cost compared with traditional approaches. Two examples are introduced to demonstrate the efficiency and accuracy of the proposed method. The results also suggest that the importance ranking for the model output and that for the associated failure probability may differ, which could help improve the understanding of the given model in further optimization design. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
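For context, the maximum-entropy surrogate of a density from a finite set of moments, which the second ingredient in the abstract relies on, takes the classical exponential-polynomial form below; the 2-layer PCE machinery that supplies the moments is the paper's own and is not shown.

```latex
% Classical maximum-entropy density subject to the first k moment constraints,
% with Lagrange multipliers lambda_i fitted so that the constraints hold.
\begin{equation*}
  p_{\mathrm{ME}}(x) = \exp\!\left(-\sum_{i=0}^{k} \lambda_i\, x^{i}\right),
  \qquad
  \int x^{i}\, p_{\mathrm{ME}}(x)\,\mathrm{d}x = m_i, \quad i = 0, \dots, k .
\end{equation*}
```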
Open Access Article: Global Optimization Employing Gaussian Process-Based Bayesian Surrogates
Entropy 2018, 20(3), 201; https://doi.org/10.3390/e20030201
Received: 22 December 2017 / Revised: 8 March 2018 / Accepted: 13 March 2018 / Published: 16 March 2018
PDF Full-text (639 KB) | HTML Full-text | XML Full-text
Abstract
The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. If an output data set has been acquired for a few input parameter settings, one could be interested in taking these data as a basis for finding an extremum, and possibly an input parameter set for further computer simulations to determine it; this task belongs to the realm of global optimization. Within the Bayesian framework, we utilize Gaussian processes for the creation of a surrogate model function adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we make the assumption that using only their expectation values is sufficient. While this shortcut facilitates a quickly accessible surrogate, it is somewhat justified by the fact that we are not interested in a full representation of the model by the surrogate but in revealing its maximum. To accomplish this, the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions—expected improvement and maximum variance—in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either remains at the points found or the surrogate model does not change with further iterations. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach. Full article
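A minimal sketch of the two utility functions mentioned in the abstract, expected improvement and maximum (predictive) variance, is given below; it uses scikit-learn's maximum-likelihood GP fit as a stand-in for the hyperparameter expectation values used by the authors, and the mock data, kernel and names are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expected_improvement(X_cand, gp, y_best):
    """Expected improvement of candidate points for maximisation."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

# One iteration of the alternating scheme: fit the GP surrogate to the data
# seen so far, then propose the next point by one of the two utilities.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(8, 1))
y = np.sin(3 * X).ravel() - X.ravel() ** 2                     # mock model function
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(X, y)

X_cand = np.linspace(-2, 2, 401).reshape(-1, 1)
ei = expected_improvement(X_cand, gp, y.max())
_, std = gp.predict(X_cand, return_std=True)
x_next_ei = X_cand[np.argmax(ei)]        # exploitation-leaning proposal
x_next_var = X_cand[np.argmax(std)]      # exploration-leaning (maximum variance) proposal
```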
Open Access Feature Paper Article: Leggett-Garg Inequalities for Quantum Fluctuating Work
Entropy 2018, 20(3), 200; https://doi.org/10.3390/e20030200
Received: 1 February 2018 / Revised: 28 February 2018 / Accepted: 8 March 2018 / Published: 16 March 2018
PDF Full-text (317 KB) | HTML Full-text | XML Full-text
Abstract
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a quantum system unitarily driven in time. It is shown that these inequalities can be violated in a driven two-level system, thereby demonstrating that there exists no general macrorealistic description of quantum work. These violations are shown to emerge within the standard Two-Projective-Measurement scheme as well as for alternative definitions of fluctuating work that are based on weak measurement. Our results elucidate the influences of temporal correlations on work extraction in the quantum regime and highlight a key difference between quantum and classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)
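For reference, the simplest three-time Leggett-Garg inequality for a dichotomic observable $Q(t) = \pm 1$, written in terms of the two-time correlators $C_{ij} = \langle Q(t_i)\,Q(t_j)\rangle$, is recalled below; the work-statistics version derived in the paper uses its own correlation functions.

```latex
% Simplest (three-time) Leggett-Garg inequality.
\begin{equation*}
  K_3 \equiv C_{21} + C_{32} - C_{31} \le 1 .
\end{equation*}
```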
Open Access Article: Criticality Analysis of the Lower Ionosphere Perturbations Prior to the 2016 Kumamoto (Japan) Earthquakes as Based on VLF Electromagnetic Wave Propagation Data Observed at Multiple Stations
Entropy 2018, 20(3), 199; https://doi.org/10.3390/e20030199
Received: 17 February 2018 / Revised: 12 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
PDF Full-text (3502 KB) | HTML Full-text | XML Full-text
Abstract
The perturbations of the ionosphere which are observed prior to significant earthquakes (EQs) have long been investigated and could be considered promising for short-term EQ prediction. One way to monitor ionospheric perturbations is by studying VLF/LF electromagnetic wave propagation through the lower ionosphere between specific transmitters and receivers. For this purpose, a network of eight receivers has been deployed throughout Japan, which receive subionospheric signals from different transmitters located both in the same and in other countries. In this study we analyze, in terms of the recently proposed natural time analysis, the data recorded by the above-mentioned network prior to the catastrophic 2016 Kumamoto fault-type EQs, which were as large as the 1995 Kobe EQ. These EQs occurred within a two-day period (14 April: $M_W = 6.2$ and $M_W = 6.0$; 15 April: $M_W = 7.0$) at shallow depths (~10 km), and their epicenters were adjacent. Our results show that lower ionospheric perturbations present critical dynamics from two weeks up to two days before the main shock occurrence. The results are compared to those obtained by the conventional nighttime fluctuation method for the same dataset and exhibit consistency. Finally, the temporal evolutions of criticality in ionospheric parameters and those in the lithosphere, as seen from ULF electromagnetic emissions, are discussed in the context of lithosphere-atmosphere-ionosphere coupling. Full article
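As background on the criticality criterion, natural time analysis assigns to the k-th of N events the natural time $\chi_k = k/N$ and a normalized energy $p_k$, and monitors the variance $\kappa_1$ below; criticality is conventionally signalled when $\kappa_1$ approaches 0.070. This is the standard natural time definition, stated here as an assumption about the method used, since the paper applies it to VLF propagation data with its own definition of events.

```latex
% Natural time variance kappa_1 (standard definition), with chi_k = k/N and
% p_k = Q_k / sum_n Q_n the natural time and normalized energy of event k.
\begin{equation*}
  \kappa_1 = \sum_{k=1}^{N} p_k\,\chi_k^{2} - \Bigl(\sum_{k=1}^{N} p_k\,\chi_k\Bigr)^{2} .
\end{equation*}
```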
Open Access Article: Modulation Signal Recognition Based on Information Entropy and Ensemble Learning
Entropy 2018, 20(3), 198; https://doi.org/10.3390/e20030198
Received: 30 January 2018 / Revised: 13 March 2018 / Accepted: 14 March 2018 / Published: 16 March 2018
PDF Full-text (1772 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, signal recognition theory and algorithms based on information entropy and ensemble learning are proposed. We have extracted 16 kinds of entropy features from 9 types of modulated signals. The types of information entropy used are numerous, including Rényi entropy and energy entropy based on the S transform and the generalized S transform. We use three feature selection algorithms, namely sequential forward selection (SFS), sequential forward floating selection (SFFS) and RELIEF-F, to select the optimal feature subset from the 16 entropy features. We use five classifiers, namely k-nearest neighbor (KNN), support vector machine (SVM), AdaBoost, Gradient Boosting Decision Tree (GBDT) and eXtreme Gradient Boosting (XGBoost), to classify the original feature set and the feature subsets selected by the different feature selection algorithms. The simulation results show that the feature subsets selected by the SFS and SFFS algorithms are the best, with a 48% increase in recognition rate over the original feature set when using the KNN classifier and a 34% increase when using the SVM classifier. For the other three classifiers, the original feature set achieves the best recognition performance. The XGBoost classifier has the best recognition performance overall: the recognition rate is 97.74%, and it reaches 82% when the signal-to-noise ratio (SNR) is −10 dB. Full article
(This article belongs to the Special Issue Radar and Information Theory)
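To illustrate the kind of entropy features involved, the sketch below computes two spectrum-based features, the Shannon and Rényi entropies of a signal's normalised power spectrum; these are illustrative stand-ins, since the paper's 16 features also include energy entropies based on the S transform and the generalized S transform.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy (base 2) of a normalised distribution p, alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def spectrum_entropy_features(signal, alpha=2.0):
    """Two illustrative entropy features of a modulated signal: Shannon and
    Rényi entropy of its normalised power spectrum."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()
    shannon = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return shannon, renyi_entropy(p, alpha)
```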
Open Access Article: Low Probability of Intercept-Based Radar Waveform Design for Spectral Coexistence of Distributed Multiple-Radar and Wireless Communication Systems in Clutter
Entropy 2018, 20(3), 197; https://doi.org/10.3390/e20030197
Received: 17 December 2017 / Revised: 21 February 2018 / Accepted: 23 February 2018 / Published: 16 March 2018
PDF Full-text (1067 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the problem of low probability of intercept (LPI)-based radar waveform design for a distributed multiple-radar system (DMRS), which consists of multiple radars coexisting with a wireless communication system in the same frequency band, is studied. The primary objective of the multiple-radar system is to minimize the total transmitted energy by optimizing the transmission waveform of each radar, with the communication signals acting as interference to the radar system, while meeting a desired target detection/characterization performance. Firstly, the signal-to-clutter-plus-noise ratio (SCNR) and mutual information (MI) are used as the practical metrics to evaluate target detection and characterization performance, respectively. Then, the SCNR- and MI-based optimal radar waveform optimization methods are formulated. The resulting waveform optimization problems are solved through the well-known bisection search technique. Simulation results with various examples and scenarios demonstrate that the proposed radar waveform design schemes can evidently improve the LPI performance of the DMRS without interfering with friendly communications. Full article
(This article belongs to the Special Issue Radar and Information Theory)
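The abstract notes that the waveform optimization problems are solved by bisection search; a generic sketch of that idea, finding the smallest transmit energy that meets an SCNR (or MI) target under an assumed monotonically increasing performance model, is given below. The toy model and all names are illustrative, not the paper's DMRS signal model.

```python
def min_energy_bisection(metric, target, lo=0.0, hi=1e3, tol=1e-6):
    """Bisection search for the smallest transmit energy whose performance
    metric (e.g., SCNR or MI) reaches the target, assuming metric(energy)
    is monotonically increasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if metric(mid) >= target:
            hi = mid        # target met: try a smaller energy
        else:
            lo = mid        # target not met: more energy is needed
    return hi

# Toy, saturating SCNR-versus-energy model (purely hypothetical).
toy_scnr = lambda e: 10.0 * e / (1.0 + 0.1 * e)
e_min = min_energy_bisection(toy_scnr, target=20.0)   # ~2.5 for this toy model
```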