
Search Results (15)

Search Parameters:
Keywords = maximum extrapolation procedure

22 pages, 7714 KiB  
Article
Investigation of Material Loading on an Evolved Antecedent Hexagonal CSRR-Loaded Electrically Small Antenna
by Jake Peng Sean Ng, Yee Loon Sum, Boon Hee Soong and Paulo J. M. Monteiro
Sensors 2023, 23(20), 8624; https://doi.org/10.3390/s23208624 - 21 Oct 2023
Cited by 1 | Viewed by 1676
Abstract
Recent advances in embedded antenna and sensor technologies for 5G communications have spurred investigation of their electromagnetic performance in urban contexts and civil engineering applications. This article quantitatively investigates the effects of material loading on an evolved antecedent hexagonal complementary split-ring resonator (CSRR)-loaded antenna design through simulation and experimentation. The narrowband antenna system was first optimized in a simulation environment to achieve resonance at 3.50 GHz, featuring an impedance bandwidth of 1.57% with maximum return loss and theoretical gain of 20.0 dB and 1.80 dBi, respectively. As a proof of concept, a physical prototype was fabricated on a printed circuit board, followed by a simulation-based parametric study of antenna prototypes embedded in Ordinary Portland Cement pastes with varying weight percentages of iron(III) oxide inclusions. Simulation-derived and experimental results mutually verify a systematic downward shift in resonant frequency and corresponding variations in impedance matching induced by changes in loading reactance. Finally, an inversion modeling procedure based on perturbation theory is employed to extrapolate the relative permittivity of the loaded dielectric materials. The proposed analysis contributes to optimizing concrete-embedded 5G antenna sensor designs and establishes a foundational framework for estimating unknown dielectric parameters of cementitious composites.
(This article belongs to the Special Issue Microwave Sensors for Industrial Applications)
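
The final inversion step lends itself to a compact illustration. Below is a minimal sketch assuming a first-order cavity-perturbation relation with a single filling factor F calibrated from one reference material; the authors' actual inversion model, field integrals, and calibration data are not reproduced here, and all numbers are invented.

```python
# Minimal sketch (not the authors' model): first-order cavity-perturbation
# inversion of relative permittivity from a resonant-frequency shift.
# The filling factor F is a hypothetical calibration constant fitted from
# one reference material of known permittivity; all numbers are invented.

def filling_factor(f_empty, f_ref, eps_ref):
    """Calibrate F from (f_empty - f_ref) / f_ref = F * (eps_ref - 1)."""
    return (f_empty - f_ref) / (f_ref * (eps_ref - 1.0))

def invert_permittivity(f_empty, f_loaded, F):
    """Invert the same first-order relation for an unknown load."""
    return 1.0 + (f_empty - f_loaded) / (f_loaded * F)

# Example: a 3.50 GHz unloaded resonance shifting downward under loading.
F = filling_factor(f_empty=3.50e9, f_ref=3.31e9, eps_ref=4.5)  # assumed reference
print(invert_permittivity(f_empty=3.50e9, f_loaded=3.38e9, F=F))
```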

14 pages, 319 KiB  
Article
Numerical Method for Solving the Nonlinear Superdiffusion Equation with Functional Delay
by Vladimir Pimenov and Andrei Lekomtsev
Mathematics 2023, 11(18), 3941; https://doi.org/10.3390/math11183941 - 16 Sep 2023
Viewed by 1157
Abstract
For a space-fractional diffusion equation with a nonlinear superdiffusion coefficient and a delay effect, a grid numerical method is constructed. Interpolation and extrapolation procedures are used to account for the functional delay. At each time step, the algorithm reduces to solving a linear system whose main matrix has diagonal dominance. The convergence of the method in the maximum norm is proved. The results of numerical experiments with constant and variable delays are presented.
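
As an illustration of the scheme the abstract outlines, here is a hedged sketch of one implicit time step: shifted Grünwald-Letnikov weights approximate the space-fractional derivative, the delayed state is obtained by interpolating stored history, and each step reduces to one linear solve. Boundary handling, the nonlinear coefficient, and the interpolation order are illustrative assumptions, not the authors' exact construction.

```python
# Hedged sketch of one implicit time step: shifted Gruenwald-Letnikov (GL)
# weights for the space-fractional derivative (1 < alpha < 2), linear
# interpolation into stored history for the delayed argument (np.interp
# clamps at the ends; the paper also uses extrapolation), and one linear
# solve per step. Boundary handling and the coefficient are illustrative.
import numpy as np

def gl_weights(alpha, n):
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def frac_matrix(alpha, nx, h):
    """Shifted GL approximation: (D^alpha u)_i ~ h**-alpha * sum_k g_k u_{i-k+1}."""
    g = gl_weights(alpha, nx + 1)
    A = np.zeros((nx, nx))
    for i in range(nx):
        for k in range(i + 2):
            j = i - k + 1
            if 0 <= j < nx:
                A[i, j] += g[k]
    return A / h**alpha

def delayed_state(history, times, t_query):
    """Interpolate each spatial node's stored history at time t_query."""
    return np.array([np.interp(t_query, times, history[:, i])
                     for i in range(history.shape[1])])

def step(u, history, times, t, dt, delay, alpha, h, d_coef, f):
    A = frac_matrix(alpha, u.size, h)
    u_del = delayed_state(history, times, t - delay)
    D = d_coef(u)                        # nonlinear coefficient frozen at level n
    M = np.eye(u.size) - dt * D[:, None] * A   # diagonally dominant for alpha in (1,2)
    return np.linalg.solve(M, u + dt * f(u, u_del))
```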
15 pages, 1382 KiB  
Article
A Simple and Low-Cost Technique for 5G Conservative Human Exposure Assessment
by Fulvio Schettino, Gaetano Chirico, Ciro D’Elia, Mario Lucido, Daniele Pinchera and Marco Donald Migliore
Appl. Sci. 2023, 13(6), 3524; https://doi.org/10.3390/app13063524 - 9 Mar 2023
Cited by 2 | Viewed by 1881
Abstract
The purpose of this paper is to introduce a simple, low-cost methodology for estimating a conservative value of the maximum field level that can be radiated by a 5G base station, which is useful for human exposure assessment. The method is based on a Maximum Power Extrapolation (MPE) approach and requires the measurement of a reference quantity associated with the SS-PBCH block, such as the Primary Synchronization Signal (PSS), Secondary Synchronization Signal (SSS), Physical Broadcast CHannel (PBCH), or PBCH Demodulation Reference Signal (PBCH-DMRS). This step requires only a simple spectrum analyzer and yields the Resource Element (RE) power of a signal transmitted through broadcast beams. In the second phase, the RE power of the signal transmitted through the traffic beam is estimated using the Cumulative Distribution Function (CDF) of the antenna boost factor, obtained from the broadcast and traffic envelope radiation patterns made available by the base station vendor. The use of the CDF mitigates the problems related to the exact estimation of the direction of the measurement point with respect to the beam of the 5G antenna. The method is applied to a real 5G communication system, and the result is compared with the values given by other MPE methods proposed in the literature.
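
The CDF step can be sketched as follows, assuming the vendor patterns arrive as sampled gain arrays over direction; the percentile level and the synthetic patterns are illustrative, not the paper's data.

```python
# Hedged sketch of the CDF step: build the empirical distribution of the
# traffic-to-broadcast boost factor from vendor envelope patterns (assumed
# here to be sampled gain arrays over direction) and extrapolate the
# traffic-beam RE power at a conservative percentile. Patterns are synthetic.
import numpy as np

def extrapolated_re_power(p_re_broadcast_dBm, g_traffic_dB, g_broadcast_dB,
                          pct=95.0):
    boost_dB = g_traffic_dB - g_broadcast_dB      # per-direction boost factor
    return p_re_broadcast_dBm + np.percentile(boost_dB, pct)

az = np.linspace(-np.pi, np.pi, 360)
g_bc = 8.0 - 12.0 * (az / np.pi) ** 2             # assumed broadcast envelope (dBi)
g_tr = 17.0 - 10.0 * (az / np.pi) ** 2            # assumed traffic envelope (dBi)
print(extrapolated_re_power(-95.0, g_tr, g_bc))   # dBm per resource element
```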

20 pages, 6957 KiB  
Article
Optimising a Biogas and Photovoltaic Hybrid System for Sustainable Power Supply in Rural Areas
by Carlos Roldán-Porta, Carlos Roldán-Blay, Daniel Dasí-Crespo and Guillermo Escrivá-Escrivá
Appl. Sci. 2023, 13(4), 2155; https://doi.org/10.3390/app13042155 - 7 Feb 2023
Cited by 10 | Viewed by 3199
Abstract
This paper proposes a method for evaluating the optimal configuration of a hybrid system (biomass power plant and photovoltaic plant) connected to the electrical grid, so as to achieve minimum energy costs. The study is applied to a small rural municipality in the Valencian Community, Spain, acting as an energy community. The approach takes into account the daily variation in energy demand and the price curves for energy imported from or exported to the grid. The optimal configuration is the one with the highest internal rate of return (IRR) over a 12-year period while providing a 20% discount in electricity prices for the energy community. The analysis is extrapolated to an annual period using statistical data on sunny and cloudy days, considering 23.8% of the year as cloudy. The methodology provides a general procedure for hybridising both plants and the grid to meet the energy needs of a small rural population. In the analysed case, the optimal combination was 140 kW of rated power from the biogas generator (lower than the maximum demand of 366 kW) and 80 kW of installed power in the photovoltaic plant, resulting in an IRR of 6.13% over 12 years. Sensitivity studies for data variations are also provided.
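
A minimal sketch of the selection criterion follows: grid-search candidate capacity pairs and keep the highest 12-year IRR, with the IRR found by bisection on the net present value. The capex and annual_cash_flow callables are hypothetical placeholders for the paper's demand/price curves and sunny/cloudy-day statistics.

```python
# Minimal sketch of the selection criterion: pick the capacity pair with the
# highest 12-year IRR. IRR is found by bisection on NPV, assuming the usual
# sign pattern (one negative initial outlay, positive yearly flows). The
# capex and annual_cash_flow callables are hypothetical placeholders.

def irr(cash_flows, lo=-0.9, hi=1.0, tol=1e-6):
    npv = lambda r: sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def best_configuration(biogas_opts_kW, pv_opts_kW, capex, annual_cash_flow,
                       years=12):
    best = None
    for pb in biogas_opts_kW:
        for pv in pv_opts_kW:
            flows = [-capex(pb, pv)] + [annual_cash_flow(pb, pv)] * years
            r = irr(flows)
            if best is None or r > best[0]:
                best = (r, pb, pv)
    return best        # e.g. (0.0613, 140, 80) in the paper's analysed case
```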

14 pages, 4753 KiB  
Article
A Computational Framework for 2D Crack Growth Based on the Adaptive Finite Element Method
by Abdulnaser M. Alshoaibi and Yahya Ali Fageehi
Appl. Sci. 2023, 13(1), 284; https://doi.org/10.3390/app13010284 - 26 Dec 2022
Cited by 2 | Viewed by 2264
Abstract
As part of a damage tolerance assessment, the goal of this research is to estimate the two-dimensional crack propagation trajectory and the accompanying stress intensity factors (SIFs) using the adaptive finite element method. The adaptive finite element code was developed in Visual Fortran. The advancing-front method is used to construct the adaptive mesh, whereas the crack-tip singularity is represented by quarter-point singular elements constructed around the crack tip. To generate an optimal mesh, an adaptive refinement procedure based on an a posteriori stress-norm error estimator is used. The node-splitting strategy is used to model the fracture, and the trajectory follows successive linear extensions for every crack increment. The SIFs for each crack extension increment are calculated using the displacement extrapolation technique. The direction of crack propagation is determined using the maximum circumferential stress theory. The study is carried out for two geometries: a rectangular structure with two holes and one central crack, and a cracked plate with four holes. The results demonstrate that, depending on the position of the hole, the crack propagates toward the hole because of the unequal stresses at the crack tip caused by the hole's influence. The results are consistent with other numerical investigations of crack propagation trajectories and SIFs.
(This article belongs to the Special Issue Focus on Fatigue and Fracture of Engineering Materials)
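
The two crack-tip steps named in the abstract, SIF evaluation by displacement extrapolation and the kink angle from maximum circumferential stress, can be sketched as follows using the standard plane-strain Williams field; the node radii and openings below are invented for illustration.

```python
# Hedged sketch of the two crack-tip steps, using the standard plane-strain
# Williams field: K_I estimated by displacement extrapolation from crack-face
# opening displacements, then the kink angle from the maximum circumferential
# stress criterion. Node radii and openings below are invented.
import numpy as np

def k1_displacement_extrapolation(r, delta_uy, mu, nu):
    """Fit K_app(r) = mu*sqrt(2*pi/r)*delta_uy/(kappa+1) ~ a + b*r, return a."""
    kappa = 3.0 - 4.0 * nu                       # plane strain
    k_app = mu * np.sqrt(2.0 * np.pi / r) * delta_uy / (kappa + 1.0)
    b, a = np.polyfit(r, k_app, 1)
    return a                                     # extrapolated to r -> 0

def mcst_kink_angle(k1, k2):
    """Crack growth direction from maximum circumferential stress."""
    if k2 == 0.0:
        return 0.0
    return 2.0 * np.arctan((k1 - np.sqrt(k1**2 + 8.0 * k2**2)) / (4.0 * k2))

r = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])   # m, nodes behind the tip
duy = 2.4e-6 * np.sqrt(r / r[-1])                # assumed opening displacements (m)
K1 = k1_displacement_extrapolation(r, duy, mu=80e9, nu=0.3)
print(K1 / 1e6, np.degrees(mcst_kink_angle(K1, 0.3 * K1)))  # MPa*sqrt(m), degrees
```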

21 pages, 6260 KiB  
Article
Adaptive Finite Element Modeling of Linear Elastic Fatigue Crack Growth
by Abdulnaser M. Alshoaibi and Abdullateef H. Bashiri
Materials 2022, 15(21), 7632; https://doi.org/10.3390/ma15217632 - 30 Oct 2022
Cited by 7 | Viewed by 2990
Abstract
This paper proposes an efficient two-dimensional fatigue crack growth simulation program for linear elastic materials using an incremental crack growth procedure. The finite element code was developed in Visual Fortran. The adaptive finite element mesh was generated using the advancing-front method, and stress analysis for each increment was carried out with the adaptive mesh finite element technique. The equivalent stress intensity factor, the most essential parameter under mixed-mode loading, was used as the onset criterion for crack growth. The node splitting and relaxation method advances the crack once the failure mechanism and crack direction have been determined. The displacement extrapolation technique (DET) was used to calculate stress intensity factors (SIFs) at each crack extension increment. These SIFs were then analyzed using the maximum circumferential stress theory (MCST) to predict the crack propagation trajectory, and the fatigue life cycles were computed with the Paris law model. Finally, the performance and capability of the developed program are demonstrated in application examples.
(This article belongs to the Special Issue Finite Element Analysis and Simulation of Materials)
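
The life-prediction step can be illustrated with a direct integration of Paris' law; a constant geometry factor Y stands in for the FEM-updated SIFs of the paper, and the material constants below are only typical placeholders.

```python
# Hedged sketch of the life-prediction step: integrate Paris' law
# da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a). A constant
# geometry factor Y stands in for the FEM-updated SIFs of the paper;
# C and m below are typical placeholder values, with dK in MPa*sqrt(m).
import numpy as np

def fatigue_life(a0, ac, dsigma, C, m, Y=1.0, n=10000):
    a = np.linspace(a0, ac, n)                    # crack length grid (m)
    dK = Y * dsigma * np.sqrt(np.pi * a)          # stress intensity range
    dNda = 1.0 / (C * dK**m)                      # cycles per unit growth
    return float(np.sum(0.5 * (dNda[1:] + dNda[:-1]) * np.diff(a)))

# Example: grow from 2 mm to 25 mm under a 120 MPa stress range.
print(fatigue_life(a0=2e-3, ac=25e-3, dsigma=120.0, C=3e-12, m=3.0))
```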

14 pages, 5288 KiB  
Technical Note
Clustering of Handheld Thermal Camera Images in Volcanic Areas and Temperature Statistics
by Francesca Cirillo, Gala Avvisati, Pasquale Belviso, Enrica Marotta, Rosario Peluso and Romano Antonio Pescione
Remote Sens. 2022, 14(15), 3789; https://doi.org/10.3390/rs14153789 - 6 Aug 2022
Cited by 4 | Viewed by 2130
Abstract
Thermal camera use is becoming ever more widespread in volcanic and environmental research and monitoring activities. Depending on the scope of an investigation and on the type of thermal camera used, different software for thermal infrared (IR) image analysis is employed. The Osservatorio Vesuviano Sezione in Napoli of the Istituto Nazionale di Geofisica e Vulcanologia (INGV-OV) processes the images acquired during thermal monitoring activities in the Neapolitan areas (Vesuvio, Ischia and Campi Flegrei) with various FLIR software packages that return, for each image or for each selected area within the image, a series of parameters (maximum temperature, average temperature, standard deviation, etc.). An operator selects the area of interest and later “manually” inserts the relevant parameters into Excel sheets to generate graphs. Such a tedious, time- and resource-consuming procedure motivated the implementation of software able to automatically analyze sets of thermal images taken with a handheld thermal camera without any manual action. This paper describes the method and the software implemented to “automate” and refine the extrapolation process and the analysis of the relevant information. The method clusters thermal images by applying the K-MEANS and DBSCAN techniques. After clustering a series of images, the software displays the statistics needed to highlight possible fluctuations in temperature values. The software, “StaTistical Analysis clusteRed ThErmal Data” (STARTED), is already available. Although it was developed mostly to support monitoring of the volcanoes in Campania, it is quite versatile and can be used for any activity that involves thermal data analysis. In this paper, we describe the workflow and the dataset used to develop the software, as well as the first result obtained from it.
(This article belongs to the Special Issue Remote Sensing of Geothermal and Volcanic Environments)
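
A hedged sketch of the clustering stage follows: treat one radiometric frame as (x, y, T) samples, cluster with K-MEANS or DBSCAN, and report per-cluster temperature statistics. Parsing a specific FLIR file format is out of scope, so the frame below is synthetic and the parameter values are illustrative.

```python
# Hedged sketch of the clustering stage: treat one radiometric frame as
# (x, y, T) samples, cluster with K-MEANS or DBSCAN, report per-cluster
# temperature statistics. FLIR file parsing is out of scope, so the frame
# below is synthetic; eps/k values are illustrative.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

def cluster_stats(frame, method="kmeans", **kw):
    ys, xs = np.indices(frame.shape)
    X = np.column_stack([xs.ravel(), ys.ravel(), frame.ravel()])
    X = (X - X.mean(0)) / X.std(0)               # balance space vs temperature
    if method == "kmeans":
        model = KMeans(n_clusters=kw.get("k", 3), n_init=10, random_state=0)
    else:
        model = DBSCAN(eps=kw.get("eps", 0.3), min_samples=kw.get("min_samples", 10))
    labels = model.fit_predict(X)
    t_all = frame.ravel()
    return {int(lab): dict(max=float(t_all[labels == lab].max()),
                           mean=float(t_all[labels == lab].mean()),
                           std=float(t_all[labels == lab].std()),
                           n_px=int(np.sum(labels == lab)))
            for lab in np.unique(labels)}

frame = 20.0 + 2.0 * np.random.rand(60, 80)      # synthetic background (deg C)
frame[10:20, 30:45] += 35.0                      # synthetic hot anomaly
print(cluster_stats(frame, method="kmeans", k=2))
```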

16 pages, 1261 KiB  
Article
Financial Stability Control for Business Sustainability: A Case Study from Food Production
by Tomas Macak
Mathematics 2022, 10(3), 292; https://doi.org/10.3390/math10030292 - 18 Jan 2022
Viewed by 2714
Abstract
Conventional financial management methods, based on extrapolation approaches to financial analysis, often reach their limits due to violations of stationarity in the controlled financial variables, caused, for example, by the interventions in the economy and social life necessary to manage the COVID-19 pandemic. We have therefore created a procedure for controlling financial quantities that respects the non-stationarity of the controlled quantity, using the maximum control deviation covering the confidence interval of a random variable or random vector. For this interval, we determined algebraic criteria for the transfer functions using the Laplace transform, and we established, with a deductive proof, a theorem on the values of the stable roots of the characteristic equation. This theorem is directly usable for determining the stability of the management of selected financial variables. For the practical application, we used the consistency of the stable roots of the characteristic equation with the Stodola and Hurwitz stability conditions. We demonstrate the procedure on selected quantities of financial management in food production. In conclusion, we propose a control mechanism for the convergence of the regulatory deviation using a combination of proportional and integration schemes, and we determine the diversification of action interventions (into development, production, and marketing) using a factorial design.
(This article belongs to the Special Issue Quantitative Analysis and DEA Modeling in Applied Economics)
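
The stability check combining the Stodola and Hurwitz conditions is easy to state in code; the sketch below assumes the characteristic polynomial is given by its coefficients, highest degree first, and is generic rather than specific to the paper's transfer functions.

```python
# Generic sketch of the stability check: the Stodola necessary condition
# (all characteristic-polynomial coefficients of one sign) plus the Hurwitz
# criterion (all leading principal minors of the Hurwitz matrix positive).
# Coefficients are ordered highest degree first: a0*s**n + ... + an.
import numpy as np

def hurwitz_matrix(c):
    n = len(c) - 1
    a = lambda k: c[k] if 0 <= k <= n else 0.0
    return np.array([[a(2 * j - i) for j in range(1, n + 1)]
                     for i in range(1, n + 1)])

def is_stable(coeffs):
    c = np.asarray(coeffs, float)
    if not (np.all(c > 0) or np.all(c < 0)):     # Stodola: necessary only
        return False
    H = hurwitz_matrix(np.abs(c) / abs(c[0]))    # sign/scale normalisation
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, len(c)))

# Example: s^3 + 4s^2 + 5s + 2 = (s+1)^2 (s+2) -> stable.
print(is_stable([1.0, 4.0, 5.0, 2.0]))
```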

25 pages, 7331 KiB  
Article
Design Floods Considering the Epistemic Uncertainty
by Radu Drobot, Aurelian Florentin Draghia, Daniel Ciuiu and Romică Trandafir
Water 2021, 13(11), 1601; https://doi.org/10.3390/w13111601 - 6 Jun 2021
Cited by 6 | Viewed by 3958
Abstract
The Design Flood (DF) concept is an essential tool in designing hydraulic works, defining reservoir operation programs, and identifying reliable flood hazard maps. The purpose of this paper is to present a methodology for deriving a Design Flood hydrograph that considers the epistemic uncertainty. Several appropriately identified statistical distributions approximate the frequent values of maximum discharges or flood volumes acceptably well, yet display a significant spread for medium/low Probabilities of Exceedance (PE). This scattering, a consequence of epistemic uncertainty, defines an area of uncertainty for both recorded data and extrapolated values. By considering the upper and lower values of the uncertainty intervals as limits for maximum discharges and flood volumes, and by further combining them compatibly, a set of DFs results for each PE as completely defined hydrographs with different shapes. The procedure proposed herein defines both uni-modal and multi-modal DFs. Such DFs subsequently help water managers examine and establish tailored approaches for the variety of input hydrographs that might typically be generated in river basins.
(This article belongs to the Section Hydrology)
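
The uncertainty-band construction can be sketched as follows: fit several candidate distributions to the annual maxima, evaluate each at the target PE, and report the spread as the epistemic interval. The distribution set and the synthetic sample are assumptions; the paper applies the same idea to both maximum discharges and flood volumes.

```python
# Hedged sketch of the uncertainty band: fit several candidate distributions
# to annual maxima, evaluate each at the target probability of exceedance
# (PE), and report the spread as the epistemic interval. The distribution
# set and the synthetic sample are assumptions, not the authors' data.
import numpy as np
from scipy import stats

def design_value_band(sample, pe,
                      dists=(stats.gumbel_r, stats.genextreme,
                             stats.pearson3, stats.lognorm)):
    q = 1.0 - pe                                  # non-exceedance probability
    vals = [d.ppf(q, *d.fit(sample)) for d in dists]
    return min(vals), max(vals)

rng = np.random.default_rng(1)
q_max = stats.gumbel_r.rvs(loc=800, scale=250, size=60, random_state=rng)
print(design_value_band(q_max, pe=0.01))          # 100-year discharge band (m3/s)
```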

15 pages, 4375 KiB  
Article
Experimental Procedure for Fifth Generation (5G) Electromagnetic Field (EMF) Measurement and Maximum Power Extrapolation for Human Exposure Assessment
by Daniele Franci, Stefano Coltellacci, Enrico Grillo, Settimio Pavoncello, Tommaso Aureli, Rossana Cintoli and Marco Donald Migliore
Environments 2020, 7(3), 22; https://doi.org/10.3390/environments7030022 - 17 Mar 2020
Cited by 39 | Viewed by 9517
Abstract
The fifth generation (5G) technology has been conceived to cover multiple usage scenarios, from enhanced mobile broadband to ultra-reliable low-latency communications (URLLC) and massive machine-type communications. However, the implementation of this new technology is causing increasing concern over the possible health and safety impact of exposure to the electromagnetic fields radiated by 5G systems, making the development of accurate electromagnetic field (EMF) measurement techniques and protocols imperative. Measurement techniques used to assess compliance with EMF exposure limits are subject to international regulation. The basic principle of the assessment is to measure the power received from a constant radio frequency source, typically a pilot signal, and to apply a proper extrapolation factor. This approach is standardized for 2G, 3G, and 4G technologies, but is still under investigation for 5G. Indeed, the use of flexible numerologies and of advanced Time Division Duplexing (TDD) and spatial multiplexing techniques, such as beam sweeping and massive Multiple Input Multiple Output (MIMO), requires the definition of new procedures and protocols for EMF measurement of 5G signals. In this paper, a procedure for accurately estimating the instantaneous maximum power received from a 5G source is proposed. The extrapolation technique introduces factors that account for the effects of TDD and of beam sweeping on the measured 5G signal level. Preliminary experimental investigations, based on code-domain measurement of appropriate broadcast channels and carried out in a controlled environment, are reported, confirming the effectiveness of the proposed approach.
(This article belongs to the Special Issue Physical Agents: Measurement Methods, Modelling and Mitigations)
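
The extrapolation family the abstract describes can be illustrated schematically: scale a measured per-resource-element field to a fully loaded carrier, apply a beam factor for the traffic/broadcast gain ratio, and a TDD factor for time averaging. Factor names follow the general MPE literature; the numbers are hypothetical, not the paper's measurements.

```python
# Schematic sketch of the extrapolation family: scale the measured
# per-resource-element field to a fully loaded carrier, apply a beam factor
# R (traffic-to-broadcast gain ratio) for the instantaneous maximum, and a
# TDD duty-cycle factor F_TDC for the time average. Field scales with the
# square root of power. All numbers are hypothetical.

def mpe_field_levels(e_re_meas, n_re_total, r_beam, f_tdc):
    """Return (instantaneous max, time-averaged) field levels in V/m."""
    e_instant = e_re_meas * (n_re_total * r_beam) ** 0.5
    return e_instant, e_instant * f_tdc ** 0.5

# 273 PRB x 12 subcarriers = 3276 REs per symbol as the scaling basis
# (assumed), R = 4 (6 dB), 75% downlink duty cycle:
print(mpe_field_levels(e_re_meas=0.02, n_re_total=3276, r_beam=4.0, f_tdc=0.75))
```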

22 pages, 3929 KiB  
Article
An Experimental Investigation on the Impact of Duplexing and Beamforming Techniques in Field Measurements of 5G Signals
by Daniele Franci, Stefano Coltellacci, Enrico Grillo, Settimio Pavoncello, Tommaso Aureli, Rossana Cintoli and Marco Donald Migliore
Electronics 2020, 9(2), 223; https://doi.org/10.3390/electronics9020223 - 29 Jan 2020
Cited by 28 | Viewed by 6902
Abstract
The fifth generation mobile network introduces dramatic improvements over the previous technologies. Features such as variable numerology, bandwidth parts, massive Multiple Input Multiple Output (MIMO) and Time Division Duplex (TDD) will extend the capabilities of 5G wireless systems and, at the same time, will influence the measurement techniques used to assess compliance with general public electromagnetic field exposure limits. In this study, a heterogeneous set of 5G signals is investigated with the aim of establishing an effective measurement technique suitable for the new technology. Following an experimental approach based on both modulation and zero-span analysis, some important characteristics of the 5G system are highlighted and extensively discussed, and experimental procedures are presented for estimating the factors associated with TDD (the FTDC factor) and with beam sweeping (the R factor) that enter the extrapolation formulas. The results of this study represent a starting point for future investigations on effective methods to estimate both the instantaneous maximum power and the total power transmitted during a 5G radio frame.
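
One of the two factors lends itself to a compact sketch: estimating the TDD duty cycle (the FTDC factor) from a zero-span power-versus-time trace by thresholding midway between the on and off levels over whole radio frames. The threshold rule and the synthetic trace are assumptions.

```python
# Hedged sketch: estimate the TDD duty cycle (the FTDC factor) from a
# zero-span power-vs-time trace by thresholding midway between the on and
# off levels, over an integer number of radio frames. The threshold rule
# and the synthetic trace are assumptions.
import numpy as np

def estimate_f_tdc(trace_dBm, frame_len):
    n = (len(trace_dBm) // frame_len) * frame_len   # whole frames only
    p = np.asarray(trace_dBm[:n])
    thr = 0.5 * (p.max() + p.min())                 # assumed on/off midpoint
    return float(np.mean(p > thr))

# Synthetic 10 ms frame sampled every 10 us, 7.5 ms of downlink-on time:
frame = np.r_[np.full(750, -60.0), np.full(250, -95.0)]
trace = np.tile(frame, 4) + np.random.normal(0.0, 0.5, 4000)
print(estimate_f_tdc(trace, frame_len=1000))        # ~0.75
```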

20 pages, 1069 KiB  
Article
Determination of Fracture Properties of Concrete Using Size and Boundary Effect Models
by Xiaofeng Gao, Chunfeng Liu, Yaosheng Tan, Ning Yang, Yu Qiao, Yu Hu, Qingbin Li, Georg Koval and Cyrille Chazallon
Appl. Sci. 2019, 9(7), 1337; https://doi.org/10.3390/app9071337 - 29 Mar 2019
Cited by 6 | Viewed by 3291
Abstract
Tensile strength and fracture toughness are two essential material parameters for the study of concrete fracture. The experimental procedures to measure these two parameters can be complicated because of their dependence on specimen size or test method. Alternatively, based on fracture test results alone, size and boundary effect models can determine both parameters simultaneously. In this study, different versions of the boundary effect models developed by Hu et al. are summarized, and a modified Hu-Guan boundary effect model with a more appropriate equivalent crack length definition is proposed. The proposed model correctly combines the contributions of material strength and linear elastic fracture mechanics to the failure of concrete with any maximum aggregate size. Another size and boundary effect model, developed from the local energy concept, is also introduced, and its capability to predict the fracture parameters from fracture tests on wedge-splitting and compact tension specimens is validated for the first time. In addition, the classical Bažant Type 2 size effect law is transformed into its boundary effect form with the same equivalent crack length as the Koval-Gao size and boundary effect model. This improvement extends the applicability of the model to inferring the material parameters from test results of different types of specimens, including geometrically similar specimens with constant crack-length-to-height ratios and specimens with different initial crack-length-to-height ratios. Test results from different types of specimens are adopted to verify the applicability of the different size and boundary effect models for determining the fracture toughness and tensile strength of concrete. The quality of the fracture parameters extrapolated by the different models is compared and discussed in detail, and recommendations for predicting the fracture parameters of dam concrete are proposed.
(This article belongs to the Special Issue Fatigue and Fracture of Non-metallic Materials and Structures)
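
The Type 2 size effect law admits a short fitting sketch via its classical linearisation; converting the fitted (B·ft, D0) pair into K_Ic and ft requires the specimen geometry functions, which are omitted here, and the test data below are invented.

```python
# Sketch of fitting Bazant's Type 2 size effect law
# sigma_N = B*ft / sqrt(1 + D/D0) through the classical linearisation
# 1/sigma_N^2 = (1/(B*ft)^2) * (1 + D/D0). Recovering K_Ic and ft from
# (B*ft, D0) needs the specimen geometry functions, omitted here; the
# test data below are invented.
import numpy as np

def fit_type2_sel(D, sigma_N):
    y = 1.0 / sigma_N**2                    # linear in specimen size D
    slope, intercept = np.polyfit(D, y, 1)
    Bft = 1.0 / np.sqrt(intercept)          # strength asymptote (small D)
    D0 = intercept / slope                  # transitional size
    return Bft, D0

D = np.array([0.05, 0.10, 0.20, 0.40])      # specimen sizes (m)
sigma_N = np.array([4.1, 3.6, 3.0, 2.4])    # nominal strengths (MPa)
print(fit_type2_sel(D, sigma_N))
```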

15 pages, 1361 KiB  
Article
Using Probable Maximum Precipitation to Bound the Disaggregation of Rainfall
by Neil McIntyre and András Bárdossy
Water 2017, 9(7), 496; https://doi.org/10.3390/w9070496 - 7 Jul 2017
Cited by 2 | Viewed by 4157
Abstract
The Multiplicative Discrete Random Cascade (MDRC) class of models temporally disaggregates rainfall volumes by multiplying them by random weights, repeated through multiple disaggregation levels. Model development involves identifying the probability density functions from which the weights are sampled. The parameters of these probability density functions are known to depend on the rainfall volume. This paper characterises that volume dependency over the scarcely observed extreme ranges of rainfall, introducing the concept of volume-bounded MDRC models. Probable maximum precipitation (PMP) estimates are used to define theoretically based points and asymptotes to which the observation-based estimates of the MDRC model parameters are extrapolated. Alternative models are tested using a case study of rainfall data from Brisbane, Australia, covering the period 1908 to 2015. The results show that moving from a baseline model with constant parameters to one incorporating the volume dependency of the parameters is essential for acceptable performance in terms of the frequency and magnitude of modelled extremes. As well as providing better parameter estimates at each disaggregation level, the volume dependency provides an in-built bias correction when moving from one level to the next. A further, relatively small performance gain is obtained by extrapolating the observed dependency to the theoretically based bounds. The volume dependency of the parameters is found to be reasonably time-scalable, providing opportunity for advances in the generalisation of MDRC models. Sensitivity analysis shows that the subjectivities and uncertainties in the modelling procedure have mixed effects on performance. A principal uncertainty, to which the results are sensitive, is the PMP estimate; in applications of the bounded approach, the PMP should therefore ideally be described by a probability distribution function.
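
One cascade level with a PMP bound can be sketched as follows; the beta parameterisation of the volume dependency and the rejection rule are illustrative stand-ins for the paper's fitted, theoretically bounded models.

```python
# Hedged sketch of one volume-bounded cascade level: each interval volume
# splits as x -> (w*x, (1-w)*x) with w drawn from a volume-dependent
# distribution, and draws are rejected while either child would exceed the
# PMP for the child timescale. The beta parameterisation is an illustrative
# stand-in for the paper's fitted, theoretically bounded models.
import numpy as np

def split_level(volumes, pmp_child, rng, a0=2.0, slope=3.0):
    out = []
    for v in volumes:
        a = a0 + slope * v / pmp_child      # larger volume -> w nearer 0.5
        while True:
            w = rng.beta(a, a)
            if w * v <= pmp_child and (1.0 - w) * v <= pmp_child:
                out.extend([w * v, (1.0 - w) * v])
                break
    return np.array(out)

rng = np.random.default_rng(0)
daily = np.array([12.0, 55.0, 130.0])       # daily totals (mm)
print(split_level(daily, pmp_child=110.0, rng=rng))   # half-day totals
```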

26 pages, 31882 KiB  
Article
Using Tree Detection Algorithms to Predict Stand Sapwood Area, Basal Area and Stocking Density in Eucalyptus regnans Forest
by Dominik Jaskierniak, George Kuczera, Richard Benyon and Luke Wallace
Remote Sens. 2015, 7(6), 7298-7323; https://doi.org/10.3390/rs70607298 - 3 Jun 2015
Cited by 16 | Viewed by 6596
Abstract
Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m−2, we applied a local maximum filtering (LMF) method and normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots with resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR-derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods respectively had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut-derived ITD segments, observed versus predicted stand BAHa had R2 ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one-ha resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut-derived LiDAR indices (R2 = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships based on BAHa (R2 = 0.50) or on a sapwood perimeter index, defined as (BAHa·SDen)1/2 (R2 = 0.48).
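
The LMF step is compact enough to sketch on a canopy height model: a pixel is a treetop candidate if it equals the maximum of its moving window and clears a height threshold. Window size, threshold, and the synthetic CHM are assumptions; the NCut segmentation stage is not reproduced.

```python
# Hedged sketch of the LMF step on a canopy height model (CHM): a pixel is
# a treetop candidate if it equals the maximum of its moving window and
# clears a minimum height. Window, threshold, and the synthetic CHM are
# assumptions; the NCut segmentation stage is not reproduced.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window=5, min_height=10.0):
    tops = (maximum_filter(chm, size=window) == chm) & (chm > min_height)
    rows, cols = np.nonzero(tops)
    return np.column_stack([rows, cols, chm[tops]])   # (row, col, height)

# Synthetic 1 m resolution CHM (50 m x 50 m) with two crowns:
yy, xx = np.mgrid[0:50, 0:50]
chm = (60.0 * np.exp(-((xx - 15)**2 + (yy - 20)**2) / 40.0)
       + 55.0 * np.exp(-((xx - 35)**2 + (yy - 30)**2) / 30.0))
tops = detect_treetops(chm, window=7, min_height=20.0)
print(len(tops), "treetops; stems/ha =", len(tops) / 2500.0 * 1e4)
```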

24 pages, 453 KiB  
Article
Empirical Information Metrics for Prediction Power and Experiment Planning
by Christopher Lee
Information 2011, 2(1), 17-40; https://doi.org/10.3390/info2010017 - 11 Jan 2011
Cited by 1 | Viewed by 8984
Abstract
In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: information theory assumes the joint distribution of the variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: calculation of the metric must be objective or model-free, unbiased, convergent, probabilistically bounded, and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion, and the Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which do meet these requirements: Ie, the empirical information, a measure of average prediction power; Ib, the overfitting bias information, which measures selection bias in the modeling procedure; Ip, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and Im, the model information, which measures the model's extrapolation prediction power. Finally, we show that Ip + Ie, Ip + Im, and Ie − Im are fixed constants for a given observed dataset (i.e., the prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.
(This article belongs to the Special Issue What Is Information?)
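
As a loose illustration only: prediction power measured by sampling can be read as a mean log-likelihood ratio against a baseline, and the in-sample versus out-of-sample gap then behaves like the selection bias Ib is described as measuring. These are not the paper's estimators for Ie, Ib, Ip, or Im; every distribution below is assumed.

```python
# Loose illustration only, NOT the paper's estimators: average prediction
# power read as a mean log-likelihood ratio (bits/observation) against an
# assumed baseline, estimated by sampling. The in-sample vs out-of-sample
# gap then behaves like the selection bias that Ib is described as
# measuring. All distributions here are assumptions.
import numpy as np
from scipy import stats

def prediction_power(sample, model_logpdf, null_logpdf):
    return float(np.mean((model_logpdf(sample) - null_logpdf(sample))
                         / np.log(2.0)))

rng = np.random.default_rng(0)
train = rng.normal(2.0, 1.0, 50)
test = rng.normal(2.0, 1.0, 5000)
mu, sd = train.mean(), train.std(ddof=1)            # model fitted on train
model = lambda x: stats.norm.logpdf(x, mu, sd)
null = lambda x: stats.norm.logpdf(x, 0.0, 3.0)     # assumed broad baseline
ie_in = prediction_power(train, model, null)        # optimistic, in-sample
ie_out = prediction_power(test, model, null)        # convergent, fresh data
print(ie_in, ie_out, ie_in - ie_out)                # gap ~ selection bias
```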
