Search Results (24)

Search Parameters:
Keywords = energy spread compensation

12 pages, 784 KB  
Article
Towards the in Silico Design of Diets: A Method for Reference Diet Templates Based on Objective Data and Institution Guidelines
by Paolo Tessari and Anna Lante
Appl. Sci. 2026, 16(1), 257; https://doi.org/10.3390/app16010257 - 26 Dec 2025
Viewed by 366
Abstract
Background: In silico diet design may represent a flexible approach in diet planning and adaptation to a variety of conditions, and it may take advantage of standard diet(s) as reference template(s). The concept of standard diet(s) is, however, quite vague and poorly defined. Objective: The aim of this work was to develop templates of omnivorous (OMN), lacto-ovo-vegetarian (LOV), and vegan (VEG) standard diets, based on data produced in European countries and the USA in 1998–2024, and adapted to an adult subject requiring ≈2200 kcal/day. Design: Online databases were used to identify papers containing experimentally determined (EXP) data of daily food frequencies, or reporting dietary recommendations (REC) from (inter)national agencies or specific studies. Only sources reporting quantitative food data (as g/day) in OMN, LOV, and VEG diets were accepted. Results: Out of >200 publications initially identified, 24 EXP and 20 REC sources complied with the selection criteria. By combining the EXP and REC data within each diet type, total meat intake in the OMN diet was 99 ± 36 g/day. Total dairy food in LOV diets (247 ± 107 g/day) tended to be lower (by ≈15%, NS) than in OMN diets (272 ± 100). In VEG diets, total vegetal foods were ≈33% greater than in LOV (p < 0.01), and ≈1-fold greater than in OMN ones (p < 0.00001). Total cereal foods were similar in OMN (272 ± 122) and LOV (264 ± 122) diets, but tended to be ≈20–25% greater in VEG diets (to 326 ± 103, NS). Potato and other starchy foods were not different among the three diets. Legumes and pulses were modestly but insignificantly greater in LOV (55 ± 25) and VEG diets (112 ± 137) than in OMN ones (31 ± 24). Soy products were greater in VEG than in LOV diets. The “nuts, seeds, and spreads” food group in VEG diets was ≈3-fold greater than in OMN (p < 0.0005), and ≈90% greater than in LOV diets (p < 0.002). Fruit intake in VEG diets was ≈14% (p = NS) and ≈60% (p < 0.005) greater than in LOV and OMN diets, respectively. Finally, the “protein and energy-rich vegetal alternatives” food group in LOV and VEG diets was ≈5- to ≈6-fold greater than in the OMN diet (p ≤ 0.001). Conclusions: The exclusion of meat, fish, and eggs in LOV diets is not compensated by increased dairy foods but rather by more total vegetal foods and protein-rich vegetal alternatives. VEG diets replace animal-derived proteins mainly with nuts, seeds, and spreads; soy products; and protein-rich vegetal alternatives. On the basis of these data, templates to design “standard” OMN, LOV, and VEG diets are proposed. Full article
(This article belongs to the Section Food Science and Technology)

20 pages, 16838 KB  
Article
Multi-Criteria Visual Quality Control Algorithm for Selected Technological Processes Designed for Budget IIoT Edge Devices
by Piotr Lech
Electronics 2025, 14(16), 3204; https://doi.org/10.3390/electronics14163204 - 12 Aug 2025
Cited by 2 | Viewed by 998
Abstract
This paper presents an innovative multi-criteria visual quality control algorithm designed for deployment on cost-effective Edge devices within the Industrial Internet of Things environment. Traditional industrial vision systems are typically associated with high acquisition, implementation, and maintenance costs. The proposed solution addresses the need to reduce these costs while maintaining high defect detection efficiency. The developed algorithm largely eliminates the need for time- and energy-intensive neural network training or retraining, though these capabilities remain optional. Consequently, the reliance on human labor, particularly for tasks such as manual data labeling, has been significantly reduced. The algorithm is optimized to run on low-power computing units typical of budget industrial computers, making it a viable alternative to server- or cloud-based solutions. The system supports flexible integration with existing industrial automation infrastructure, but it can also be deployed at manual workstations. The algorithm’s primary application is to assess the spread quality of thick liquid mold filling; however, its effectiveness has also been demonstrated for 3D printing processes. The proposed hybrid algorithm combines three approaches: (1) the classical SSIM image quality metric, (2) depth image measurement using Intel MiDaS technology combined with analysis of depth map visualizations and histogram analysis, and (3) feature extraction using selected artificial intelligence models based on the OpenCLIP framework and publicly available pretrained models. This combination allows the individual methods to compensate for each other’s limitations, resulting in improved defect detection performance. The use of hybrid metrics in defective sample selection has been shown to yield superior algorithmic performance compared to the application of individual methods independently. 
Experimental tests confirmed the high effectiveness and practical applicability of the proposed solution, preserving low hardware requirements. Full article
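The abstract above describes fusing three defect-detection criteria (SSIM, depth-map analysis, and OpenCLIP feature similarity) so that each compensates for the others' weaknesses. A minimal sketch of such a score fusion follows; the function name, equal default weights, and decision threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_defect_scores(ssim_score, depth_score, feature_score,
                       weights=(1.0, 1.0, 1.0), threshold=0.5):
    """Combine three normalized similarity scores (1.0 = matches the
    defect-free reference) into one decision. Weights are hypothetical;
    a sample whose fused score falls below the threshold is flagged."""
    scores = np.array([ssim_score, depth_score, feature_score], dtype=float)
    w = np.array(weights, dtype=float)
    fused = float(np.dot(w, scores) / w.sum())  # weighted mean in [0, 1]
    return fused, fused < threshold             # (score, is_defective)
```

A sample scoring well on two criteria but poorly on one is still accepted here, which is the sense in which the methods "compensate for each other's limitations" in this sketch.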

22 pages, 1240 KB  
Article
Angle Estimation for Range-Spread Targets Based on Scatterer Energy Focusing
by Zekai Huang, Peiwu Jiang, Maozhong Fu and Zhenmiao Deng
Sensors 2025, 25(6), 1723; https://doi.org/10.3390/s25061723 - 11 Mar 2025
Cited by 1 | Viewed by 1356
Abstract
Wideband radar is becoming increasingly significant in modern radar systems. However, traditional monopulse angle estimation techniques are not suitable for wideband targets exhibiting range extension effects. To address this, we explore the angle estimation problem for wideband Linear Frequency-Modulated (LFM) signals and propose a new monopulse angle estimation algorithm tailored for range-spread targets. In this paper, the phase of the highest energy scatterer is used as the reference to compensate for the phases of other scatterers. The compensated scatterers are then accumulated for energy focusing. Finally, the angle of the energy-focused signal is estimated using the sum-and-difference amplitude comparison method. The proposed method can effectively focus the scatterers’ energy. Moreover, since the echo of a range-spread target can be regarded as the sum of sinusoids with different frequencies, scatterer energy focusing can effectively improve the performance of the detector. To further demonstrate the practicality of the proposed angle estimation method, it is combined with the detector to evaluate its performance. Simulation results comparing the proposed method with other approaches validate its effectiveness and demonstrate that it achieves a lower signal-to-noise ratio (SNR) threshold and higher angular accuracy. Through the proposed method, tracking and imaging can be achieved entirely within the wideband radar framework. The proposed method can also be extended to other sensor systems, advancing the development of sensor technologies. Full article
(This article belongs to the Section Radar Sensors)
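The core step in this abstract, using the phase of the highest-energy scatterer as the reference so that all scatterer returns accumulate coherently, can be sketched on a toy vector of complex scatterer amplitudes. This is an illustration under assumed inputs, not the authors' algorithm; after compensation, the magnitude of the sum equals the sum of the magnitudes.

```python
import numpy as np

def focus_scatterers(scatterers):
    """Align every scatterer's phase with that of the highest-energy
    scatterer, then coherently accumulate. Input: complex returns of a
    range-spread target (toy model, one value per scatterer)."""
    s = np.asarray(scatterers, dtype=complex)
    ref_phase = np.angle(s[np.argmax(np.abs(s))])   # strongest scatterer
    compensated = np.abs(s) * np.exp(1j * ref_phase)  # phases aligned
    return compensated.sum()                          # energy-focused signal
```

The focused signal would then feed the sum-and-difference amplitude comparison for the angle estimate.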

23 pages, 1814 KB  
Article
Doppler-Spread Space Target Detection Based on Overlapping Group Shrinkage and Order Statistics
by Linsheng Bu, Tuo Fu, Defeng Chen, Huawei Cao, Shuo Zhang and Jialiang Han
Remote Sens. 2024, 16(18), 3413; https://doi.org/10.3390/rs16183413 - 13 Sep 2024
Cited by 2 | Viewed by 1955
Abstract
The Doppler-spread problem is commonly encountered in space target observation scenarios using ground-based radar when prolonged coherent integration techniques are utilized. Even when the translational motion is accurately compensated, the phase resulting from changes in the target observation attitude (TOA) still leads to extension of the target’s echo energy across multiple Doppler cells. In particular, as the TOA change undergoes multiple cycles within a coherent processing interval (CPI), the Doppler spectrum spreads into equidistant sparse line spectra, posing a substantial challenge for target detection. Aiming to address such problems, we propose a generalized likelihood ratio test based on overlapping group shrinkage denoising and order statistics (OGSos-GLRT) in this study. First, the Doppler domain signal is denoised according to its equidistant sparse characteristics, allowing for the recovery of Doppler cells where line spectra may be situated. Then, several of the largest Doppler cells are integrated into the GLRT for detection. An analytical expression for the false alarm probability of the proposed detector is also derived. Additionally, a modified OGSos-GLRT method is proposed to make decisions based on an increasing estimated number of line spectra (ENLS), thus increasing the robustness of OGSos-GLRT when the ENLS mismatches the actual value. Finally, Monte Carlo simulations confirm the effectiveness of the proposed detector, even at low signal-to-noise ratios (SNRs). Full article
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
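The order-statistics step described above, integrating several of the largest Doppler cells into a detection statistic, can be illustrated in a toy form (omitting the overlapping group shrinkage denoising stage). The parameter names and fixed threshold are assumptions for illustration, not the paper's detector.

```python
import numpy as np

def os_glrt(doppler_spectrum, k=3, noise_power=1.0, threshold=10.0):
    """Toy order-statistics detector: integrate the k largest Doppler-cell
    energies and compare the noise-normalized statistic to a threshold."""
    energy = np.abs(np.asarray(doppler_spectrum)) ** 2
    top_k = np.sort(energy)[-k:]                  # k largest cells
    stat = float(top_k.sum() / noise_power)       # GLRT-style statistic
    return stat, stat > threshold                 # (statistic, detection)
```

In the paper the number of integrated cells is tied to the estimated number of line spectra; here k is simply a fixed parameter.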

42 pages, 6360 KB  
Review
Shunt Active Power Filters in Three-Phase, Three-Wire Systems: A Topical Review
by Mihaela Popescu, Alexandru Bitoleanu, Constantin Vlad Suru, Mihaita Linca and Laurentiu Alboteanu
Energies 2024, 17(12), 2867; https://doi.org/10.3390/en17122867 - 11 Jun 2024
Cited by 30 | Viewed by 5771
Abstract
The increasingly extensive use of non-linear loads, mostly including static power converters, in large industry, commercial, and domestic applications, as well as the spread of renewable energy sources in distributed-generation units, make the use of the most efficient power quality improvement systems a current concern. The use of active power filters proved to be the most advanced solution with the best compensation performance for harmonics, reactive power, and load unbalance. Thus, issues related to improving the power quality through active power filters are very topical and addressed by many researchers. This paper presents a topical review on the shunt active power filters in three-phase, three-wire systems. The power circuit and configurations of shunt active filtering systems are considered, including the multilevel topologies and use of advanced power semiconductor devices with lower switching losses and higher switching frequencies. Several compensation strategies, reference current generation methods, current control techniques, and DC-voltage control are pointed out and discussed. The direct power control method is also discussed. New advanced control methods that have better performance than conventional ones and gained attention in the recent literature are highlighted. The current state of renewable energy sources integration with shunt active power filters is analyzed. Concerns regarding the optimum placement and sizing of the active power filters in a given power network to reduce the investment costs are also presented. Trends and future developments are discussed at the end of this paper. For a rigorous substantiation, more than 250 publications on this topic, most of them very recent, constitute the basis of bibliographic references and can assist readers who are interested to explore the subject in greater detail. Full article
(This article belongs to the Section F: Electrical Engineering)

14 pages, 458 KB  
Article
A Systematic Study of Two-Neutrino Double Electron Capture
by Ovidiu Niţescu, Stefan Ghinescu, Sabin Stoica and Fedor Šimkovic
Universe 2024, 10(2), 98; https://doi.org/10.3390/universe10020098 - 17 Feb 2024
Cited by 6 | Viewed by 2594
Abstract
In this paper, we update the phase-space factors for all two-neutrino double electron capture processes. The Dirac–Hartree–Fock–Slater self-consistent method is employed to describe the bound states of captured electrons, enabling a more realistic treatment of atomic screening and more precise binding energies of the captured electrons compared to previous investigations. Additionally, we consider all s-wave electrons available for capture, expanding beyond the K and L1 orbitals considered in prior studies. For light atoms, the increase associated with additional captures compensates for the decrease in decay rate caused by the more precise atomic screening. However, for medium and heavy atoms, an increase in the decay rate, up to 10% for the heaviest atoms, is observed due to the combination of these two effects. In the systematic analysis, we also include capture fractions for the first few dominant partial captures. Our precise model enables a close examination of low Q-value double electron capture in 152Gd, 164Er, and 242Cm, where partial KK captures are energetically forbidden. Finally, with the updated phase-space values, we recalculate the effective nuclear matrix elements and compare their spread with those associated with 2νββ decay. Full article

29 pages, 24262 KB  
Article
Influences of Cloud Microphysics on the Components of Solar Irradiance in the WRF-Solar Model
by Xin Zhou, Yangang Liu, Yunpeng Shan, Satoshi Endo, Yu Xie and Manajit Sengupta
Atmosphere 2024, 15(1), 39; https://doi.org/10.3390/atmos15010039 - 28 Dec 2023
Cited by 7 | Viewed by 2977
Abstract
An accurate forecast of Global Horizontal solar Irradiance (GHI) and Direct Normal Irradiance (DNI) in cloudy conditions remains a major challenge in the solar energy industry. This study focuses on the impact of cloud microphysics on GHI and its partition into DNI and Diffuse Horizontal Irradiance (DHI) using the Weather Research and Forecasting model specifically designed for solar radiation applications (WRF-Solar) and seven microphysical schemes. Three stratocumulus (Sc) and five shallow cumulus (Cu) cases are simulated and evaluated against measurements at the US Department of Energy’s Atmospheric Radiation Measurement (ARM) user facility, Southern Great Plains (SGP) site. Results show that different microphysical schemes lead to spreads in simulated solar irradiance components up to 75% and 350% from their ensemble means in the Cu and Sc cases, respectively. The Cu cases have smaller microphysical sensitivity due to a limited cloud fraction and smaller domain-averaged cloud water mixing ratio compared to Sc cases. Cloud properties also influence the partition of GHI into DNI and DHI, and the model simulates better GHI than DNI and DHI due to a non-physical error compensation between DNI and DHI. The microphysical schemes that produce more accurate liquid water paths and effective radii of cloud droplets have a better overall performance. Full article
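The partition of GHI into its direct and diffuse components, and the "non-physical error compensation between DNI and DHI" noted in this abstract, rest on the standard closure relation GHI = DHI + DNI · cos(θz), where θz is the solar zenith angle. A small illustrative helper (values below are arbitrary, not from the study):

```python
import math

def ghi_from_components(dni, dhi, solar_zenith_deg):
    """Closure relation between the solar irradiance components:
    GHI = DHI + DNI * cos(solar zenith angle).
    All irradiances in W/m^2; the zenith angle is given in degrees."""
    return dhi + dni * math.cos(math.radians(solar_zenith_deg))
```

The relation makes the compensation effect concrete: an overestimate of DNI offset by an underestimate of DHI leaves the modeled GHI nearly unchanged.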

28 pages, 10082 KB  
Article
Spatiotemporal Pattern of Carbon Compensation Potential and Network Association in Urban Agglomerations in the Yellow River Basin
by Haihong Song, Yifan Li, Liyuan Gu, Jingnan Tang and Xin Zhang
ISPRS Int. J. Geo-Inf. 2023, 12(10), 435; https://doi.org/10.3390/ijgi12100435 - 23 Oct 2023
Cited by 6 | Viewed by 2629
Abstract
The Yellow River Basin is an important energy base and economic belt in China, but its water resources are scarce, its ecology is fragile, and the task of achieving the goals of carbon peak and carbon neutrality is arduous. Carbon compensation potential can be used to study the path to achieving carbon neutrality, as it clarifies the extent to which one region’s carbon sink surplus can compensate for other areas. However, research on the carbon compensation potential of the Yellow River Basin remains limited. Therefore, this study calculated the carbon compensation potential using the β convergence test and the parameter comparison method. With the help of spatial measurement tools such as GIS, GeoDa, and Stata, and social network analysis methods, the spatiotemporal pattern and network structure of the carbon compensation potential in the Yellow River Basin (YRB) were studied from the perspective of urban agglomeration. The results demonstrate the following: (1) The overall carbon compensation rate of the YRB showed a downward trend from 2005 to 2019, falling by 0.94, and the specific pattern was “high in the northwest and low in the southeast”. The spatial distribution is roughly spread along the east–west axis, and the distribution axis and the center of gravity keep shifting to the northwest. It also showed a weak divergence and a bifurcation trend. (2) The carbon compensation rate in the YRB passed the spatial correlation and β convergence tests, demonstrating the existence of spatial correlation and a “catch-up effect” among cities. (3) The overall distribution pattern of the carbon compensation potential in the YRB is “low in the west and high in the east”, and its value increased by 8.86% during the sampled period. (4) The network correlation of carbon compensation potential in the YRB has been significantly enhanced, with the downstream region being more connected than the upstream region.
(5) The Shandong Peninsula Urban Agglomeration has the largest network center, followed by the Central Plains Urban Agglomeration, and the Ningxia along the Yellow River Urban Agglomeration has the fewest linked conduction paths. According to the research results, accurate and efficient planning and development suggestions are proposed for urban agglomeration in the Yellow River Basin. Full article

15 pages, 3992 KB  
Article
On the Arrays Distribution, Scan Sequence and Apodization in Coherent Dual-Array Ultrasound Imaging Systems
by Laura Peralta, Daniele Mazierli, Kirsten Christensen-Jeffries, Alessandro Ramalli, Piero Tortoli and Joseph V. Hajnal
Appl. Sci. 2023, 13(19), 10924; https://doi.org/10.3390/app131910924 - 2 Oct 2023
Cited by 11 | Viewed by 2273
Abstract
Coherent multi-transducer ultrasound (CoMTUS) imaging creates an extended effective aperture through the coherent combination of multiple arrays, which results in images with enhanced resolution, extended field-of-view, and higher sensitivity. However, this also creates a large discontinuous effective aperture that presents additional challenges for current beamforming methods. The discontinuities may increase the level of grating and side lobes and degrade contrast. Also, direct transmissions between multiple arrays, happening at certain transducer relative positions, produce undesirable cross-talk artifacts. Hence, the position of the transducers and the scan sequence play key roles in the beamforming algorithm and imaging performance of CoMTUS. This work investigates the role of the distribution of the individual arrays and the scan sequence in the imaging performance of a coherent dual-array system. First, the imaging performance for different configurations was assessed numerically using the point-spread-function, and then optimized settings were tested on a tissue mimicking phantom. Finally, a subset of the proposed optimum imaging schemes was experimentally validated on two synchronized ULA OP-256 systems equipped with identical linear arrays. Results show that CoMTUS imaging performance can be enhanced by optimizing the relative position of the arrays and the scan sequence together, and that the use of apodization can reduce cross-talk artifacts without degrading spatial resolution. Adding weighted compounding further decreases artifacts and helps to compensate for the differences in brightness across the image. Setting the maximum steering angle according to the spatial configuration of the arrays reduces the sidelobe energy by up to 10 dB, and an extra 4 dB reduction is possible when the number of compounded plane waves (PWs) is increased. Full article
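To picture the apodization idea for a discontinuous dual-array aperture, one option is tapering each sub-array independently so both the outer edges and the gap between arrays are weighted down. The sketch below uses a hand-rolled Tukey (tapered-cosine) window; the window choice, element count, and taper fraction are all assumptions for illustration, not the settings studied in the paper.

```python
import numpy as np

def dual_array_apodization(n_elem=128, alpha=0.5):
    """Tukey weights applied independently to each of two identical
    sub-arrays, so the taper rolls off at the aperture edges and at
    both sides of the inter-array gap (hypothetical configuration)."""
    def tukey(n, a):
        # Raised-cosine taper over a fraction `a` of the window
        w = np.ones(n)
        edge = int(np.floor(a * (n - 1) / 2))
        t = np.arange(edge + 1)
        taper = 0.5 * (1 + np.cos(np.pi * (2 * t / (a * (n - 1)) - 1)))
        w[:edge + 1] = taper
        w[-(edge + 1):] = taper[::-1]
        return w
    w = tukey(n_elem, alpha)
    return np.concatenate([w, w])   # two sub-arrays across the gap
```

Element weights like these would multiply the per-channel data before delay-and-sum beamforming.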

18 pages, 4970 KB  
Article
Effect of Monodisperse Coal Particles on the Maximum Drop Spreading After Impact on a Solid Wall
by Alexander Ashikhmin, Nikita Khomutov, Roman Volkov, Maxim Piskunov and Pavel Strizhak
Energies 2023, 16(14), 5291; https://doi.org/10.3390/en16145291 - 10 Jul 2023
Cited by 3 | Viewed by 2105 | Correction
Abstract
The effect of coal hydrophilic particles in water-glycerol drops on the maximum diameter of spreading along a hydrophobic solid surface is experimentally studied by analyzing the velocity of internal flows by Particle Image Velocimetry (PIV). The grinding fineness of coal particles was 45–80 μm and 120–140 μm. Their concentration was 0.06 wt.% and 1 wt.%. The impact of particle-laden drops on a solid surface occurred at Weber numbers (We) from 30 to 120. The experiments revealed the interrelated influence of We and the concentration of coal particles on changes in the maximum absolute velocity of internal flows in a drop within the kinetic and spreading phases of the drop-wall impact. The behavior of internal convective flows in the longitudinal section of a drop parallel to the plane of the solid wall is also explored. The kinetic energy of the translational motion of coal particles in a spreading drop compensates for the energy expended by the drop on sliding friction along the wall. At We = 120, the inertia-driven spreading of the particle-laden drop is mainly determined by the dynamics of the deformable Taylor rim. An increase in We contributes to more noticeable differences in the convection velocities in spreading drops. An increase in the maximum velocity of internal flows is accompanied by a growth of the maximum spreading diameter. The presence of coal particles causes a general tendency to reduce drop spreading. Full article
(This article belongs to the Special Issue Heat Transfer and Fluid Dynamics in Boiling Systems)
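The Weber number that parameterizes the drop-wall impacts in this study is the standard ratio of inertial to capillary forces, We = ρv²D/σ. A one-line helper, with illustrative fluid properties (not the water-glycerol values from the paper):

```python
def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v^2 * D / sigma: ratio of inertial to surface-tension
    forces for a drop of diameter D impacting at velocity v (SI units)."""
    return density * velocity ** 2 * diameter / surface_tension
```

For the study's range We = 30–120, a given drop diameter and fluid fix the admissible impact velocities.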

29 pages, 12996 KB  
Article
Numerical Study of Velocity and Mixture Fraction Fields in a Turbulent Non-Reacting Propane Jet Flow Issuing into Parallel Co-Flowing Air in Isothermal Condition through OpenFOAM
by Abdolreza Aghajanpour and Seyedalireza Khatibi
AppliedMath 2023, 3(2), 468-496; https://doi.org/10.3390/appliedmath3020025 - 27 May 2023
Cited by 3 | Viewed by 3210
Abstract
This research employs computational methods to analyze the velocity and mixture fraction distributions of a non-reacting Propane jet flow that is discharged into parallel co-flowing air under isothermal conditions. This study includes a comparison between the numerical results and experimental results obtained from the Sandia Laboratory (USA). The objective is to improve the understanding of flow structure and mixing mechanisms in situations where there is no involvement of chemical reactions or heat transfer. In this experiment, the Realizable k-ε eddy viscosity turbulence model with two equations was utilized to simulate turbulent flow on a nearly 2D plane (specifically, a 5-degree partition of the experimental cylinder domain). This was achieved using OpenFOAM open-source software and the swak4Foam utility, with the reactingFoam solver being manipulated carefully. The selection of this turbulence model was based on its superior predictive capability for the spreading rate of both planar and round jets, as compared to other variants of the k-ε models. Numerical axial and radial profiles of different parameters were obtained for a grid-independent mesh (mesh B). These profiles were then compared with experimental data to assess the accuracy of the numerical model. The parameters in question are mean velocities, turbulence kinetic energy, mean mixture fraction, mixture fraction half radius (Lf), and the mass flux diagram. The assumption that w′ = v′ for the determination of turbulence kinetic energy, k, seems to hold true in situations where experimental data is deficient in w′. The simulations have successfully obtained the mean mixture fraction and its half radius, Lf, which is a measure of the jet’s width. These values were determined from radial profiles taken at specific locations along the X-axis, including x/D = 0, 4, 15, 30, and 50.
The accuracy of the mean vertical velocity fields in the X-direction (Umean) is noticeable, despite being less well-captured. The resolution of mean vertical velocity fields in the Y-direction (Vmean) is comparatively lower. The accuracy of turbulence kinetic energy (k) is moderate when it is within the range of Umean and Vmean. The absence of empirical data for absolute pressure (p) is compensated by the provision of numerical pressure contours. Full article

13 pages, 1158 KB  
Article
Decoding Algorithms and HW Strategies to Mitigate Uncertainties in a PCM-Based Analog Encoder for Compressed Sensing
by Carmine Paolino, Alessio Antolini, Francesco Zavalloni, Andrea Lico, Eleonora Franchi Scarselli, Mauro Mangia, Alex Marchioni, Fabio Pareschi, Gianluca Setti, Riccardo Rovatti, Mattia Luigi Torres, Marcella Carissimi and Marco Pasotti
J. Low Power Electron. Appl. 2023, 13(1), 17; https://doi.org/10.3390/jlpea13010017 - 13 Feb 2023
Cited by 4 | Viewed by 3416
Abstract
Analog In-Memory computing (AIMC) is a novel paradigm looking for solutions to prevent the unnecessary transfer of data by distributing computation within memory elements. One such operation is matrix-vector multiplication (MVM), a workhorse of many fields ranging from linear regression to Deep Learning. The same concept can be readily applied to the encoding stage in Compressed Sensing (CS) systems, where an MVM operation maps input signals into compressed measurements. With a focus on an encoder built on top of a Phase-Change Memory (PCM) AIMC platform, the effects of device non-idealities, namely programming spread and drift over time, are observed in terms of the reconstruction quality obtained for synthetic signals, sparse in the Discrete Cosine Transform (DCT) domain. PCM devices are simulated using statistical models summarizing the properties experimentally observed in an AIMC prototype, designed in a 90 nm STMicroelectronics technology. Different families of decoders are tested, and tradeoffs in terms of encoding energy are analyzed. Furthermore, the benefits of a hardware drift compensation strategy are also observed, highlighting its necessity to prevent the need for a complete reprogramming of the entire analog array. The results show >30 dB average reconstruction quality for mid-range conductances and a suitably selected decoder right after programming. Additionally, the hardware drift compensation strategy enables robust performance even when different drift conditions are tested. Full article
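The drift non-ideality discussed in this abstract is commonly modeled for PCM devices as a conductance power law, G(t) = G0·(t/t0)^(−ν), and a global readout gain can undo the average drift without reprogramming the array. The sketch below is a simplified illustration of that idea (uniform drift exponent, hypothetical parameter values), not the paper's hardware scheme.

```python
import numpy as np

def drifted_mvm(weights_g, x, t, t0=1.0, nu=0.05, compensate=True):
    """Matrix-vector product with PCM conductances following the
    power-law drift model G(t) = G0 * (t / t0)**(-nu). With a single
    shared nu (a simplification), a global gain (t / t0)**nu applied
    at readout exactly cancels the drift."""
    g_t = weights_g * (t / t0) ** (-nu)   # all devices drift identically here
    y = g_t @ x                           # analog MVM at time t
    if compensate:
        y = y * (t / t0) ** nu            # hardware-style gain correction
    return y
```

In practice ν varies from device to device, so a global gain only compensates the mean drift; that spread is one source of the reconstruction-quality loss the paper quantifies.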

32 pages, 10649 KB  
Article
Multi-Indices Diagnosis of the Conditions That Led to the Two 2017 Major Wildfires in Portugal
by Cristina Andrade and Lourdes Bugalho
Fire 2023, 6(2), 56; https://doi.org/10.3390/fire6020056 - 6 Feb 2023
Cited by 11 | Viewed by 4583
Abstract
Forest fires, though part of a natural forest renewal process, when frequent and on a large scale, have detrimental impacts on biodiversity, agroforestry systems, soil erosion, air and water quality, infrastructures, and the economy. Portugal endures extreme forest fires, with a record extent of burned areas in 2017. These complexes of extreme wildfire events (CEWEs), concentrated in a few days but covering extensive burned areas, are, among other factors, linked to severe fire weather conditions. In this study, a comparison between several fire danger indices (named ‘multi-indices diagnosis’) is performed for the control period 2001–2021, 2007, and 2017 (May–October) for the Fire Weather Index (FWI), Burning Index (BI), Forest Fire Danger Index (FFDI), Continuous Haines Index (CHI), and the Keetch–Byram Drought Index (KBDI). Daily analysis for the so-called Pedrógão Grande wildfire (17 June) and the October major fires (15 October) included the Spread Component (SC), Ignition Component (IC), Initial Spread Index (ISI), Buildup Index (BUI), and the Energy Release Component (ERC). Results revealed statistically significant above-average values for most of the indices for 2017 in comparison with 2001–2021, particularly for October. The spatial distribution of BI, IC, ERC, and SC had the best performance in capturing the locations of the two CEWEs, which were driven by atmospheric instability along with a dry environment aloft. These results were confirmed by the hotspot analysis, which showed statistically significant intense spatial clustering between these indices and the burned areas. The spatial patterns for SC and ISI showed high values associated with high velocities in the spread of these fires. The outcomes allowed us to conclude that, since fire danger depends on several factors, a multi-indices diagnosis can be highly relevant.
The implementation of a Multi-index Prediction Methodology could further enhance the ability to track and forecast unique CEWEs, since the shortcomings of some indices are compensated for by the information retrieved from others, as shown in this study. Overall, such a forecast method can support the development of appropriate spatial preparedness plans, proactive responses by civil protection regarding firefighter management, and suppression efforts to minimize the detrimental impacts of wildfires in Portugal. Full article
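As an illustration of one component of such a multi-index diagnosis, the daily update of the Keetch–Byram Drought Index can be sketched in a few lines. This is a minimal sketch of the standard KBDI formulation (imperial units); the function name and the simplified per-day rainfall threshold are assumptions, not details from the article:

```python
import math

def kbdi_update(kbdi, temp_max_f, rain_in, annual_rain_in):
    """One daily Keetch-Byram Drought Index step.

    kbdi           -- current index, 0 (saturated soil) to 800 (extreme drought)
    temp_max_f     -- daily maximum temperature (degrees F)
    rain_in        -- daily rainfall (inches)
    annual_rain_in -- mean annual rainfall (inches) at the site
    """
    # Rainfall first: each inch of net rain removes 100 index points.
    # (Simplification: the 0.2-in interception threshold is applied per
    # day here, rather than once per wet spell as in the full procedure.)
    net_rain = max(0.0, rain_in - 0.2)
    kbdi = max(0.0, kbdi - 100.0 * net_rain)

    # Evapotranspiration-driven drought factor (Keetch & Byram, 1968),
    # scaled by how far the soil already is from saturation.
    drought_factor = (
        (800.0 - kbdi)
        * (0.968 * math.exp(0.0486 * temp_max_f) - 8.30) * 1e-3
        / (1.0 + 10.88 * math.exp(-0.0441 * annual_rain_in))
    )
    return min(800.0, kbdi + max(0.0, drought_factor))
```

On a hot, dry day (e.g. 95 °F, no rain, 30 in mean annual rainfall) the index climbs by roughly 9 points, while a 1-in rainfall immediately removes 80 points; a multi-index diagnosis combines such slow drought signals with fast-reacting spread and ignition components.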
(This article belongs to the Special Issue Mediterranean Fires)
34 pages, 8941 KB  
Article
A Novel MOGNDO Algorithm for Security-Constrained Optimal Power Flow Problems
by Sundaram B. Pandya, James Visumathi, Miroslav Mahdal, Tapan K. Mahanta and Pradeep Jangir
Electronics 2022, 11(22), 3825; https://doi.org/10.3390/electronics11223825 - 21 Nov 2022
Cited by 14 | Viewed by 2474
Abstract
The current research investigates a new Multi-Objective Generalized Normal Distribution Optimization (MOGNDO) algorithm for solving large-scale Optimal Power Flow (OPF) problems of complex power systems, including renewable energy sources and Flexible AC Transmission Systems (FACTS). A recently reported single-objective generalized normal distribution optimization algorithm is transformed into the MOGNDO algorithm using nondominated sorting and crowding distance mechanisms. The OPF problem becomes even more challenging when renewable energy sources, which are unreliable and fluctuating, are integrated into the grid. FACTS devices are also being used more frequently in contemporary power networks to help reduce network demand and congestion. In this study, a stochastic wind power source was used with different FACTS devices, including a static VAR compensator, a thyristor-driven series compensator, and a thyristor-driven phase shifter, together with an IEEE 30-bus system. The positions and ratings of the FACTS devices can be optimized to reduce the system's overall fuel cost. Weibull probability density curves were used to capture the stochastic character of the wind energy source. The best compromise solutions were obtained using a fuzzy decision-making approach. The results obtained on a modified IEEE 30-bus system were compared with other well-known optimization algorithms, and they show that MOGNDO has improved convergence, diversity, and spread across Pareto fronts (PFs). Full article
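The crowding distance mechanism mentioned in the abstract is the standard NSGA-II diversity measure. A minimal sketch for one nondominated front follows; the function name and the two-objective test front are illustrative, not taken from the paper:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors in one nondominated front.

    Boundary solutions per objective get infinite distance so they are always
    retained; interior solutions accumulate the normalized extent of the cuboid
    spanned by their nearest neighbors, summed over objectives.
    """
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # Rank solutions by objective k.
        order = sorted(range(n), key=lambda i: front[i][k])
        f_min, f_max = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if f_max == f_min:
            continue  # degenerate objective: no spread to measure
        for j in range(1, n - 1):
            dist[order[j]] += (
                front[order[j + 1]][k] - front[order[j - 1]][k]
            ) / (f_max - f_min)
    return dist
```

Solutions with larger crowding distance lie in sparser regions of the Pareto front and are preferred when truncating the population, which is what drives the diversity and spread behavior reported for MOGNDO.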

10 pages, 1932 KB  
Article
A Laser Frequency Transverse Modulation Might Compensate for the Spectral Broadening Due to Large Electron Energy Spread in Thomson Sources
by Vittoria Petrillo, Illya Drebot, Geoffrey Krafft, Cesare Maroli, Andrea R. Rossi, Marcello Rossetti Conti, Marcel Ruijter and Balša Terzić
Photonics 2022, 9(2), 62; https://doi.org/10.3390/photonics9020062 - 25 Jan 2022
Cited by 1 | Viewed by 2983
Abstract
Compact laser plasma accelerators generate high-energy electron beams of steadily increasing quality. When used in inverse Compton backscattering, however, the relatively large electron energy spread jeopardizes potential applications requiring small bandwidths. We present here a novel interaction scheme that allows us to compensate for the negative effects of the electron energy spread on the spectrum by introducing a transverse spatial frequency modulation in the laser pulse. Such a laser chirp, together with a properly dispersed electron beam, can substantially reduce the broadening of the Compton bandwidth due to the electron energy spread. We present a theoretical analysis and numerical simulations for hard X-ray Thomson sources based on laser plasma accelerators. Full article
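The compensation mechanism can be summarized with a simplified on-axis estimate in the small-angle, linear Thomson limit ($a_0 \ll 1$, negligible recoil); the notation below is generic, not the paper's:

```latex
% On-axis scattered photon energy in the linear Thomson limit, and the
% resulting bandwidth contribution of an electron energy spread \sigma_\gamma:
E_x \simeq 4\gamma^{2} E_L
\qquad\Rightarrow\qquad
\frac{\Delta E_x}{E_x} \simeq 2\,\frac{\sigma_\gamma}{\gamma_0}.

% If the beam is transversely dispersed so that energy is correlated with
% position, \gamma = \gamma(x), a transverse frequency modulation of the laser
E_L(x) = E_{L0}\,\frac{\gamma_0^{2}}{\gamma(x)^{2}}
% keeps the product 4\gamma(x)^{2} E_L(x) = 4\gamma_0^{2} E_{L0} independent
% of x, cancelling the energy-spread broadening to first order.
```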
(This article belongs to the Special Issue Advances and Application of Electron Beam Dynamics)
