Search Results (106)

Search Parameters:
Keywords = line simplification

32 pages, 898 KB  
Article
Heat Conduction Model Based on the Explicit Euler Method for Non-Stationary Cases
by Attila Érchegyi and Ervin Rácz
Entropy 2025, 27(10), 994; https://doi.org/10.3390/e27100994 - 24 Sep 2025
Viewed by 258
Abstract
This article presents an optimization of the explicit Euler method for a heat conduction model. The starting point of the paper was an analysis of the limitations of the explicit Euler scheme and the classical CFL condition in the transient domain, which pointed to oscillations occurring in the intermediate states. To eliminate this phenomenon, we introduced the No-Sway Threshold, a bound on the Fourier number (K) stricter than the CFL condition, which guarantees a monotonic temperature–time evolution. Thereafter, by means of identical inequalities derived with the Method of Equating Coefficients, we determined the optimal values of Δt and Δx. Finally, to construct the variable grid spacing (M2), we applied the equation expressing the R of the identical inequality system and accordingly specified the thicknesses of the material elements (Δξ). As a proof of concept, we demonstrate the procedure on an application case with major simplifications: during an emergency shutdown of the Flexblue® SMR, the temperature of the air inside the tank instantly becomes 200 °C, while the initial temperatures of the water and the steel are 24 °C. For a 50.003 mm × 50.003 mm surface patch of the tank, we keep the leftmost and rightmost material elements of the uniform-grid (M1) and variable-grid (M2) single-line models at constant temperature and scale the results up to the total external surface (6714.39 m²). In the M2 case, a larger portion of the heat power taken up from the air is expended on heating the metal, while the rise in the heat power delivered to the seawater is more moderate. At the 3000th minute, the steel-wall temperature in M1 falls between 26.229 °C and 25.835 °C, whereas in M2 the temperature gradient varies between 34.648 °C and 30.041 °C, which confirms the advantage of combining variable grid spacing with the No-Sway Threshold. Full article
(This article belongs to the Special Issue Dissipative Physical Dynamics)
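The stability issue this abstract refers to can be sketched in a few lines. This is an illustrative toy, not the paper's M1/M2 models or its No-Sway Threshold: the explicit Euler update for 1D heat conduction is a convex combination of neighbouring temperatures only when the grid Fourier number r = αΔt/Δx² ≤ 1/2, which rules out overshoot and oscillation.

```python
def euler_heat_step(u, r):
    """One explicit Euler step for 1D heat conduction with fixed-temperature ends.
    r = alpha * dt / dx**2 (grid Fourier number). For r <= 1/2 the update is the
    convex combination r*u[i-1] + (1-2r)*u[i] + r*u[i+1], so no new extrema appear."""
    return [u[0]] + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

# Hot boundary at 200 C on the left, everything else initially at 24 C
# (temperatures borrowed from the paper's scenario; the grid itself is made up).
u = [200.0] + [24.0] * 9
for _ in range(500):
    u = euler_heat_step(u, 0.4)   # r = 0.4 <= 1/2: temperatures stay within [24, 200]
```

With r > 1/2 the same loop oscillates and diverges, which is the behaviour a stricter bound on the Fourier number is designed to exclude.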

16 pages, 2946 KB  
Article
AI-Driven Comprehensive SERS-LFIA System: Improving Virus Automated Diagnostics Through SERS Image Recognition and Deep Learning
by Shuai Zhao, Meimei Xu, Chenglong Lin, Weida Zhang, Dan Li, Yusi Peng, Masaki Tanemura and Yong Yang
Biosensors 2025, 15(7), 458; https://doi.org/10.3390/bios15070458 - 16 Jul 2025
Cited by 2 | Viewed by 694
Abstract
Highly infectious and pathogenic viruses seriously threaten global public health, underscoring the need for rapid and accurate diagnostic methods to effectively manage and control outbreaks. In this study, we developed a comprehensive Surface-Enhanced Raman Scattering–Lateral Flow Immunoassay (SERS-LFIA) detection system that integrates SERS scanning imaging with artificial intelligence (AI)-based result discrimination. The system was based on an ultra-sensitive SERS-LFIA strip with SiO2-Au NSs as the immunoprobe (with a theoretical limit of detection (LOD) of 1.8 pg/mL). On this basis, a negative–positive discrimination method combining SERS scanning imaging with a deep learning model (ResNet-18) was developed to analyze probe distribution patterns near the T line. The proposed machine learning method significantly reduced the interference of abnormal signals and achieved reliable detection at concentrations as low as 2.5 pg/mL, close to the theoretical Raman LOD. The accuracy of the proposed ResNet-18 image recognition model was 100% for the training set and 94.52% for the testing set. In summary, the proposed SERS-LFIA detection system, which integrates detection, scanning, imaging, and AI-automated result determination, simplifies the detection process, eliminates the need for specialized personnel, reduces test time, and improves diagnostic reliability. It exhibits great clinical potential, offers a robust technical foundation for detecting other highly pathogenic viruses, and provides a versatile and highly sensitive detection method adaptable for future pandemic prevention. Full article
(This article belongs to the Special Issue Surface-Enhanced Raman Scattering in Biosensing Applications)

16 pages, 334 KB  
Entry
Data Structures for 2D Representation of Terrain Models
by Eric Guilbert and Bernard Moulin
Encyclopedia 2025, 5(3), 98; https://doi.org/10.3390/encyclopedia5030098 - 7 Jul 2025
Viewed by 646
Definition
This entry gives an overview of the main data structures and approaches used for a two-dimensional representation of the terrain surface using a digital elevation model (DEM). A DEM represents the elevation of the earth surface from a set of points. It is used for terrain analysis, visualisation and interpretation. DEMs are most commonly defined as a grid where an elevation is assigned to each grid cell. Due to its simplicity, the square grid structure is the most common DEM structure. However, it is less adaptive and shows limitations for more complex processing and reasoning. Hence, the triangulated irregular network is a more adaptive structure and explicitly stores the relationships between the points. Other topological structures (contour graphs, contour trees) have been developed to study terrain morphology. Topological relationships are captured in another structure, the surface network (SN), composed of critical points (peaks, pits, saddles) and critical lines (thalweg, ridge lines). The SN can be computed using either a TIN or a grid. The Morse Theory provides a mathematical approach to studying the topology of surfaces, which is applied to the SN. It has been used for terrain simplification, multi-resolution modelling, terrain segmentation and landform identification. The extended surface network (ESN) extends the classical SN by integrating both the surface and the drainage networks. The ESN can itself be extended for the cognitive representation of the terrain based on saliences (typical points, lines and regions) and skeleton lines (linking critical points), while capturing the context of the appearance of landforms using topo-contexts. Full article
(This article belongs to the Section Earth Sciences)
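A toy illustration of the surface-network idea (mine, not the entry's): on a grid DEM, a cell higher than all eight neighbours is a peak and one lower than all eight is a pit; saddles and critical lines require more machinery than this sketch shows.

```python
def critical_points(dem):
    """Find peaks and pits among interior cells of a grid DEM (a list of rows)
    by comparing each cell with its 8 neighbours."""
    peaks, pits = [], []
    for i in range(1, len(dem) - 1):
        for j in range(1, len(dem[0]) - 1):
            nbrs = [dem[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            if dem[i][j] > max(nbrs):
                peaks.append((i, j))
            elif dem[i][j] < min(nbrs):
                pits.append((i, j))
    return peaks, pits

dem = [[1, 1, 1, 1],
       [1, 5, 1, 1],
       [1, 1, 0, 1],
       [1, 1, 1, 1]]
peaks, pits = critical_points(dem)   # peaks [(1, 1)], pits [(2, 2)]
```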

21 pages, 2118 KB  
Article
What Is a (Pressure) Wavefront?
by Alan E. Vardy and Arris S. Tijsseling
Water 2025, 17(13), 1907; https://doi.org/10.3390/w17131907 - 27 Jun 2025
Cited by 1 | Viewed by 470 | Correction
Abstract
In the context of three-dimensional wave propagation, the word ‘wavefront’ describes a surface that expands in time; in the corresponding two-dimensional case, it describes a line that likewise expands in time. The equivalent definition in one-dimensional applications is simply a point, but the word ‘wavefront’ is nevertheless commonly used in the context of fluid waves in pipes, albeit for a very different purpose. However, no formal definition exists in the 1D context and the word is not interpreted consistently by all authors, thereby creating a potential for miscommunication that is clearly undesirable in scientific or engineering contexts. This paper uses a wide range of one-dimensional, wave-like phenomena to illustrate how varied wave propagation can be, even when the initial conditions are undisturbed and the trigger of the disturbance is simple. Despite these simplifications, no universally meaningful, formal definition of a wavefront seems to be achievable, partly because, in some instances, key flow parameters such as pressure and velocity exhibit completely different behaviour. Accordingly, instead of seeking a universal, unambiguous definition, a qualitative definition is proposed that is consistent with most of the presented examples and with the most common uses of the word in the general literature. Full article
(This article belongs to the Special Issue Hydrodynamics in Pressurized Pipe Systems)

16 pages, 5088 KB  
Article
Analysis of Selected Methods of Computer-Aided Design for Stage Structures
by Szymon Wyrąbkiewicz, Marcin Zastempowski, Jurand Burczyński and Maciej Gajewski
Appl. Sci. 2025, 15(11), 6146; https://doi.org/10.3390/app15116146 - 29 May 2025
Viewed by 499
Abstract
This article presents the design process for a modern stage trapdoor, which was designed to optimize the work of cultural facilities personnel and increase the attractiveness of future performances and events. Strength calculations for the supporting structure were carried out in the Soldis DESIGNER program, and based on these, a 3D model of the stage trapdoor was designed and placed in the space of the stage chimney. In order to verify and analyze the strength of the structure, the 3D model was prepared for detailed analysis in the Autodesk Inventor program. Tests were carried out for four load cases of the structure, each at 15 different load values. Information about the maximum deflection and the maximum stress was obtained. The collected data were organized in tables and displayed in line and column charts, on the basis of which conclusions were drawn. These analyses showed a high degree of agreement between the calculations from both programs. It was found that in this type of structure, a detailed analysis in 3D CAD programs is not necessary for the proper design of the supporting structure, which allows for a simplification of the design process. The designed trapdoor meets all design requirements and can be implemented as a solution to improve the functionality and aesthetics of the stage's technical equipment. Full article
(This article belongs to the Section Mechanical Engineering)

12 pages, 2147 KB  
Article
Adaptive Neural-Network-Based Lossless Image Coder with Preprocessed Input Data
by Grzegorz Ulacha and Ryszard Stasinski
Appl. Sci. 2025, 15(5), 2603; https://doi.org/10.3390/app15052603 - 28 Feb 2025
Viewed by 811
Abstract
It is shown in this paper that appropriate preprocessing of the input data may result in a substantial reduction in Artificial Neural Network (ANN) training time and a simplification of its structure, while improving its performance. The ANN works as a data predictor in a lossless image coder. Its adaptation is done for each coded pixel separately; no initial training on learning image sets is necessary. This means that there is no extra off-line time needed for initial ANN training, and there are no problems with network overfitting. Two concepts are covered in this paper: replacing image pixels by their differences diminishes data variability and speeds up ANN convergence (Concept 1), and preceding the ANN with advanced predictors reduces ANN complexity (Concept 2). The obtained codecs are much faster than the unmodified one, while their data compaction properties are clearly better; the resulting coder outperforms the JPEG-LS codec by approximately 10%. Full article
(This article belongs to the Special Issue Advanced Digital Signal Processing and Its Applications)
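Concept 1 can be sketched independently of any network (illustrative scanline values, and simple left-neighbour differencing rather than the paper's advanced predictors): differences of a smooth scanline vary far less than the raw pixels, and the transform is losslessly invertible.

```python
from itertools import accumulate

def to_differences(row):
    """Replace each pixel by its difference from the left neighbour (first pixel kept)."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

row = [100, 102, 101, 104, 107, 106]   # a smooth scanline (made-up values)
diff = to_differences(row)             # [100, 2, -1, 3, 3, -1]
assert list(accumulate(diff)) == row   # lossless: prefix sums recover the pixels
```

The residuals cluster near zero, which is what lets the entropy coder spend fewer bits per symbol.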

23 pages, 7787 KB  
Article
Layout Planning of a Basic Public Transit Network Considering Expected Travel Times and Transportation Efficiency
by Mingzhang Liang, Wei Wang, Ye Chao and Changyin Dong
Systems 2024, 12(12), 550; https://doi.org/10.3390/systems12120550 - 10 Dec 2024
Cited by 1 | Viewed by 1684
Abstract
Urban transit systems are crucial for modern cities, providing sustainable and efficient transportation solutions for residents’ daily commutes. Extensive research has been conducted on optimizing the design of transit systems. Among these studies, designing transit line trajectories and setting operating frequencies are critical components at the strategic planning level, and they are typically implemented in an urban integrated transportation network. However, its computational complexity grows exponentially with the expansion of urban integrated transportation networks, resulting in challenges to global optimization in large-scale cities. To address this problem, this study investigates the layout planning of a basic public transit network (BPTN) to simplify the urban integrated transportation network by filtering out road segments and intersections that are unattractive for both users and operators. A non-linear integer programming model is proposed to maximize the utility of the BPTN, which is defined as a weighted sum of expected travel times (from a user perspective) and transportation efficiency (from an operator perspective). An expected transit flow distribution (ETFD) analysis method is developed, combining different assignment approaches to evaluate the expected travel time and transportation efficiency of the BPTN under various types of transit systems. Moreover, we propose an objective–subjective integrated weighting approach to determine reasonable weight coefficients for travel time and transportation efficiency. The problem is solved by a heuristic solution framework with a topological graph simplification (TGS) process that further simplifies the BPTN into a small-scale graph. Numerical experiments demonstrate the efficacy of the proposed model and algorithm in achieving desirable BPTN layouts for different types of transit systems under variable demand structures. 
The scale of the BPTN is significantly reduced while maintaining a well-balanced trade-off between expected travel time and transportation efficiency. Full article

21 pages, 6066 KB  
Article
Algorithm for Trajectory Simplification Based on Multi-Point Construction in Preselected Area and Noise Smoothing Processing
by Simin Huang and Zhiying Yang
Data 2024, 9(12), 140; https://doi.org/10.3390/data9120140 - 29 Nov 2024
Viewed by 1414
Abstract
Simplifying trajectory data can improve the efficiency of trajectory data analysis and querying and reduce the communication cost and computational overhead of handling such data. In this paper, a real-time trajectory simplification algorithm (SSFI) based on the spatio-temporal feature information of implicit trajectory points is proposed. The algorithm constructs a preselected area through the error measurement method based on the feature information of implicit trajectory points (IEDs) proposed in this paper, predicts where subsequent trajectory points will fall, and realizes a one-way, error-bounded trajectory simplification algorithm. Experiments show clear improvements in three respects: running speed, compression accuracy, and simplification rate. When the trajectory data are large in scale, the performance of the algorithm is much better than that of other line-segment simplification algorithms. GPS error cannot be avoided; smoothing the trajectory with a Kalman filter effectively eliminates the influence of this noise and significantly improves the performance of the simplification algorithm. Based on the characteristics of the trajectory data, this paper constructs a mathematical model that accurately describes the motion state of the tracked objects, so that the Kalman filter performs better than other filters when smoothing trajectory data. Trajectory-smoothing experiments were carried out by adding random Gaussian noise to the trajectory data; they show that the Kalman filter under this mathematical model outperforms the other filters. Full article
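For readers comparing against the line-segment simplification baselines mentioned in this abstract, the classical batch algorithm is Douglas–Peucker (not the paper's SSFI, whose details are in the article); a minimal error-bounded version:

```python
def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(pts, eps):
    """Keep the endpoints; recurse on the farthest interior point if it exceeds eps."""
    if len(pts) < 3:
        return pts[:]
    d, idx = max((perp_dist(pts[i], pts[0], pts[-1]), i) for i in range(1, len(pts) - 1))
    if d <= eps:
        return [pts[0], pts[-1]]
    left = douglas_peucker(pts[:idx + 1], eps)
    return left[:-1] + douglas_peucker(pts[idx:], eps)
```

Note the contrast with SSFI as described above: Douglas–Peucker needs the whole trajectory in memory, whereas a one-way algorithm decides point by point in real time.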

11 pages, 1517 KB  
Article
Modified Use of the Component Method to Get More Realistic Force Distribution in Joints of Steel Structures
by László Radnay and Imre Kovács
Buildings 2024, 14(11), 3553; https://doi.org/10.3390/buildings14113553 - 7 Nov 2024
Cited by 1 | Viewed by 1071
Abstract
According to the EN 1993-1-8 Standard, the moment resistance of end-plated connections can be calculated with the component method, in which a force couple defines the moment resistance. The magnitude and the location of the compression force can be accurately identified. The tension force is usually the resultant of parallel forces appearing in line with the bolt rows. Following the rules of manual calculation and using a mechanical finite element model, in which each component is modelled with spring elements, leads in some easily identifiable cases to different force distributions. The simplifications defined in the Standard provide a longer lever arm for the same force and, because of this, a larger moment resistance at the expense of safety. In this work, an alternative calculation method is presented that provides the same force values in each bolt row as the mechanical model of the connection, without constructing the finite element model. Full article
(This article belongs to the Section Building Structures)
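The kind of mechanical model the authors refer to can be illustrated with a rigid-rotation spring sketch (my simplification, not the paper's alternative method): treat each bolt row as a spring of stiffness k_i at lever arm h_i above the compression centre; under a small rotation φ each row carries F_i = k_i·φ·h_i, and φ follows from moment equilibrium M = Σ F_i·h_i.

```python
def bolt_row_forces(rows, moment):
    """rows: list of (k_i, h_i) = (row stiffness, lever arm from compression centre).
    Rigid end-plate rotation: F_i = k_i * phi * h_i with phi = M / sum(k_i * h_i**2)."""
    phi = moment / sum(k * h * h for k, h in rows)
    return [k * phi * h for k, h in rows]

# Two equally stiff rows at 0.3 m and 0.2 m (made-up numbers): the upper row
# carries more force because it is both farther out and strained more.
forces = bolt_row_forces([(100.0, 0.3), (100.0, 0.2)], moment=13.0)   # about [30.0, 20.0]
```

This makes the point in the abstract concrete: the force in each row depends on stiffness and lever arm together, so lumping rows or shifting lever arms changes the distribution, not just the bookkeeping.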

19 pages, 536 KB  
Article
Optimizing Convolutional Neural Network Architectures
by Luis Balderas, Miguel Lastra and José M. Benítez
Mathematics 2024, 12(19), 3032; https://doi.org/10.3390/math12193032 - 28 Sep 2024
Cited by 10 | Viewed by 4135
Abstract
Convolutional neural networks (CNNs) are commonly employed for demanding applications, such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating their use on devices with limited resources (e.g., edge devices). Furthermore, a new line of research seeking more sustainable approaches to Artificial Intelligence development and research is increasingly drawing attention: Green AI. Motivated by an interest in optimizing Machine Learning models, in this paper we propose Optimizing Convolutional Neural Network Architectures (OCNNA), a novel CNN optimization and construction method based on pruning that is designed to establish the importance of convolutional layers. The proposal was evaluated through a thorough empirical study including the best-known datasets (CIFAR-10, CIFAR-100, and ImageNet) and CNN architectures (VGG-16, ResNet-50, DenseNet-40, and MobileNet), setting accuracy drop and the remaining-parameters ratio as objective metrics to compare the performance of OCNNA with other state-of-the-art approaches. Our method was compared with more than 20 convolutional neural network simplification algorithms, obtaining outstanding results. As a result, OCNNA is a competitive CNN construction method which could ease the deployment of neural networks on the IoT or resource-limited devices. Full article
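A common importance proxy in the pruning literature (not necessarily OCNNA's significance measure, which the paper defines) ranks convolutional filters by the L1 norm of their weights and keeps the strongest fraction:

```python
def prune_filters(filters, keep_ratio):
    """filters: one flat weight list per conv filter. Rank by L1 norm (a standard
    importance proxy) and return the indices of the kept filters, in order."""
    ranked = sorted(range(len(filters)),
                    key=lambda i: sum(abs(w) for w in filters[i]), reverse=True)
    keep = max(1, round(len(filters) * keep_ratio))
    return sorted(ranked[:keep])

weights = [[0.1, 0.1], [1.0, -1.0], [0.5, 0.5], [0.01, 0.02]]  # made-up 2-weight "filters"
kept = prune_filters(weights, 0.5)   # L1 norms 0.2, 2.0, 1.0, 0.03 -> keep [1, 2]
```

The accuracy-drop-versus-remaining-parameters trade-off the abstract mentions is then measured by re-evaluating the network with only the kept filters.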

11 pages, 766 KB  
Article
A Synthesis Analysis of the Relationship between Main and Ratoon Crop Grain Yields in Ratoon Rice
by Bin Liu, Shen Yuan and Shaobing Peng
Agronomy 2024, 14(9), 2170; https://doi.org/10.3390/agronomy14092170 - 23 Sep 2024
Cited by 1 | Viewed by 2239
Abstract
Ratoon rice represents a viable means to enhance rice production efficiency in terms of both area and time. Nonetheless, the development of specific varieties tailored for ratoon rice has been hindered by the complexity of trait considerations required during breeding/screening processes. A pivotal step towards advancing ratoon rice breeding programs involves reducing the dimensionality of selection traits. In this study, we performed a comprehensive analysis exploring whether the yield of the main crop could serve as a predictor for ratoon crop yield, thereby simplifying the selection process. Our findings revealed significant variability in the rice yields of both main and ratoon crops, with the ratoon crop yield averaging 51% of the main crop. Importantly, the correlation between grain yields of the main and ratoon crops did not deviate from the identity line, substantiating the feasibility of predicting ratoon crop yield based on the main crop yield. The number of panicles in the ratoon crops was found to be closely linked to that of the main crop; however, the size values of the panicles in the ratoon crops exhibited less of a dependency on the main crop’s panicle size. Additionally, a general decrease in grain weight was observed in the ratoon crops compared to the main crop. In summary, this study elucidates a pathway for the simplification of selection traits, thereby enhancing the efficiency of breeding high-yielding ratoon rice varieties, with the ultimate aim of fostering the sustainable development of ratoon rice. Full article
(This article belongs to the Section Innovative Cropping Systems)

19 pages, 2299 KB  
Review
Critical Review and Potential Improvement of the New International Airport Pavement Strength Rating System
by Greg White
Appl. Sci. 2024, 14(18), 8491; https://doi.org/10.3390/app14188491 - 20 Sep 2024
Cited by 1 | Viewed by 2587
Abstract
Most airports rate and publish the strength of their runway pavement using the international system known as Aircraft Classification Number–Pavement Classification Number (ACN–PCN). The ACN–PCN system has been in place since 1981 and includes many simplifications that were necessary at the time of its development, primarily due to the general absence of computer power to support more sophisticated analysis. However, airport pavement thickness determination has evolved since that time and now includes much more sophisticated analysis methods. To bring the strength rating system into line with contemporary pavement thickness determination methods, a new system has been developed, known as Aircraft Classification Rating–Pavement Classification Rating (ACR–PCR). This critical review found that ACR–PCR provides many improvements over ACN–PCN, including minimizing anomalies between pavement thickness design and subsequent pavement strength rating, the use of more representative aircraft traffic loadings and pavement structures, and the alignment of rigid and flexible subgrade support categories. However, ACR–PCR could be improved with regard to the representative subgrade characteristic values, the retention of an overly simple tire pressure category limit approach for surface protection, the provisions for single-wheeled light aircraft pavements, and the absence of a rational approach to strength rating that is substantially better than a usage-based approach but does not necessarily follow the formalized technical rating protocol. Despite these limitations, the current ACN–PCN system has been in place for over 40 years without significant change, so it is expected that ACR–PCR will be in place for many years as well. Consequently, airports should prepare for its imminent introduction, regardless of the associated limitations. Full article

19 pages, 20339 KB  
Article
Enhancing Colorimetric Detection of Nucleic Acids on Nitrocellulose Membranes: Cutting-Edge Applications in Diagnostics and Forensics
by Nidhi Subhashini, Yannick Kerler, Marcus M. Menger, Olga Böhm, Judith Witte, Christian Stadler and Alexander Griberman
Biosensors 2024, 14(9), 430; https://doi.org/10.3390/bios14090430 - 5 Sep 2024
Cited by 2 | Viewed by 2759
Abstract
This study re-introduces a protein-free rapid test method for nucleic acids on paper-based lateral flow assays, utilizing special multichannel nitrocellulose membranes and DNA–gold conjugates and achieving significantly enhanced sensitivity, easier protocols, reduced time of detection, reduced production costs and advanced multiplexing possibilities. A protein-free nucleic acid-based lateral flow assay (NALFA) with a limit of detection of 1 pmol of DNA is shown for the first time. The total production duration of such an assay was successfully reduced from the currently reported several days to just a few hours. The simplification and acceleration of the protocol make the method more accessible and practical for various applications. The developed method supports multiplexing, enabling the simultaneous detection of up to six DNA targets. This multiplexing capability is a significant improvement over traditional line tests and offers more comprehensive diagnostic potential in a single assay. The approach significantly reduces the run time compared to traditional line tests, which enhances the efficiency of diagnostic procedures. The protein-free aspect of this assay minimizes the prevalent complications of cross-reactivity in immunoassays, especially in cases of multiplexing. It is also demonstrated that the NALFA developed in this study is amplification-free and hence does not rely on specialized technicians, nor does it involve labour-intensive steps like DNA extraction and PCR. Overall, this study presents a robust, efficient, and highly sensitive platform for DNA or RNA detection, addressing several limitations of current methods documented in the literature. The advancements in sensitivity, cost reduction, production time, and multiplexing capabilities mark a substantial improvement, holding great potential for various applications in diagnostics, forensics, and molecular biology. Full article
(This article belongs to the Section Biosensors and Healthcare)

18 pages, 9963 KB  
Article
Research on Cable Tension Prediction Based on Neural Network
by Hongbin Zhang and Weihao Hu
Buildings 2024, 14(6), 1723; https://doi.org/10.3390/buildings14061723 - 8 Jun 2024
Cited by 3 | Viewed by 1204
Abstract
Conventional methods for calculating cable tension currently suffer from an excessive simplification of boundary conditions and a vague definition of the effective cable length, both of which make the calculated tensions inaccurate. Therefore, this study utilizes bridge field data to establish a BP neural network for tension prediction, with design cable length, line density, and frequency as the input parameters and cable tension as the output parameter. After setting aside the selection of an effective cable length and innovatively integrating particle swarm optimization with back propagation (PSO-BP) for tension prediction, it is found that the mean absolute percentage error (MAPE) between the BP neural network's predictions and the actual tension values is 7.93%. After optimization with the particle swarm algorithm, the MAPE of the neural network prediction is reduced to 2.78%. Both values significantly outperform those obtained from the theoretical equations of string vibration, and the MAPE of PSO-BP also surpasses that of the optimized calculation formulas in the literature. Utilizing the PSO-BP neural network for tension prediction avoids the inaccuracies caused by an excessive simplification of boundary conditions and a vague definition of the effective cable length; thus, it has clear practical value in engineering. Full article
(This article belongs to the Section Building Structures)
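The string-vibration baseline the network is compared with is the ideal taut-string relation (fixed ends, no bending stiffness, i.e. exactly the simplified boundary conditions the paper criticizes): f_n = (n/2L)·√(T/ρ), so T = 4ρL²(f_n/n)². A minimal sketch with made-up cable values:

```python
def taut_string_tension(line_density, length, freq, mode=1):
    """Tension from the n-th natural frequency of an ideal taut string:
    f_n = (n / (2 * L)) * sqrt(T / rho)  =>  T = 4 * rho * L**2 * (f_n / n)**2."""
    return 4.0 * line_density * length ** 2 * (freq / mode) ** 2

# Illustrative stay cable: rho = 50 kg/m, L = 100 m, fundamental frequency 0.5 Hz.
tension = taut_string_tension(50.0, 100.0, 0.5)    # 500000.0 N = 500 kN
```

Because this formula ignores bending stiffness and end fixity, its error grows for short, stiff cables, which is the gap the PSO-BP predictor is meant to close.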

14 pages, 13626 KB  
Article
An Adaptive Simplification Method for Coastlines Using a Skeleton Line “Bridge” Double Direction Buffering Algorithm
by Lulu Tang, Lihua Zhang, Jian Dong, Hongcheng Wei and Shuai Wei
ISPRS Int. J. Geo-Inf. 2024, 13(5), 155; https://doi.org/10.3390/ijgi13050155 - 7 May 2024
Cited by 2 | Viewed by 1681
Abstract
To address the problem that the current double direction buffering algorithm tends to seal off “bottleneck” areas when simplifying coastlines, an adaptive simplification method for coastlines using a skeleton line “bridge” double direction buffering algorithm is proposed. Firstly, from the perspective of visual constraints, the relationship between the buffer distance, the coastline line width, and the minimum recognition distance of the human eye is theoretically derived. Then, based on the construction of the coastline skeleton binary tree, the “bridge” skeleton line is extracted using a “source tracing” algorithm. Finally, adaptive shoreline simplification is realized by constructing a visual buffer of “bridge” skeleton lines to bridge the original resulting coastline and the local details. The experimental results show that the proposed method effectively solves the sealing problem of the current double direction buffering algorithm and significantly improves the quality of the simplification. Full article
