Search Results (21)

Search Parameters:
Keywords = face of a convex set

15 pages, 467 KB  
Article
Monopoly, Multi-Product Quality, Consumer Heterogeneity, and Market Segmentation
by Amit Gayer
Games 2025, 16(5), 49; https://doi.org/10.3390/g16050049 - 10 Sep 2025
Viewed by 221
Abstract
This paper introduces a novel ratio-based framework for analyzing how consumer heterogeneity translates into product differentiation in vertically structured monopoly markets. We consider a monopolist facing a continuum of consumers and a strictly convex production cost function and identify conditions under which the heterogeneity of preferences, measured by the length of the consumer type interval, maps into a corresponding range of offered qualities. The analysis shows that this mapping depends on the curvature of the marginal cost function: under linear costs, the relationship is proportional; under convex costs, heterogeneity expands faster than segmentation; and under concave costs, the reverse occurs. These findings offer a new lens for understanding endogenous market granularity in monopoly settings and have potential applicability in markets with vertically differentiated goods. We also show that under partial market coverage, this proportionality breaks down, even in the linear case, revealing a critical asymmetry in equilibrium structure. Full article
(This article belongs to the Special Issue Applications of Game Theory to Industrial Organization)
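The curvature result admits a deliberately simplified numerical illustration (a toy sketch under our own assumption that the quality offered to type θ satisfies MC(q(θ)) = θ, which is not the paper's screening model): the ratio between quality ranges then tracks, undershoots, or overshoots the ratio between type intervals exactly as the abstract describes.

```python
import numpy as np

def quality_range(mc_inverse, lo, hi):
    """Width of the offered-quality interval for consumer types in [lo, hi]."""
    return mc_inverse(hi) - mc_inverse(lo)

# Three marginal-cost shapes (hypothetical functional forms for illustration):
linear_inv  = lambda t: t           # MC(q) = q        (linear marginal cost)
convex_inv  = lambda t: np.sqrt(t)  # MC(q) = q^2      (convex marginal cost)
concave_inv = lambda t: t**2        # MC(q) = sqrt(q)  (concave marginal cost)

for name, inv in [("linear", linear_inv), ("convex", convex_inv), ("concave", concave_inv)]:
    narrow = quality_range(inv, 1.0, 2.0)   # type interval of width 1
    wide   = quality_range(inv, 1.0, 3.0)   # type interval of width 2
    print(f"{name:8s} ratio of quality ranges = {wide / narrow:.3f}")
```

Doubling the type interval exactly doubles the quality range in the linear case, less than doubles it when marginal cost is convex, and more than doubles it when marginal cost is concave.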

14 pages, 537 KB  
Article
Non-Uniqueness of Best-Of Option Prices Under Basket Calibration
by Mohammed Ahnouch, Lotfi Elaachak and Abderrahim Ghadi
Risks 2025, 13(6), 117; https://doi.org/10.3390/risks13060117 - 18 Jun 2025
Viewed by 501
Abstract
This paper demonstrates that perfectly calibrating a multi-asset model to observed market prices of all basket call options is insufficient to uniquely determine the price of a best-of call option. Previous research on multi-asset option pricing has primarily focused on complete market settings or assumed specific parametric models, leaving fundamental questions about model risk and pricing uniqueness in incomplete markets inadequately addressed. This limitation has critical practical implications: derivatives practitioners who hedge best-of options using basket-equivalent instruments face fundamental distributional uncertainty that compounds the well-recognized non-linearity challenges. We establish this non-uniqueness using convex analysis (extreme ray characterization demonstrating geometric incompatibility between payoff structures), measure theory (explicit construction of distinct equivalent probability measures), and geometric analysis (payoff structure comparison). Specifically, we prove that the set of equivalent probability measures consistent with observed basket prices contains distinct measures yielding different best-of option prices, with explicit no-arbitrage bounds [a_K, b_K] quantifying this uncertainty. Our theoretical contribution provides the first rigorous mathematical foundation for several empirically observed market phenomena: wide bid-ask spreads on extremal options, practitioners’ preference for over-hedging strategies, and substantial model reserves for exotic derivatives. We demonstrate through concrete examples that substantial model risk persists even with perfect basket calibration and equivalent measure constraints. For risk-neutral pricing applications, equivalent martingale measure constraints can be imposed using optimal transport theory, though this requires additional mathematical complexity via Schrödinger bridge techniques while preserving our fundamental non-uniqueness results. The findings establish that additional market instruments beyond basket options are mathematically necessary for robust exotic derivative pricing. Full article
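The basket-versus-best-of gap admits a tiny discrete illustration (our own construction, not taken from the paper): two joint laws of (S1, S2) that give the same distribution of the sum, and hence identical prices for every basket call, yet different best-of call prices.

```python
# Two hypothetical joint laws for (S1, S2); probabilities sum to 1.
law_a = {(100.0, 100.0): 1.0}                      # sum is 200 surely
law_b = {(120.0, 80.0): 0.5, (80.0, 120.0): 0.5}   # sum is 200 surely

def basket_call(law, strike):
    """Price (expected payoff, zero rates) of a call on S1 + S2."""
    return sum(p * max(s1 + s2 - strike, 0.0) for (s1, s2), p in law.items())

def best_of_call(law, strike):
    """Price of a call on max(S1, S2)."""
    return sum(p * max(max(s1, s2) - strike, 0.0) for (s1, s2), p in law.items())

# Every basket call agrees under the two laws...
for k in range(0, 401, 10):
    assert basket_call(law_a, k) == basket_call(law_b, k)

# ...yet the best-of call does not: 10.0 under law_a vs 30.0 under law_b at K = 90.
print(best_of_call(law_a, 90.0), best_of_call(law_b, 90.0))
```

Because the sum is 200 in every state of both laws, no basket instrument can distinguish them, which is exactly the calibration gap the paper formalizes.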

25 pages, 40577 KB  
Article
Laser SLAM Matching Localization Method for Subway Tunnel Point Clouds
by Yi Zhang, Feiyang Dong, Qihao Sun and Weiwei Song
Sensors 2025, 25(12), 3681; https://doi.org/10.3390/s25123681 - 12 Jun 2025
Cited by 1 | Viewed by 710
Abstract
When facing geometrically similar environments such as subway tunnels, Scan-Map registration is highly dependent on the correct initial value of the pose, otherwise mismatching is prone to occur, which limits the application of SLAM (Simultaneous Localization and Mapping) in tunnels. We propose a novel coarse-to-fine registration strategy that includes geometric feature extraction and a keyframe-based pose optimization model. The method involves initial feature point set acquisition through point distance calculations, followed by the extraction of line and plane features, and convex hull features based on the normal vector’s change rate. Coarse registration is achieved through rotation and translation using three types of feature sets, with the resulting pose serving as the initial value for fine registration via Point-Plane ICP. The algorithm’s accuracy and efficiency are validated using Innovusion lidar scans of a subway tunnel, achieving a single-frame point cloud registration accuracy of 3 cm within 0.7 s, significantly improving upon traditional registration algorithms. The study concludes that the proposed method effectively enhances SLAM’s applicability in challenging tunnel environments, ensuring high registration accuracy and efficiency. Full article
(This article belongs to the Section Navigation and Positioning)
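The fine-registration stage uses Point-Plane ICP; a single linearized Gauss-Newton step of the generic point-to-plane objective (a textbook sketch, not the paper's implementation) can be written as:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step.

    Solves min_x sum_i ((R(w) p_i + t - q_i) . n_i)^2 with R(w) ~ I + [w]_x,
    returning the 6-vector x = [w, t] (rotation increment, translation).
    """
    a = np.hstack([np.cross(src, normals), normals])   # N x 6 Jacobian
    b = -np.einsum("ij,ij->i", src - dst, normals)     # N residuals
    x, *_ = np.linalg.lstsq(a, b, rcond=None)
    return x

# Synthetic corner: points on three orthogonal planes, with known normals,
# so the 6-DoF system is well conditioned.
rng = np.random.default_rng(0)
pts, nrm = [], []
for axis in range(3):
    p = rng.uniform(0.0, 1.0, size=(50, 3))
    p[:, axis] = 0.0
    n = np.zeros(3); n[axis] = 1.0
    pts.append(p); nrm.append(np.tile(n, (50, 1)))
dst = np.vstack(pts); normals = np.vstack(nrm)

t_true = np.array([0.02, -0.01, 0.03])
src = dst + t_true                 # scan displaced by a small translation
x = point_to_plane_step(src, dst, normals)
print(x[3:] + t_true)              # recovered translation is -t_true, so this is ~0
```

In a real pipeline this step is iterated with re-matching of correspondences; the coarse stage described in the abstract supplies the initial pose that keeps those matches from locking onto the wrong tunnel cross-section.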

26 pages, 620 KB  
Article
Optimal Investment Based on Performance Measure and Stochastic Benchmark Under PI and Position Constraints
by Chengzhe Wang, Congjin Zhou and Yinghui Dong
Mathematics 2025, 13(11), 1846; https://doi.org/10.3390/math13111846 - 2 Jun 2025
Viewed by 407
Abstract
We consider the portfolio selection problem faced by a manager under the performance ratio with position and portfolio insurance (PI) constraints. By making use of a dual control method in an incomplete market setting, we find the unique pricing kernel in the presence of closed convex cone control constraints. Then, following the same arguments as in the complete market case, we derive the explicit form of the optimal investment strategy by combining the linearization method, the Lagrangian method, and the concavification technique. Full article
(This article belongs to the Special Issue Probability Statistics and Quantitative Finance)
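The concavification technique replaces a non-concave objective with its concave envelope, which on a sample grid is just the upper hull of the graph. A generic numerical sketch (not the paper's derivation):

```python
import numpy as np

def concave_envelope(x, f):
    """Concave envelope of samples (x_i, f_i), with x sorted ascending.

    Keeps the upper hull of the points (so chord slopes are non-increasing),
    then interpolates linearly between the kept knots.
    """
    hull = []  # indices of upper-hull knots
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            cross = (x[i1] - x[i0]) * (f[i] - f[i0]) - (f[i1] - f[i0]) * (x[i] - x[i0])
            if cross >= 0:       # middle knot lies on/below the chord: drop it
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], f[hull])

x = np.linspace(0.0, 1.0, 201)
f = np.sin(6.0 * x) * x           # a deliberately non-concave utility-like curve
env = concave_envelope(x, f)
print(float(np.max(env - f)))     # > 0: the envelope sits strictly above the dips
```

Maximizing expectations against the envelope instead of the original function is what makes the Lagrangian step in such problems tractable.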

40 pages, 794 KB  
Article
An Automated Decision Support System for Portfolio Allocation Based on Mutual Information and Financial Criteria
by Massimiliano Kaucic, Renato Pelessoni and Filippo Piccotto
Entropy 2025, 27(5), 480; https://doi.org/10.3390/e27050480 - 29 Apr 2025
Viewed by 827
Abstract
This paper introduces a two-phase decision support system based on information theory and financial practices to assist investors in solving cardinality-constrained portfolio optimization problems. Firstly, the approach employs a stock-picking procedure based on an interactive multi-criteria decision-making method (the so-called TODIM method). More precisely, the best-performing assets from the investable universe are identified using three financial criteria. The first criterion is based on mutual information, and it is employed to capture the microstructure of the stock market. The second one is the momentum, and the third is the upside-to-downside beta ratio. To calculate the preference weights used in the chosen multi-criteria decision-making procedure, two methods are compared, namely equal and entropy weighting. In the second stage, this work considers a portfolio optimization model where the objective function is a modified version of the Sharpe ratio, consistent with the choices of a rational agent even when faced with negative risk premiums. Additionally, the portfolio design incorporates a set of bound, budget, and cardinality constraints, together with a set of risk budgeting restrictions. To solve the resulting non-smooth programming problem with non-convex constraints, this paper proposes a variant of the distance-based parameter adaptation for success-history-based differential evolution with double crossover (DISH-XX) algorithm equipped with a hybrid constraint-handling approach. Numerical experiments on the US and European stock markets over the past ten years are conducted, and the results show that the flexibility of the proposed portfolio model allows the better control of losses, particularly during market downturns, thereby providing superior or at least comparable ex post performance with respect to several benchmark investment strategies. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
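The entropy-weighting alternative assigns larger weights to criteria whose values are more unevenly spread across assets. A minimal sketch of the standard entropy-weight construction (the scores below are illustrative, not the paper's data):

```python
import numpy as np

def entropy_weights(scores):
    """Entropy weights for a criteria matrix (rows = assets, cols = criteria).

    Assumes all scores are positive; criteria whose values are spread more
    unevenly across assets carry more information and get larger weights.
    """
    p = scores / scores.sum(axis=0)                     # column-wise proportions
    n = scores.shape[0]
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)  # normalized to [0, 1]
    d = 1.0 - entropy                                   # degree of divergence
    return d / d.sum()

scores = np.array([[0.30, 0.9],    # criterion 1 varies a lot across assets,
                   [0.05, 1.0],    # criterion 2 hardly at all
                   [0.65, 1.1]])
w = entropy_weights(scores)
print(w)                           # first weight dominates
```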

18 pages, 4229 KB  
Article
Reconfigurable Intelligent Surface Assisted Target Three-Dimensional Localization with 2-D Radar
by Ziwei Liu, Shanshan Zhao, Biao Xie and Jirui An
Remote Sens. 2024, 16(11), 1936; https://doi.org/10.3390/rs16111936 - 28 May 2024
Cited by 3 | Viewed by 1602
Abstract
Battlefield surveillance radar is usually 2-D radar, which cannot realize target three-dimensional localization, leading to poor resolution for the air target in the elevation dimension. Previous researchers have used the Traditional Height Finder Radar (HFR) or multiple 2-D radar networking to estimate the target three-dimensional location. However, all of them face the problems of high cost, poor real-time performance and high requirement of space–time registration. In this paper, Reconfigurable Intelligent Surfaces (RISs) with low cost are introduced into the 2-D radar to realize the target three-dimensional localization. Taking advantage of the wide beam of 2-D radar in the elevation dimension, several Unmanned Aerial Vehicles (UAVs) carrying RISs are set in the receiving beam to form multiple auxiliary measurement channels. In addition, the traditional 2-D radar measurements combined with the auxiliary channel measurements are used to realize the target three-dimensional localization by solving a nonlinear least square problem with a convex optimization method. For the proposed RIS-assisted target three-dimensional localization problem, the Cramer–Rao Lower Bound (CRLB) is derived to measure the target localization accuracy. Simulation results verify the effectiveness of the proposed 3-D localization method, and the influences of the number, the positions and the site errors of the RISs on the localization accuracy are covered. Full article
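The localization step solves a nonlinear least-squares problem from the combined measurements; a generic Gauss-Newton sketch on range-only data (hypothetical anchor geometry, noise-free ranges, and without the RIS path model used in the paper):

```python
import numpy as np

def gauss_newton_locate(anchors, ranges, x0, iters=20):
    """Estimate a 3-D position from range measurements to known anchors
    by Gauss-Newton on sum_i (||x - a_i|| - r_i)^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                     # M x 3
        dist = np.linalg.norm(diff, axis=1)
        jac = diff / dist[:, None]             # d(dist_i)/dx
        res = dist - ranges
        x -= np.linalg.lstsq(jac, res, rcond=None)[0]
    return x

# Hypothetical geometry: a 2-D radar at the origin plus three RIS-carrying UAVs.
anchors = np.array([[0.0, 0.0, 0.0],
                    [500.0, 0.0, 200.0],
                    [0.0, 500.0, 250.0],
                    [400.0, 400.0, 150.0]])
target = np.array([1200.0, 900.0, 800.0])
ranges = np.linalg.norm(target - anchors, axis=1)   # noise-free for the sketch
est = gauss_newton_locate(anchors, ranges, x0=[1000.0, 1000.0, 1000.0])
print(np.round(est, 3))
```

With noisy measurements the same iteration minimizes the weighted residual, and the CRLB mentioned in the abstract lower-bounds the covariance of any such unbiased estimate.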

13 pages, 4235 KB  
Article
Scalp Irradiation with 3D-Milled Bolus: Initial Dosimetric and Clinical Experience
by Khaled Dibs, Emile Gogineni, Sachin M. Jhawar, Sujith Baliga, John C. Grecula, Darrion L. Mitchell, Joshua Palmer, Karl Haglund, Therese Youssef Andraos, Wesley Zoller, Ashlee Ewing, Marcelo Bonomi, Priyanka Bhateja, Gabriel Tinoco, David Liebner, James W. Rocco, Matthew Old, Mauricio E. Gamez, Arnab Chakravarti, David J. Konieczkowski and Dukagjin M. Blakaj
Cancers 2024, 16(4), 688; https://doi.org/10.3390/cancers16040688 - 6 Feb 2024
Cited by 3 | Viewed by 2043
Abstract
Background and purpose: A bolus is required when treating scalp lesions with photon radiation therapy. Traditional bolus materials face several issues, including air gaps and setup difficulty due to irregular, convex scalp geometry. A 3D-milled bolus is custom-formed to match individual patient anatomy, allowing improved dose coverage and homogeneity. Here, we describe the creation process of a 3D-milled bolus and report the outcomes for patients with scalp malignancies treated with Volumetric Modulated Arc Therapy (VMAT) utilizing a 3D-milled bolus. Materials and methods: Twenty-two patients treated from 2016 to 2022 using a 3D-milled bolus and VMAT were included. Histologies included squamous cell carcinoma (n = 14, 64%) and angiosarcoma (n = 8, 36%). A total of 7 (32%) patients were treated in the intact and 15 (68%) in the postoperative setting. The median prescription dose was 66.0 Gy (range: 60.0–69.96). Results: The target included the entire scalp for 8 (36%) patients; in the remaining 14 (64%), the median ratio of planning target volume to scalp volume was 35% (range: 25–90%). The median dose homogeneity index was 1.07 (range: 1.03–1.15). Six (27%) patients experienced acute grade 3 dermatitis and one (5%) patient experienced late grade 3 skin ulceration. With a median follow-up of 21.4 months (range: 4.0–75.4), the 18-month rates of locoregional control and overall survival were 75% and 79%, respectively. Conclusions: To our knowledge, this is the first study to report the clinical outcomes for patients with scalp malignancies treated with the combination of VMAT and a 3D-milled bolus. This technique resulted in favorable clinical outcomes and an acceptable toxicity profile in comparison with historic controls and warrants further investigation in a larger prospective study. Full article
(This article belongs to the Special Issue Emerging Technologies in Head and Neck Cancer Surgery)

22 pages, 1235 KB  
Article
An Integrated Access and Backhaul Approach to Sustainable Dense Small Cell Network Planning
by Jie Zhang, Qiao Wang, Paul Mitchell and Hamed Ahmadi
Information 2024, 15(1), 19; https://doi.org/10.3390/info15010019 - 28 Dec 2023
Cited by 5 | Viewed by 2949 | Correction
Abstract
Integrated access and backhaul (IAB) networks offer transformative benefits, primarily their deployment flexibility in locations where fixed backhaul faces logistical or financial challenges. This flexibility is further enhanced by IAB’s inherent ability for adaptive network expansion. However, existing IAB network planning models, which are grounded in the facility location problem and are predominantly addressed through linear programming, tend to neglect crucial geographical constraints. These constraints arise from the specific deployment constraints related to the positioning of IAB donors to the core network, and the geographic specificity required for IAB-node placements. These aspects expose an evident research void. To bridge this, our research introduces a geographically aware optimization methodology tailored for IAB deployments. In this paper, we detail strategies for both single-hop and multi-hop situations, concentrating on IAB donors distribution and geographical constraints. Uniquely in this study, we employ the inherent data rate limitations of network nodes to determine the maximum feasible hops, differing from traditional fixed maximum hop count methods. We devise two optimization schemes for single-hop and multi-hop settings and introduce a greedy algorithm to effectively address the non-convex multi-hop challenge. Extensive simulations across various conditions (such as diverse donor numbers and node separations) were undertaken, with the outcomes assessed against the benchmark of the single-hop scenario’s optimal solution. Our findings reveal that the introduced algorithm delivers efficient performance for geographically constrained network planning. Full article
(This article belongs to the Section Information Applications)
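The greedy heuristic for the non-convex multi-hop case can be sketched generically (our own simplification, not the paper's exact algorithm): repeatedly add the candidate site that newly covers the most users while staying within wireless-backhaul reach of the already-connected network.

```python
import math

def greedy_iab_placement(donors, candidates, users, access_r, backhaul_r, max_nodes):
    """Greedy placement: each step adds the reachable candidate site that
    newly covers the most users. Distances are plain Euclidean."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    connected = list(donors)        # nodes with a (possibly multi-hop) path to a donor
    covered = set()
    placed = []
    for _ in range(max_nodes):
        best, best_gain = None, 0
        for c in candidates:
            if c in placed:
                continue
            if min(dist(c, n) for n in connected) > backhaul_r:
                continue            # no wireless backhaul link within reach
            gain = sum(1 for i, u in enumerate(users)
                       if i not in covered and dist(c, u) <= access_r)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break
        placed.append(best)
        connected.append(best)
        covered |= {i for i, u in enumerate(users) if dist(best, u) <= access_r}
    return placed, covered

donors = [(0.0, 0.0)]
candidates = [(8.0, 0.0), (16.0, 0.0), (8.0, 8.0)]
users = [(7.0, 1.0), (9.0, -1.0), (15.0, 1.0), (17.0, 0.0), (8.0, 7.0)]
placed, covered = greedy_iab_placement(donors, candidates, users,
                                       access_r=3.0, backhaul_r=10.0, max_nodes=3)
print(placed, sorted(covered))
```

Note that the second placed node is only reachable through the first, i.e., the greedy step naturally builds multi-hop backhaul chains; the paper additionally limits chain length through the nodes' data-rate budgets.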

29 pages, 15217 KB  
Article
MSGL+: Fast and Reliable Model Selection-Inspired Graph Metric Learning
by Cheng Yang, Fei Zheng, Yujie Zou, Liang Xue, Chao Jiang, Shuangyu Liu, Bochao Zhao and Haoyang Cui
Electronics 2024, 13(1), 44; https://doi.org/10.3390/electronics13010044 - 20 Dec 2023
Viewed by 1706
Abstract
The problem of learning graph-based data structures from data has attracted considerable attention in the past decade. Different types of data can be used to infer the graph structure, such as graphical Lasso, which is learned from multiple graph signals or graph metric learning based on node features. However, most existing methods that use node features to learn the graph face difficulties when the label signals of the data are incomplete. In particular, the pair-wise distance metric learning problem becomes intractable as the dimensionality of the node features increases. To address this challenge, we propose a novel method called MSGL+. MSGL+ is inspired from model selection, leverages recent advancements in graph spectral signal processing (GSP), and offers several key innovations: (1) Polynomial Interpretation: We use a polynomial function of a certain order on the graph Laplacian to represent the inverse covariance matrix of the graph nodes to rigorously formulate an optimization problem. (2) Convex Formulation: We formulate a convex optimization objective with a cone constraint that optimizes the coefficients of the polynomial, which makes our approach efficient. (3) Linear Constraints: We convert the cone constraint of the objective to a set of linear ones to further ensure the efficiency of our method. (4) Optimization Objective: We explore the properties of these linear constraints within the optimization objective, avoiding sub-optimal results by the removal of the box constraints on the optimization variables, and successfully further reduce the number of variables compared to our preliminary work, MSGL. (5) Efficient Solution: We solve the objective using the efficient linear-program-based Frank–Wolfe algorithm. 
Application examples, including binary classification, multi-class classification, binary image denoising, and time-series analysis, demonstrate that MSGL+ achieves competitive accuracy performance with a significant speed advantage compared to existing graphical Lasso and feature-based graph learning methods. Full article
(This article belongs to the Collection Graph Machine Learning)
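Innovation (1), modeling the precision (inverse covariance) matrix as a polynomial of the graph Laplacian, is easy to sketch numerically (the coefficients below are hypothetical; MSGL+ optimizes them under the linear constraints the abstract describes):

```python
import numpy as np

# A small path graph: 0 - 1 - 2 - 3
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
laplacian = np.diag(adj.sum(axis=1)) - adj

# Precision matrix as c0*I + c1*L + c2*L^2 with nonnegative coefficients,
# so every Laplacian eigenvalue lam maps to c0 + c1*lam + c2*lam^2 > 0.
coeffs = [0.5, 1.0, 0.25]
precision = sum(c * np.linalg.matrix_power(laplacian, k)
                for k, c in enumerate(coeffs))

eigvals = np.linalg.eigvalsh(precision)
print(eigvals)   # all positive: a valid precision matrix
```

Constraining the polynomial rather than the full matrix is what shrinks the optimization to a handful of variables, which is the efficiency point the abstract makes.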

12 pages, 506 KB  
Article
Multi-Objective Order Scheduling via Reinforcement Learning
by Sirui Chen, Yuming Tian and Lingling An
Algorithms 2023, 16(11), 495; https://doi.org/10.3390/a16110495 - 24 Oct 2023
Cited by 2 | Viewed by 2919
Abstract
Order scheduling is of great significance in the internet and communication industries. With the rapid development of the communication industry and the increasing variety of user demands, the number of work orders for communication operators has grown exponentially. Most of the research that tries to solve the order scheduling problem has focused on improving assignment rules based on real-time performance. However, these traditional methods face challenges such as poor real-time performance, high human resource consumption, and low efficiency. Therefore, it is crucial to solve multi-objective problems in order to obtain a robust order scheduling policy to meet the multiple requirements of order scheduling in real problems. The priority dispatching rule (PDR) is a heuristic method that is widely used in real-world scheduling systems. In this paper, we propose an approach to automatically optimize the Priority Dispatching Rule (PDR) using a deep multi-objective reinforcement learning agent and to optimize the weight vector with a convex hull to obtain the most objective and efficient weights. The convex hull method is employed to calculate the maximal linearly scalarized value, enabling us to determine the optimal weight vector objectively and achieve a balanced optimization of each objective rather than relying on subjective weight settings based on personal experience. Experimental results on multiple datasets demonstrate that our proposed algorithm achieves competitive performance compared to existing state-of-the-art order scheduling algorithms. Full article
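The convex-hull step rests on a standard fact about linear scalarization: only value vectors on the convex hull can maximize w·v for some weight vector w, so a Pareto-optimal point strictly inside the hull is never selected. A two-objective sketch with made-up policy values:

```python
import numpy as np

def scalarized_winners(values, n_weights=101):
    """Indices of value vectors that maximize w . v for some convex weight w."""
    winners = set()
    for t in np.linspace(0.0, 1.0, n_weights):
        w = np.array([t, 1.0 - t])
        winners.add(int(np.argmax(values @ w)))
    return winners

# Hypothetical (objective1, objective2) values for three scheduling policies.
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.45, 0.45]])   # undominated, yet strictly inside the hull
print(sorted(scalarized_winners(values)))   # policy 2 is never a winner
```

For any w = (t, 1 - t) the scalarized values are t, 1 - t, and 0.45; since max(t, 1 - t) >= 0.5, the interior policy can never win, which is why the hull alone determines the candidate weight vectors.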

15 pages, 6362 KB  
Article
Geostatistical Evaluation of a Porphyry Copper Deposit Using Copulas
by Babak Sohrabian, Saeed Soltani-Mohammadi, Rashed Pourmirzaee and Emmanuel John M. Carranza
Minerals 2023, 13(6), 732; https://doi.org/10.3390/min13060732 - 29 May 2023
Cited by 8 | Viewed by 2387
Abstract
Kriging has some problems, such as ignoring sample values when assigning weights to them, reducing the dependence structure to a single covariance function, and facing negative confidence bounds. In view of these problems with kriging, in this study we used a convex linear combination of Archimedean copulas to estimate Cu in the Iju porphyry Cu deposit in Iran. To delineate the spatial dependence structure of Cu, the best Frank, Gumbel, and Clayton copula models were determined at different lags to fit with higher-order polynomials. The resulting Archimedean copulas were able to describe all kinds of spatial dependence structures, including asymmetric lower and upper tails. The copula and kriging methods were compared through a split-sample cross-validation test whereby the drill-hole data were divided into modeling and validation sets. The cross-validation showed better results for geostatistical estimation through copula than through kriging in terms of accuracy and precision. The mean of the validation set, which was 0.1218%, was estimated as 0.1278% and 0.1369% by the copula and kriging methods, respectively. The correlation coefficient between the estimated and measured values was higher for the copula method than for the kriging method. The mean square error was substantially smaller for copula (0.0143%²) than for kriging (0.0162%²). A boxplot of the results demonstrated that the copula method was better at reproducing the Cu distribution and had fewer smoothing problems. Full article
(This article belongs to the Special Issue Geostatistics in the Life Cycle of Mines)
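The three Archimedean families named here have closed forms, and a convex combination of copulas is again a copula. A minimal evaluation sketch (the θ values are illustrative, not fitted to the deposit data):

```python
import math

def clayton(u, v, theta):
    return max(u**-theta + v**-theta - 1.0, 0.0) ** (-1.0 / theta)

def gumbel(u, v, theta):
    s = (-math.log(u))**theta + (-math.log(v))**theta
    return math.exp(-s ** (1.0 / theta))

def frank(u, v, theta):
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

def convex_mix(u, v, weights, thetas):
    """Convex linear combination of the three Archimedean families."""
    parts = (clayton(u, v, thetas[0]), gumbel(u, v, thetas[1]), frank(u, v, thetas[2]))
    return sum(wt * c for wt, c in zip(weights, parts))

w = (0.5, 0.3, 0.2)    # weights sum to 1
th = (2.0, 1.5, 4.0)   # illustrative dependence parameters
print(convex_mix(0.4, 0.7, w, th))
```

Clayton contributes lower-tail dependence and Gumbel upper-tail dependence, which is how the mixture can capture the asymmetric tails mentioned in the abstract.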

24 pages, 3224 KB  
Article
Quadrotor Path Planning and Polynomial Trajectory Generation Using Quadratic Programming for Indoor Environments
by Muhammad Awais Arshad, Jamal Ahmed and Hyochoong Bang
Drones 2023, 7(2), 122; https://doi.org/10.3390/drones7020122 - 9 Feb 2023
Cited by 9 | Viewed by 8338
Abstract
This study considers the problem of generating optimal, kino-dynamic-feasible, and obstacle-free trajectories for a quadrotor through indoor environments. We explore methods to overcome the challenges faced by quadrotors for indoor settings due to their higher-order vehicle dynamics, relatively limited free spaces through the environment, and challenging optimization constraints. In this research, we propose a complete pipeline for path planning, trajectory generation, and optimization for quadrotor navigation through indoor environments. We formulate the trajectory generation problem as a Quadratic Program (QP) with Obstacle-Free Corridor (OFC) constraints. The OFC is a collection of convex overlapping polyhedra that model tunnel-like free connecting space from current configuration to goal configuration. Linear inequality constraints provided by the polyhedra of OFCs are used in the QP for real-time optimization performance. We demonstrate the feasibility of our approach, its performance, and its completeness by simulating multiple environments of differing sizes and varying obstacle densities using MATLAB Optimization Toolbox. We found that our approach has higher chances of convergence of optimization solver as compared to current approaches for challenging scenarios. We show that our proposed pipeline can plan complete paths and optimize trajectories in a few hundred milliseconds and within approximately ten iterations of the optimization solver for everyday indoor settings. Full article
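The QP-with-corridor structure can be illustrated with a stripped-down 1-D analogue (our own simplification, not the paper's formulation): minimize the squared second differences of waypoints subject to box-type corridor constraints, solved here by projected gradient descent rather than a QP solver.

```python
import numpy as np

def smooth_in_corridor(z0, lo, hi, steps=4000, lr=0.05):
    """Minimize the sum of squared second differences of waypoints z,
    with z[0], z[-1] fixed and lo <= z <= hi (the 'corridor'),
    by projected gradient descent."""
    z = z0.copy()
    for _ in range(steps):
        acc = z[:-2] - 2.0 * z[1:-1] + z[2:]       # discrete accelerations
        grad = np.zeros_like(z)
        grad[:-2] += 2.0 * acc
        grad[1:-1] -= 4.0 * acc
        grad[2:] += 2.0 * acc
        grad[0] = grad[-1] = 0.0                   # endpoints pinned
        z = np.clip(z - lr * grad, lo, hi)         # project onto the corridor
        z[0], z[-1] = z0[0], z0[-1]
    return z

n = 20
z0 = np.zeros(n); z0[-1] = 1.0                     # start at 0, goal at 1
z0[1:-1] = np.linspace(0.0, 1.0, n)[1:-1]
lo = np.full(n, -0.1); hi = np.full(n, 1.1)
hi[8:12] = 0.3                                     # a narrow passage mid-route
z = smooth_in_corridor(z0, lo, hi)
print(float(np.sum((z[:-2] - 2 * z[1:-1] + z[2:]) ** 2)))
```

In the paper's full setting the corridor is a chain of overlapping polyhedra (general linear inequalities A x <= b per segment) and the objective penalizes higher derivatives of a polynomial trajectory, but the convex structure that makes the QP fast is the same.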

18 pages, 3389 KB  
Article
Detecting Multi-Density Urban Hotspots in a Smart City: Approaches, Challenges and Applications
by Eugenio Cesario, Paolo Lindia and Andrea Vinci
Big Data Cogn. Comput. 2023, 7(1), 29; https://doi.org/10.3390/bdcc7010029 - 8 Feb 2023
Cited by 13 | Viewed by 4083
Abstract
Leveraged by a large-scale diffusion of sensing networks and scanning devices in modern cities, huge volumes of geo-referenced urban data are collected every day. Such an amount of information is analyzed to discover data-driven models, which can be exploited to tackle the major issues that cities face, including air pollution, virus diffusion, human mobility, crime forecasting, traffic flows, etc. In particular, the detection of city hotspots is de facto a valuable organization technique for framing detailed knowledge of a metropolitan area, providing high-level summaries for spatial datasets, which are a valuable support for planners, scientists, and policymakers. However, while classic density-based clustering algorithms show to be suitable for discovering hotspots characterized by homogeneous density, their application on multi-density data can produce inaccurate results. In fact, a proper threshold setting is very difficult when clusters in different regions have considerably different densities, or clusters with different density levels are nested. For such a reason, since metropolitan cities are heavily characterized by variable densities, multi-density clustering seems to be more appropriate for discovering city hotspots. Indeed, such algorithms rely on multiple minimum threshold values and are able to detect multiple pattern distributions of different densities, aiming at distinguishing between several density regions, which may or may not be nested and are generally of a non-convex shape. This paper discusses the research issues and challenges for analyzing urban data, aimed at discovering multi-density hotspots in urban areas. In particular, the study compares the four approaches (DBSCAN, OPTICS-xi, HDBSCAN, and CHD) proposed in the literature for clustering urban data and analyzes their performance on both state-of-the-art and real-world datasets. 
Experimental results show that multi-density clustering algorithms generally achieve better results on urban data than classic density-based algorithms. Full article
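Of the compared approaches, DBSCAN is the single-density baseline; a minimal pure-Python version (a didactic sketch, not the evaluated implementations) shows the single eps threshold that multi-density data strains:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: labels >= 0 are cluster ids, -1 is noise."""
    n = len(points)
    neighbors = [[j for j in range(n) if math.dist(points[i], points[j]) <= eps]
                 for i in range(n)]
    labels = [None] * n
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) < min_pts:
            labels[i] = -1                     # noise (may be claimed later)
            continue
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster            # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:   # core point: keep expanding
                frontier.extend(neighbors[j])
        cluster += 1
    return labels

pts = ([(x / 10, y / 10) for x in range(5) for y in range(5)] +   # dense blob
       [(5 + x, 5 + y) for x in range(3) for y in range(3)] +     # sparse blob
       [(20.0, 20.0)])                                            # isolated noise
labels = dbscan(pts, eps=1.5, min_pts=4)
print(sorted(set(labels)))
```

Here a single eps happens to separate both blobs, but when density levels differ more sharply, or clusters are nested, no single threshold works, which is the gap the multi-density algorithms in the comparison address with multiple minimum thresholds.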

28 pages, 569 KB  
Article
Natural Convection Heat Transfer from an Isothermal Plate
by Aubrey Jaffer
Thermo 2023, 3(1), 148-175; https://doi.org/10.3390/thermo3010010 - 3 Feb 2023
Cited by 16 | Viewed by 7450
Abstract
Using boundary-layer theory, natural convection heat transfer formulas that are accurate over a wide range of Rayleigh numbers (Ra) were developed in the 1970s and 1980s for vertical and downward-facing plates. A comprehensive formula for upward-facing plates remained unsolved because they do not form conventional boundary-layers. From the thermodynamic constraints on heat-engine efficiency, the novel approach presented here derives formulas for natural convection heat transfer from isothermal plates. The union of four peer-reviewed data-sets spanning 1 < Ra < 10^12 has 5.4% root-mean-squared relative error (RMSRE) from the new upward-facing heat transfer formula. Applied to downward-facing plates, this novel approach outperforms the Schulenberg (1985) formula’s 4.6% RMSRE with 3.8% on four peer-reviewed data-sets spanning 10^6 < Ra < 10^12. The introduction of the harmonic mean as the characteristic length metric for vertical and downward-facing plates extends those rectangular plate formulas to other convex shapes, achieving 3.8% RMSRE on vertical disk convection from Hassani and Hollands (1987) and 3.2% from Kobus and Wedekind (1995). Full article
(This article belongs to the Special Issue Feature Papers of Thermo in 2022)

19 pages, 36978 KB  
Article
Algorithms for the Recognition of the Hull Structures’ Elementary Plate Panels and the Determination of Their Parameters in a Ship CAD System
by Sergey Ryumin, Vladimir Tryaskin and Kirill Plotnikov
J. Mar. Sci. Eng. 2023, 11(1), 189; https://doi.org/10.3390/jmse11010189 - 11 Jan 2023
Cited by 1 | Viewed by 2778
Abstract
The article deals with some issues of geometric modeling of ship hull structures in a specialized CAD system. Stiffened shells and platings should be idealized as a set of elementary plate panels for the purpose of structural design using local strength and buckling requirements. In the process of geometric modeling and creating the database for calculation, a special searching algorithm for the closed loops of every panel should be used. This algorithm must offer good performance and versatility. In this paper, the authors suggest an original algorithm used in the CADS-Hull software developed at SMTU. It is based on generating a regular field of points within the large contour of the considered structure. A series of rays is built from every point to find intersections. It is shown that this algorithm works well for structures (expansions, decks, bulkheads, etc.) with non-orthogonal boundaries. Some tasks for logical operations with the found panels are also discussed. One of them is the clipping of a panel or plate polygon by the boundaries of a considered structure (expansion contour, hull lines). The authors developed a generic method of polygon clipping. It is based on rotating the clipping convex polygons together with the clipped polygons. All faces of the latter that are in the negative half-plane are removed. Some problems of collecting data for every found panel are discussed. An original algorithm for defining the smaller and larger sizes of irregular and triangular panels is also given in this paper. Full article
(This article belongs to the Special Issue Strength of Ship Structures)
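The panel-search step casts rays from seed points and counts intersections with contour edges; the underlying inside/outside test is the classic even-odd (ray casting) rule. A generic sketch, not the CADS-Hull implementation:

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: cast a horizontal ray from pt toward +x and count
    crossings with the polygon's edges; an odd count means 'inside'."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                          # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# A non-orthogonal panel contour, as on an expansion or a deck with curved edges.
panel = [(0.0, 0.0), (6.0, 1.0), (7.0, 4.0), (3.0, 5.0), (-1.0, 3.0)]
print(point_in_polygon((3.0, 2.5), panel), point_in_polygon((8.0, 2.0), panel))
```

Running the test over a regular field of seed points, as the article describes, tags which grid points fall inside each candidate panel loop regardless of how skewed its boundary is.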
