Article

RMVC: A Validated Algorithmic Framework for Decision-Making Under Uncertainty

Department of Mathematics, Bursa Uludag University, Nilufer, Bursa 16059, Turkiye
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2693; https://doi.org/10.3390/math13162693
Submission received: 31 July 2025 / Revised: 16 August 2025 / Accepted: 18 August 2025 / Published: 21 August 2025
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

The reliability of decision-making algorithms within soft set theory is fundamentally constrained by their underlying membership functions. Traditional binary approaches overlook the implicit connections between the attributes a candidate possesses and those it lacks—connections that can be inferred from the wider candidate pool. To address this core challenge, this paper puts forward the Relational Membership Value Calculation (RMVC), an algorithmic framework whose core is a fine-grained relational membership function. Our approach moves beyond binary logic to capture these nuanced interrelationships. We provide a rigorous theoretical analysis of the proposed algorithm, including its computational complexity and robustness, which is validated through a comprehensive sensitivity analysis. Crucially, a comparative analysis using the Gini Index quantitatively demonstrates that our method provides significantly higher granularity and discriminatory power on a representative case study. The RMVC is implemented as an open-source Python program, providing a foundational tool to enhance the reasoning capabilities of AI-driven decision support and expert systems.

1. Introduction

Soft set theory provides a robust and flexible framework for addressing problems characterized by uncertainty, ambiguity, or incomplete information. Unlike conventional set theories, which often struggle with imprecise data, soft set theory offers a more nuanced approach by associating elements of a universal set with a parameter set. This association creates a collection of sets that effectively captures the variability and imprecision inherent in real-world problems. First proposed by Molodtsov in 1999 [1] as a generalization and alternative to fuzzy set theory and fuzzy logic, soft set theory has since been extensively refined and expanded. Its capacity to model and manage complex decision-making scenarios where information is partially known or inherently vague has established it as a powerful tool across a wide range of disciplines, including artificial intelligence, economics, engineering, and data science.
Molodtsov developed soft set theory to fill the gaps left by classical and fuzzy set theories, which often struggled to handle complex uncertainty and ambiguity in practical situations. Soft set theory offered a pioneering approach to problems involving imprecise or incomplete information by introducing a more flexible and generalized framework. Following its introduction, the theory garnered widespread attention, leading to an acceleration of mathematical research and the exploration of its applications across numerous disciplines. In computer science, soft sets proved helpful for handling data uncertainty, while in decision theory, they provided enhanced models for complex, multi-criteria decision-making processes. Similarly, fields like data mining and artificial intelligence benefited from their capacity to process ambiguous information more effectively than traditional methods. Economists and biologists also found value in the theory, using it to model uncertainty in economic systems and biological data. Consequently, soft set theory has evolved into a versatile tool for addressing the intricate challenges of uncertain environments.
In the 2000s, soft set theory saw significant growth and refinement as researchers expanded its foundational concepts and explored new applications. Early on, Maji et al. [2] showcased practical applications for soft set theory, and the following year [3], they introduced essential concepts such as equality and subset relations within this framework. Chen et al. [4] worked on simplifying soft sets by reducing the number of parameters needed, aiming to streamline the computational aspects of soft sets. Ali et al. [5] added new operations between soft sets, further expanding the theory’s scope. Kong et al. [6] introduced the concept of normal parameter reduction, along with algorithms that made this approach more applicable in decision-making processes using soft sets. In 2010, Cagman and Enginoglu [7] reexamined key operations between soft sets, working on the product of soft sets and introducing the uni-int decision-making method, which enhanced decision processes under uncertain conditions. The theory continued to evolve in the following years: Peng and Yang, in [8], developed algorithms based on interval-valued fuzzy soft sets for multi-criteria decision-making in stochastic environments. By 2019, their work extended into the use of inverse fuzzy soft sets in decision-making [9], further solidifying soft set theory as a valuable tool for tackling complex, real-world problems.
Soft sets provide an effective mathematical model for decision-making processes focused on selecting the best option due to their ability to express objects that provide parameters within a soft set. This structure has been successfully utilized in numerous studies aimed at eliminating uncertainties. Recent works [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25] have shown the importance of soft sets as a tool in managing uncertainty and decision-making processes.
Since 2020, studies on soft set theory have made significant progress, particularly in uncertainty management and multi-criteria decision-making (MCDM) problems. During this period, soft set theory has been expanded, and hybrid models have been developed. Structures like fuzzy soft sets and interval-valued fuzzy soft sets have been effective in optimizing decision-making processes.
Additionally, research on filling missing data and parameter reduction has allowed soft sets to be used more efficiently with large datasets. These studies have developed various algorithms for predicting missing data, enabling decision-making processes to proceed despite this missing information.
These developments demonstrate that soft set theory is a flexible and powerful tool in decision sciences and uncertainty management.
While foundational methods, such as the algorithm proposed by Maji and Roy [3], provide a viable approach to decision-making with soft sets, they can suffer from a loss of sensitivity due to the summation process, potentially masking nuanced differences between alternatives. The inherent limitations of binary membership values in classical soft set decision-making have been noted in the literature. Some approaches, for instance, address this issue by redefining the membership concept to be relational [14]. Our work also addresses this fundamental challenge but introduces a distinct mathematical formulation through the Θ function. This function not only captures relational aspects but is embedded within a framework that allows for its quantitative validation. Unlike conceptual models, we empirically demonstrate the effectiveness of our approach in enhancing decision granularity using the Gini Index and provide a comprehensive computational and sensitivity analysis, offering a robust and testable solution to the problem.
We develop and present the Relational Membership Value Calculation (RMVC), a novel algorithm that offers a more granular and sensitive scoring mechanism. The primary contribution of this work lies not only in the enhanced theoretical framework but also in its practical implementation as a fully automated, open-source computational tool, thereby increasing the transparency, reproducibility, and applicability of soft set theory in real-world decision-making scenarios. The enhanced granularity and robustness of the RMVC algorithm make it a promising component for integration into larger intelligent systems. Specifically, its ability to provide nuanced rankings could significantly improve the reliability of AI-driven applications such as multi-criteria expert systems, medical diagnostic tools, and personalized recommender systems, where ambiguity in decision-making can lead to suboptimal outcomes.
The remainder of this paper is organized as follows:
Section 2 reviews the fundamental concepts of soft set theory.
Section 3 introduces the RMVC algorithm in detail, presents its core relational membership function, Θ , and provides a rigorous analysis of its theoretical properties, including computational complexity and robustness to data sparsity.
Section 4 is dedicated to the validation of our algorithmic framework, where we first conduct a sensitivity analysis to test the model’s stability under perturbation.
Following this, Section 5 grounds our theoretical claims by applying the RMVC framework to an illustrative decision-making scenario. Within the context of this real-world problem, we use the Gini Index to provide quantitative evidence that our algorithm achieves higher granularity compared to established methods.
Finally, Section 6 concludes this paper with a summary of our findings and suggests directions for future research.

2. Preliminaries

This section presents the basic definitions and results of fuzzy sets, soft sets, and inverse soft sets required in the following sections. Detailed explanations of soft sets and inverse soft sets can be found in [5,7,26]. To improve clarity and ensure consistency in mathematical notation, we have made some adjustments to the definitions and representations found in the literature.
Definition 1. 
Let U be a universal set. A fuzzy set F on U is characterized by a membership function $\mu : U \to [0, 1]$, where for each $x \in U$, the value $\mu(x)$ indicates the degree to which x belongs to F. This fuzzy set is denoted as follows:
$$F = \{ (x, \mu(x)) : x \in U \}.$$
Here, $\mu$ is called the membership function of U, and the value $\mu(x)$ is called the grade of membership of $x \in U$.
Definition 2. 
Assume that U is a set of elements and E is a set of parameters. An ordered pair $(A, \Phi)$ is called a soft set over U, where $A \subseteq E$ and $\Phi$ is a mapping given by
$$\Phi : A \to P(U),$$
where $P(U)$ is the power set of U.
In this study, we specifically focus on the case where A = E , and the decision process involves only a single decision-maker.
A soft set $(E, \Phi)$ over U can be represented as the set of ordered pairs
$$(E, \Phi) = \{ (e, \Phi(e)) : e \in E,\; \Phi(e) \in P(U) \}.$$
Definition 3. 
Assume that U is a set of elements and E is a set of parameters. An ordered pair $(U, \psi)$ is called an inverse soft set (ISS) over E, where $\psi$ is a mapping given by
$$\psi : U \to P(E).$$
An inverse soft set $(U, \psi)$ over E can be represented by the set of ordered pairs
$$(U, \psi) = \{ (u, \psi(u)) : u \in U,\; \psi(u) \in P(E) \}.$$
As established by Cetkin [26], each soft set can be uniquely represented as an inverse soft set, and furthermore, each fuzzy set generates an inverse soft set.

3. A Different Approach to Decision-Making Under Uncertainty via Python Code

In this section, we redefine the relational and inverse relational membership functions to enhance sensitivity and accuracy in decision-making. Furthermore, to obtain more exact and reliable results in soft set theory, we developed a program called the “Relational Membership Value Calculator” (RMVC) using Python 3.13.3.
Definition 4. 
Let $U = \{1, 2, 3, \ldots, n\}$ be a universal set and let $E = \{e_1, e_2, e_3, \ldots, e_m\}$ be a set of parameters. Let $(E, \Phi)$ be a soft set over U. For each $1 \le i \le m$ and for any $k \in U \setminus \Phi(e_i)$, the relational membership value of k with respect to $\Phi(e_i)$ is defined via the mapping
$$\Theta_{e_i} : U \setminus \Phi(e_i) \to [0, 1].$$
This mapping, called the relational membership function, assigns a membership value to each element k not belonging to $\Phi(e_i)$, given by
$$\Theta_{e_i}(k) = \frac{1}{|\Phi(e_i)| \cdot (m-1)} \sum_{e_t \in E \setminus \{e_i\}} \; \sum_{j \in \Phi(e_i)} \delta_{e_t}(k, j),$$
where $1 \le j, k \le n$, $1 \le i, t \le m$, and $n, m \ge 2$.
Here, for $e_t \in E \setminus \{e_i\}$, the Kronecker delta function $\delta_{e_t} : [U \setminus \Phi(e_i)] \times \Phi(e_i) \to \{0, 1\}$ is defined as
$$\delta_{e_t}(k, j) = \begin{cases} 1, & k \in \Phi(e_t) \text{ and } j \in \Phi(e_t), \\ 0, & \text{otherwise}. \end{cases}$$
Thus, the relational membership function measures how strongly an element k (not originally in Φ ( e i ) ) is connected to the elements of Φ ( e i ) through other parameters.
Theorem 1. 
The proposed relational membership function, Θ e i ( k ) , is consistent and monotonic.
Proof. 
The proof is structured in two parts, demonstrating the properties of consistency and monotonicity based on the function’s definition.
1. Consistency: This property requires the function to produce values within a predictable, well-defined range. We will prove that the function $\Theta_{e_i}(k)$ is always bounded within the interval $[0, 1]$.
The function is defined as
$$\Theta_{e_i}(k) = \frac{1}{|\Phi(e_i)| \cdot (m-1)} \sum_{e_t \in E \setminus \{e_i\}} \; \sum_{j \in \Phi(e_i)} \delta_{e_t}(k, j).$$
Lower bound: The Kronecker delta function, $\delta_{e_t}(k, j)$, can only take values of 0 or 1. Therefore, the double summation in the numerator must be greater than or equal to zero. The denominator, which consists of the cardinality of a set ($|\Phi(e_i)|$) and the number of other parameters ($m-1$), is always a positive integer (since $m \ge 2$). Thus, the entire expression is non-negative:
$$\Theta_{e_i}(k) \ge 0.$$
Upper bound: To find the maximum possible value, consider the numerator. The outer sum runs over the $m-1$ parameters in $E \setminus \{e_i\}$, and the inner sum runs over the $|\Phi(e_i)|$ elements. In the most extreme case, $\delta_{e_t}(k, j)$ would be 1 for every single term in the sum. The maximum value of the double summation is therefore the total number of terms, which is $|\Phi(e_i)| \cdot (m-1)$. Substituting this maximum value into the function gives
$$\Theta_{e_i}(k)_{\max} = \frac{|\Phi(e_i)| \cdot (m-1)}{|\Phi(e_i)| \cdot (m-1)} = 1.$$
Since the numerator can never exceed this value, we have
$$\Theta_{e_i}(k) \le 1.$$
Combining these bounds confirms that $0 \le \Theta_{e_i}(k) \le 1$, proving that the function is consistent.
2. Monotonicity: In the context of this function, monotonicity relates specifically to the co-occurrence relationship. The property implies that as the relational connection of an element k to the set Φ ( e i ) increases, its membership value Θ e i ( k ) must also increase. The relational connection is measured by the sum of δ values.
Let the total co-occurrence count be $C(k) = \sum_{e_t \in E \setminus \{e_i\}} \sum_{j \in \Phi(e_i)} \delta_{e_t}(k, j)$. The function can be written as
$$\Theta_{e_i}(k) = \frac{C(k)}{|\Phi(e_i)| \cdot (m-1)}.$$
The denominator is a positive constant for a given e i . Therefore, Θ e i ( k ) is directly proportional to the co-occurrence count C ( k ) . If any event occurs that increases the relational connection (e.g., for a previously non-matching pair ( k , j ) under a parameter e t , a change causes them to co-occur, so δ e t ( k , j ) changes from 0 to 1), the value of C ( k ) increases. Consequently, the value of Θ e i ( k ) also increases.
This direct proportionality demonstrates that the function is monotonic with respect to the strength of the relational connections, which is the core concept of the proposed function. □
Example 1. 
Let U = { 1 , 2 , 3 , 4 , 5 } be a set of elements and E = { e 1 , e 2 , e 3 , e 4 } be a set of parameters. If Φ ( e 1 ) = { 1 , 2 , 3 , 5 } , Φ ( e 2 ) = { 2 , 4 , 5 } , Φ ( e 3 ) = { 1 , 3 , 4 } , Φ ( e 4 ) = { 1 , 2 , 5 } , then the corresponding soft set ( E , Φ ) is given by
( E , Φ ) = { ( e 1 , { 1 , 2 , 3 , 5 } ) , ( e 2 , { 2 , 4 , 5 } ) , ( e 3 , { 1 , 3 , 4 } ) , ( e 4 , { 1 , 2 , 5 } ) } .
We now calculate all the relational membership values for this soft set.
First we consider the parameter e 1 (for i = 1 ). The only element in U but not in Φ ( e 1 ) is k = 4 . So the value Θ e 1 ( 4 ) is calculated as follows:
$$\Theta_{e_1}(4) = \frac{1}{4 \cdot 3} \sum_{j \in \Phi(e_1)} \left[ \delta_{e_2}(4, j) + \delta_{e_3}(4, j) + \delta_{e_4}(4, j) \right] = \frac{[0+1+0+1] + [1+0+1+0] + [0+0+0+0]}{12} = \frac{4}{12} = \frac{1}{3}.$$
The remaining relational values were calculated in a similar manner. The complete set of results is as follows:
$$\Theta_{e_2}(1) = \tfrac{5}{9}, \quad \Theta_{e_2}(3) = \tfrac{1}{3}, \quad \Theta_{e_3}(2) = \tfrac{4}{9}, \quad \Theta_{e_3}(5) = \tfrac{4}{9}, \quad \Theta_{e_4}(3) = \tfrac{4}{9}, \quad \Theta_{e_4}(4) = \tfrac{1}{3}.$$
These results can be compiled into a matrix for a more comprehensive representation as shown in Table 1.
The discriminative power of the RMVC algorithm is inherently linked to the relational richness of the input data. The core of the method, the relational membership value Θ , quantifies the strength of an alternative’s relationship with other alternatives for each parameter set. This calculation relies on the co-occurrence of alternatives as measured by the δ function.
In highly sparse data environments—where parameter sets ( Φ ( e i ) ) contain few members and have minimal overlap—the opportunities for such co-occurrences diminish significantly. Consequently, the numerator of the Θ formula, representing the sum of relational interactions, tends toward smaller values. For any non-member of a given parameter set, its calculated relational score will therefore converge towards zero.
This leads to a critical boundary condition. As data becomes sparser, the resulting Θ matrix becomes increasingly polarized and dominated by binary-like values: “1” for members and values approaching “0” for non-members. In such a state, the nuanced, relational membership values that give the RMVC method its depth are reduced, and the algorithm’s behavior begins to approximate a simple inclusion–exclusion check rather than a rich relational analysis. While the algorithm remains structurally sound, its ability to differentiate finely between non-members is consequently reduced. This highlights an important dependency: the full potential of the RMVC method is realized when the input soft set is sufficiently dense to generate a strong relational signal.
To facilitate the practical application of our algorithmic framework, we provide a reference implementation in Python. The following code snippets illustrate the core of the RMVC algorithm, in particular how user input for the universal set U is processed. The complete open-source program is publicly available for researchers and practitioners interested in further details or direct application. The repository can be accessed at https://github.com/DrDayioglu/RMVC.git (accessed on 20 August 2025).
The initial step involves defining the universal set U . The following Python function is used to capture this input from the user:
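The listing in the published version is rendered as an image and is not reproduced in this text. A minimal sketch of what such an input routine may look like is given below; the function name and the space-separated input format are illustrative assumptions, not the published code:

```python
def get_universal_set():
    """Read the universal set U from the user as space-separated labels."""
    raw = input("Enter the elements of the universal set U (space-separated): ")
    # Preserve the input order while silently discarding duplicates.
    return list(dict.fromkeys(raw.split()))
```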
Next, the parameter set E and the corresponding subsets of U must be defined. Since the soft set mapping, Φ, assigns a subset of U (an element of the power set, P(U)) to each parameter in E, the program must first be capable of generating the power set of U to construct these relationships.
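Again, the published snippet appears as an image; a standard itertools-based sketch of power set generation, consistent with the description above, could read:

```python
from itertools import chain, combinations

def power_set(universe):
    """Return every subset of U, i.e., the power set P(U), as a list of sets."""
    return [set(c) for c in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]
```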
Next, we define the summation part of the delta function, which is crucial for the newly presented method.
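The published listing is again an image. Following Definition 4, the summation counts the pairs for which both k and j belong to the same other parameter set; a sketch under that reading (the names delta_sum, phi_i, and other_sets are hypothetical) is:

```python
def delta_sum(k, phi_i, other_sets):
    """Sum delta_{e_t}(k, j) over every other parameter set Phi(e_t) and every j in Phi(e_i).

    delta_{e_t}(k, j) equals 1 exactly when both k and j belong to Phi(e_t).
    """
    total = 0
    for phi_t in other_sets:
        if k in phi_t:  # delta can only be 1 when k itself lies in Phi(e_t)
            total += sum(1 for j in phi_i if j in phi_t)
    return total
```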
To ensure data consistency and improve readability, the input data is first converted into a list of sets using the following code snippet:
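A one-line sketch of this conversion (raw_parameter_sets is a hypothetical name for the collected input):

```python
# Normalize the user-supplied parameter subsets into Python sets so that
# membership tests during the delta summation are unambiguous.
parameter_sets = [set(subset) for subset in raw_parameter_sets]
```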
Next, each parameter set is mapped to a unique identifier e 1 , e 2 , , e m to facilitate programming access. This is achieved with the following code.
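This labeling step may look as follows (assuming the parameter_sets list from the previous sketch):

```python
# Attach the labels e1, e2, ..., em to the parameter sets for easy lookup.
labeled_sets = {f"e{i + 1}": phi for i, phi in enumerate(parameter_sets)}
```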
The core calculation is then executed. The following loop iterates through each parameter set to compute the final relational membership values.
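A sketch of this main loop, combining the pieces above (universe, parameter_sets, and delta_sum are the names assumed in the earlier sketches), is:

```python
m = len(parameter_sets)
theta = {}  # (parameter label, element) -> relational membership value
for i, phi_i in enumerate(parameter_sets):
    others = parameter_sets[:i] + parameter_sets[i + 1:]   # E \ {e_i}
    for k in universe:
        if k in phi_i:
            theta[(f"e{i + 1}", k)] = 1.0                  # members score 1
        else:
            c = delta_sum(k, phi_i, others)                # co-occurrence count C(k)
            theta[(f"e{i + 1}", k)] = c / (len(phi_i) * (m - 1))
```

Running this loop on the soft set of Example 1 (with U stored as the integers 1 through 5) reproduces the values in Table 1; for instance, theta[("e1", 4)] evaluates to 4/12 ≈ 0.333.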
The complete output generated by the “Relational Membership Value Calculator” (RMVC) for this example is given in Figure 1.

3.1. Computational Complexity Analysis

To evaluate the scalability of the RMVC algorithm for large-scale soft sets, we analyze its theoretical time complexity. Let $n = |U|$ be the number of elements (candidates), $m = |E|$ be the number of parameters (criteria), and $k_{avg}$ be the average cardinality of a parameter set $\Phi(e_i)$.
The algorithm’s complexity is determined by three primary computational steps:
  • Calculation of the membership matrix ($M_{m \times n}$): This is the most computationally intensive step. To calculate a single $\Theta_{e_i}(j)$ value, the algorithm iterates through all other $m-1$ parameters and, for each, iterates through the elements of $\Phi(e_i)$. This results in a complexity of approximately $O(m \cdot k_{avg})$ for a single entry. Since there are $n \times m$ entries to compute, the total complexity for this step is $O(n \cdot m \cdot (m \cdot k_{avg})) = O(n \cdot m^2 \cdot k_{avg})$.
  • Calculation of final scores: This step involves summing the n columns of the $m \times n$ matrix, which has a complexity of $O(n \cdot m)$.
  • Selection of the optimal candidate: Finding the maximum value among n scores takes $O(n)$ time.
The overall time complexity is determined by the most demanding step, which is the calculation of the membership matrix. Therefore, the computational complexity of the RMVC algorithm is approximately $O(n \cdot m^2 \cdot k_{avg})$.
This analysis indicates that the algorithm’s performance scales linearly with the number of candidates (n) but quadratically with the number of parameters (m). This suggests that the RMVC is well-suited for problems with a very large number of candidates but may become computationally expensive for applications involving an extremely high number of evaluation criteria.

3.2. Sensitivity Analysis

A crucial attribute of any clustering algorithm is its robustness against small perturbations in the input data. For the RMVC, the input consists of the parameter sets Φ ( e i ) . Sensitivity analysis for our model, therefore, involves examining how the final clustering output reacts to minor changes within these sets, such as the addition or removal of a single element.

3.2.1. Relational Similarity Matrix

The relational membership function, Θ e i ( u ) , produces a relational membership value that indicates the association of an element u with a specific parameter set Φ ( e i ) . To obtain a complete relational profile of an element, we collect these values across all parameter sets. We define this profile as the Relational Membership Vector.
For any element $u \in U$, its membership vector $\Theta_u$ is an $|E|$-dimensional vector where each component is the element's relational membership value with respect to one parameter set:
$$\Theta_u = \left[ \Theta_{e_1}(u), \Theta_{e_2}(u), \ldots, \Theta_{e_{|E|}}(u) \right].$$
Clustering requires us to measure the pairwise similarity between any two elements, u and v. A natural way to achieve this is by comparing their respective membership vectors, Θ u and Θ v . We define a similarity matrix S, where the entry S u v is computed by measuring the average L1-distance between these vectors. A smaller distance implies higher similarity. The formula is defined as
$$S_{uv} = 1 - \frac{1}{|E|} \sum_{i=1}^{|E|} \left| \Theta_{e_i}(u) - \Theta_{e_i}(v) \right|.$$
Here, S u v = 1 indicates that the elements have identical membership vectors (perfect similarity), while a value approaching 0 indicates high dissimilarity. This matrix S provides the foundational input for the final graph-partitioning step and for the sensitivity analysis that follows.
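A compact sketch of this construction (membership vectors represented as plain Python lists) is:

```python
def similarity(theta_u, theta_v):
    """S_uv = 1 minus the average L1-distance between two membership vectors."""
    diffs = [abs(a - b) for a, b in zip(theta_u, theta_v)]
    return 1.0 - sum(diffs) / len(diffs)

def similarity_matrix(vectors):
    """Pairwise similarity matrix S over a list of relational membership vectors."""
    n = len(vectors)
    return [[similarity(vectors[u], vectors[v]) for v in range(n)] for u in range(n)]
```

For instance, similarity([1, 0.556, 1, 1], [1, 1, 0.444, 1]) returns 0.750, matching the entry S_12 computed in Section 3.2.2.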
Let us consider a small perturbation in a single parameter set $\Phi(e_i)$, resulting in a new set $\Phi'(e_i)$. This change propagates through our model in a clear sequence:
1.
Change in membership values: A perturbation in a single parameter set, say from $\Phi(e_i)$ to $\Phi'(e_i)$, propagates its effect to all relational membership values across the system, not just one. This happens in two ways:
  • Direct effect (on $\Theta_{e_i}$): The membership function associated directly with the changed parameter, $\Theta_{e_i}(k)$, is the most impacted. Both its denominator, which contains the term $|\Phi(e_i)|$, and its numerator, which sums over $j \in \Phi(e_i)$, are altered. This requires a full recalculation of $\Theta_{e_i}(k)$ for all elements k.
  • Indirect effect (on $\Theta_{e_t}$ for all $t \ne i$): The relational membership value for any other parameter, $\Theta_{e_t}(k)$, is also affected. Its calculation involves a summation over all parameters $e_u \in E \setminus \{e_t\}$. Since $t \ne i$, this collection of parameters necessarily includes $e_i$. Therefore, the $\delta_{e_i}(k, j)$ term within the sum will now use the perturbed set $\Phi'(e_i)$. This change in a single delta term will cause a small but non-zero change in the value of $\Theta_{e_t}(k)$, rippling the effect of the initial perturbation throughout the entire system.
2.
Change in relational similarity matrix: Since the relational similarity matrix S is constructed from all Θ values, the changes described above lead to a modified similarity matrix $S'$.
3.
Potential change in final partition: The final clustering partition $P^*$ is derived from S. A change from S to $S'$ may or may not be significant enough to alter the final partition.
The relational nature of the Θ function provides a degree of inherent robustness. The membership value is an average of co-occurrence information calculated over the other $m-1$ parameters and all elements within a set. Therefore, a single change (one δ value flipping from 0 to 1, or the addition/removal of one element) has a diluted effect on the final Θ value rather than causing a catastrophic shift.
To quantitatively measure this sensitivity, one can perform the following analysis:
  • Let S be the original similarity matrix computed with the initial parameter sets $\{\Phi(e_i)\}_{i=1}^{m}$.
  • Introduce a single perturbation, for instance, by adding one element to a single set, to obtain the perturbed collection $\{\Phi'(e_i)\}$.
  • Compute the new similarity matrix $S'$.
  • The magnitude of the change can be measured by the Frobenius norm of the difference between the matrices: $\text{Sensitivity} = \|S - S'\|_F$. A lower sensitivity score indicates higher robustness.
  • Furthermore, one can compare the final clustering partitions $P^*$ (from S) and $P'^*$ (from $S'$) using an external validation index like the Adjusted Rand Index (ARI). An ARI score of 1 would signify that the small perturbation had no effect on the final clustering result, demonstrating perfect stability for that specific change.
This analytical framework allows for a systematic evaluation of the algorithm’s stability, confirming that the RMVC is not overly sensitive to minor variations in its input parameterization, a desirable trait for real-world applications.
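Assuming NumPy and scikit-learn are available, this evaluation protocol can be sketched as follows (cluster assignments are encoded as one integer label per element):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def frobenius_sensitivity(S, S_perturbed):
    """||S - S'||_F: overall magnitude of the change in the similarity structure."""
    return float(np.linalg.norm(np.asarray(S) - np.asarray(S_perturbed)))

# Partitions expressed as one cluster label per element; e.g., for U = {1,...,5},
# P* = {{1,3}, {2,5}, {4}} becomes [0, 1, 0, 2, 1].
ari = adjusted_rand_score([0, 1, 0, 2, 1], [0, 1, 0, 2, 1])  # 1.0: identical partitions
```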

3.2.2. A Practical Case Study of Sensitivity

To provide a concrete illustration of the theoretical sensitivity framework, we apply it step-by-step to the setup from Example 1. This practical analysis demonstrates how to compute the similarity matrix, introduce a change to the system, and quantitatively measure the impact on the final clustering outcome.
Step 1: Establishing the Baseline with the Original Data
Our first task is to establish a baseline result using the original, unperturbed data from Example 1. This involves computing the relational similarity matrix, S, using the formula defined in the previous section.
To ensure complete transparency and reproducibility, this section provides a detailed, step-by-step calculation of each unique entry in the similarity matrix S for the setup in Example 1. This detailed demonstration is intended to showcase the mechanics of the methodology once. For the sake of brevity, in subsequent sections, similar intermediate calculation steps will be omitted and only the final results will be presented.
Step 1: Constructing Relational Membership Vectors.
First, we construct the relational membership vectors, Θ u , for every element u U . All subsequent calculations are based on these vectors:
  • $\Theta_1 = [1, 0.556, 1, 1]$
  • $\Theta_2 = [1, 1, 0.444, 1]$
  • $\Theta_3 = [1, 0.556, 1, 0.667]$
  • $\Theta_4 = [0.333, 1, 1, 0.333]$
  • $\Theta_5 = [1, 1, 0.444, 1]$
Step 2: Pairwise similarity calculations.
We now calculate the similarity S u v for each pair of elements using the following formula:
$$S_{uv} = 1 - \frac{1}{4} \sum_{i=1}^{4} \left| \Theta_{e_i}(u) - \Theta_{e_i}(v) \right|.$$
To illustrate the process, we give a detailed calculation for a few representative entries:
  • Calculating $S_{12}$:
    • The sum of absolute differences is $|1-1| + |0.556-1| + |1-0.444| + |1-1| = 0 + 0.444 + 0.556 + 0 = 1$.
    • $S_{12} = 1 - (1/4) = 1 - 0.250 = 0.750$.
  • Calculating $S_{24}$:
    • The sum of absolute differences is $|1-0.333| + |1-1| + |0.444-1| + |1-0.333| = 0.667 + 0 + 0.556 + 0.667 = 1.890$.
    • $S_{24} = 1 - (1.890/4) = 1 - 0.472 = 0.528$.
The remaining entries of the similarity matrix S were calculated using the same procedure. This process is repeated for all pairs. The completed results form the final similarity matrix, which is presented in the next step.
Step 3: Assembling the final similarity matrix.
Compiling all the calculated similarity values yields the final similarity matrix $S = [S_{uv}]$, $u, v = 1, 2, 3, 4, 5$:

S        1      2      3      4      5
1        1      0.750  0.917  0.555  0.750
2        0.750  1      0.667  0.528  1
3        0.917  0.667  1      0.639  0.667
4        0.555  0.528  0.639  1      0.528
5        0.750  1      0.667  0.528  1
Applying a graph-partitioning method to this matrix, we can derive the initial clustering partition, P * . For this analysis, we define a “strong tie” as any connection with a similarity score greater than 0.85 . An analysis of the strong ties in matrix S reveals the following:
  • The similarity S 13 is 0.917 (>0.85), indicating a strong tie. This forms the core cluster: { 1 , 3 } .
  • The similarity S 25 is 1 (>0.85), also indicating a strong tie. This forms a second distinct cluster: { 2 , 5 } .
  • No other similarity values exceed the 0.85 threshold. Element “4” does not form a strong tie with any other element and is therefore considered an outlier.
This analysis yields the following initial partition P * :
P * = { { 1 , 3 } , { 2 , 5 } , { 4 } } .
This partition consists of two distinct, cohesive clusters and a single outlier element.
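The paper leaves the exact graph-partitioning routine open; one simple reading consistent with the analysis above is to take connected components of the "strong tie" graph, sketched here with a small union-find structure:

```python
def threshold_partition(S, labels, threshold=0.85):
    """Cluster elements as connected components of strong ties (S[u][v] > threshold)."""
    n = len(labels)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for u in range(n):
        for v in range(u + 1, n):
            if S[u][v] > threshold:
                parent[find(u)] = find(v)  # merge the two strong-tie components

    clusters = {}
    for idx, label in enumerate(labels):
        clusters.setdefault(find(idx), set()).add(label)
    return list(clusters.values())
```

Applied to the matrix S above with labels [1, 2, 3, 4, 5], this returns the partition {1, 3}, {2, 5}, {4}.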
Step 2: Defining and Applying the Perturbation
Next, a minimal, localized perturbation is introduced to the system to test its stability. We modify the parameter set $\Phi(e_3)$ by removing the element "4". The new, perturbed set is denoted as $\Phi'(e_3) = \{1, 3\}$. All other parameter sets remain unchanged.
Step 3: Calculating the Post-Perturbation Outcome
Following this change, the relational membership vectors ($\Theta_u$) and, consequently, the entire similarity matrix must be recalculated. After performing all calculations with the perturbed data, we compute the new similarity matrix, $S'$.
S′       1      2      3      4      5
1        1      0.750  1      0.472  0.750
2        0.750  1      0.750  0.528  1
3        1      0.750  1      0.472  0.750
4        0.472  0.528  0.472  1      0.528
5        0.750  1      0.750  0.528  1
Step 4: Quantitative Analysis and Interpretation
The final step is to quantitatively compare the pre- and post-perturbation results to measure the algorithm’s stability.
Frobenius norm: To measure the overall change in the similarity structure, we calculate the Frobenius norm of the difference between the original matrix S and the perturbed matrix $S'$:
$$\|S - S'\|_F = \sqrt{\sum_{i,j} \left( S_{ij} - S'_{ij} \right)^2} \approx 0.333.$$
The low value indicates that the perturbation did not cause a drastic shift; the overall relational structure between elements remained highly stable.
Adjusted Rand Index (ARI): To measure the effect on the final partition, we compare the original partition, $P^*$, with the new partition, $P'^*$, derived from $S'$. Applying the same partitioning logic (a similarity threshold > 0.85) to $S'$ yields an identical partition:
$$P'^* = \{ \{1, 3\}, \{2, 5\}, \{4\} \}.$$
The new partition is identical to the original. The ARI score, which measures the agreement between two data clusterings, confirms this perfect stability:
$$\mathrm{ARI}(P^*, P'^*) = 1.0.$$
An ARI score of 1.0 signifies that the final output was completely unaffected by the perturbation. This practical demonstration validates the robustness of the RMVC algorithm, showcasing its resilience against minor variations in the input data.

4. A New Decision-Making Approach

In this section, building on the new technical formulation for soft sets presented in Section 3, we introduce a new algorithmic method for decision-making in uncertain environments; in essence, this is the outline of our Python code for the RMVC. We develop Algorithm 1, which applies our new formulation to this context.
Algorithm 1: Determine the best choice in a given SS.
Input: Let $U = \{1, 2, 3, \ldots, n\}$ be the set of elements and let $E = \{e_1, e_2, e_3, \ldots, e_m\}$ be the set of parameters. A soft set $(E, \Phi)$ with $m, n \ge 2$ is given.
Step 1: Computing all relational membership values using the Θ function as defined in the new formula.

Step 2: Constructing the matrix $M_{m \times n}$, which contains the membership values derived from $\Omega_{\Theta_{e_i}}(r)$, where $r = 1, 2, \ldots, n$ and $i = 1, 2, \ldots, m$:
$$M_{m \times n} = \begin{pmatrix} \Omega_{\Theta_{e_1}}(1) & \Omega_{\Theta_{e_1}}(2) & \Omega_{\Theta_{e_1}}(3) & \cdots & \Omega_{\Theta_{e_1}}(n) \\ \Omega_{\Theta_{e_2}}(1) & \Omega_{\Theta_{e_2}}(2) & \Omega_{\Theta_{e_2}}(3) & \cdots & \Omega_{\Theta_{e_2}}(n) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \Omega_{\Theta_{e_m}}(1) & \Omega_{\Theta_{e_m}}(2) & \Omega_{\Theta_{e_m}}(3) & \cdots & \Omega_{\Theta_{e_m}}(n) \end{pmatrix},$$
where $\Omega_{\Theta_{e_i}}(r)$ is defined as
$$\Omega_{\Theta_{e_i}}(r) = \begin{cases} 1, & r \in \Phi(e_i), \\ \Theta_{e_i}(r), & \text{otherwise}. \end{cases}$$
Here, $\Omega : E \times U \to [0, 1]$ represents the membership function.

Step 3: Calculating the score $S(r)$ of each element $r \in U$ using the following formula:
$$S(r) = \sum_{j=1}^{m} \Omega_{\Theta_{e_j}}(r).$$
Step 4: Selecting the element r with the highest score, namely, the one for which S ( r ) is maximized.
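For concreteness, Steps 1 through 3 can be sketched in a few lines of Python (a condensed reimplementation under the definitions above, not the published RMVC listing):

```python
def rmvc_scores(universe, parameter_sets):
    """Build the Omega membership values of Algorithm 1 and sum them per element."""
    m = len(parameter_sets)
    scores = {r: 0.0 for r in universe}
    for i, phi_i in enumerate(parameter_sets):
        others = parameter_sets[:i] + parameter_sets[i + 1:]
        for r in universe:
            if r in phi_i:
                omega = 1.0                              # Omega = 1 for members
            else:                                        # Omega = Theta otherwise
                c = sum(1 for phi_t in others if r in phi_t
                        for j in phi_i if j in phi_t)
                omega = c / (len(phi_i) * (m - 1))
            scores[r] += omega
    return scores
```

Step 4 then reduces to max(scores, key=scores.get), subject to the tie-breaking rule discussed below.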
A fundamental aspect of the proposed framework is the choice of using a raw sum for the final score aggregation ( S ( r ) ). This decision was made deliberately, prioritizing interpretability and parameter-free objectivity in this foundational presentation of the RMVC model while acknowledging that more complex aggregation methods exist.

4.1. Score Resolution and Tie-Breaking Mechanism

A critical attribute of any decision-making algorithm is its ability to produce a clear and decisive ranking and to minimize the occurrence of ties. The proposed RMVC scoring mechanism holds a significant theoretical advantage in this regard compared to traditional soft set approaches.
Many conventional methods in soft set theory, such as the one proposed by Maji and Roy, rely on scoring systems based on integer counts. In such systems, a candidate's score is typically the total number of criteria they satisfy. This restricts the set of possible scores to a small, discrete set of integers, $\{0, 1, \ldots, m\}$. When the range of possible outcomes is limited, the probability of different candidates achieving the exact same score is inherently higher.
In contrast, the RMVC methodology is based on the relational membership function, Θ , which produces rational numbers as values. The final score for a candidate, S ( r ) , is the sum of these granular values. This process maps the candidates to a much larger and denser set of possible scores. Because the image set of the scoring function is significantly larger and denser, the probability of a coincidental tie between two distinct candidates is substantially reduced. While an absolute guarantee against ties is not possible, the structural nature of the RMVC renders them a rare occurrence.
To ensure the algorithm is robust and deterministic even in the rare case of a tie, the final selection is formally defined as a two-tiered process:
  • Primary criterion: The primary selection criterion is the relational score S(r). The optimal candidate, $u^*$, is the one that maximizes the score:
$$u^* = \arg\max_{u_i \in U} S(u_i).$$
  • Secondary (tie-breaking) criterion: If two or more candidates share an identical maximal score, a secondary tie-breaking criterion is applied. For this, we use a simpler choice value, k i , representing the total count of parameter sets to which candidate u i belongs. The candidate with the higher k i value among the tied candidates is chosen. If the tie persists even after this secondary criterion, the candidates are considered equally optimal and can be presented as a set of co-optimal choices.
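A direct sketch of this two-tiered rule (operating on the scores dictionary produced by the earlier sketch):

```python
def select_optimal(scores, parameter_sets):
    """Pick the max-score candidate; break ties by the count-based choice value k_i."""
    best = max(scores.values())
    tied = [u for u, s in scores.items() if s == best]  # a float tolerance could be used here
    if len(tied) == 1:
        return tied[0]
    # Secondary criterion: how many parameter sets the candidate belongs to.
    choice_value = {u: sum(u in phi for phi in parameter_sets) for u in tied}
    top = max(choice_value.values())
    co_optimal = [u for u in tied if choice_value[u] == top]
    return co_optimal[0] if len(co_optimal) == 1 else co_optimal
```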
Example 2. 
Let U = { 1 , 2 , 3 , 4 , 5 , 6 , 7 } be a set of candidates and E = { e 1 , e 2 , e 3 , e 4 , e 5 , e 6 } be a set of evaluation criteria for a specific job position. For each i = 1 , 2 , , 6 , the element e i in E represents a required attribute, such as “higher education”, “experience”, “foreign language proficiency”, etc.
The decision-making problem is to select the optimal candidate for the position.
The soft set ( E , Φ ) is defined by the following:
Φ ( e 1 ) = { 1 , 2 , 3 , 6 } , Φ ( e 2 ) = { 2 , 4 , 6 } , Φ ( e 3 ) = { 1 , 3 , 7 } , Φ ( e 4 ) = { 3 , 5 , 6 } , Φ ( e 5 ) = { 1 , 5 , 7 } , Φ ( e 6 ) = { 2 , 3 , 6 } .
This can be expressed as the set of ordered pairs:
( E , Φ ) = { ( e 1 , { 1 , 2 , 3 , 6 } ) , ( e 2 , { 2 , 4 , 6 } ) , ( e 3 , { 1 , 3 , 7 } ) , ( e 4 , { 3 , 5 , 6 } ) , ( e 5 , { 1 , 5 , 7 } ) , ( e 6 , { 2 , 3 , 6 } ) }
We proceed by calculating all relational membership values for ( E , Φ ) , which will then be compared with the output of the RMVC software implementation.
Since candidates { 4 , 5 , 7 } do not satisfy the first criterion e 1 , their corresponding relational membership values are calculated as
Θ e 1 ( 4 ) = 1 / 10 , Θ e 1 ( 5 ) = 3 / 20 , Θ e 1 ( 7 ) = 3 / 20 .
Similarly, since the candidates { 1 , 3 , 5 , 7 } do not satisfy the second criterion e 2 , their corresponding relational membership values are given by
Θ e 2 ( 1 ) = 2 / 15 , Θ e 2 ( 3 ) = 1 / 3 , Θ e 2 ( 5 ) = 1 / 15 , Θ e 2 ( 7 ) = 0 .
The complete set of remaining relational memberships values is as follows:
$$\Theta_{e_3}(2) = \tfrac{1}{5}, \quad \Theta_{e_3}(4) = 0, \quad \Theta_{e_3}(5) = \tfrac{1}{5}, \quad \Theta_{e_3}(6) = \tfrac{4}{15},$$
$$\Theta_{e_4}(1) = \tfrac{4}{15}, \quad \Theta_{e_4}(2) = \tfrac{1}{3}, \quad \Theta_{e_4}(4) = \tfrac{1}{15}, \quad \Theta_{e_4}(7) = \tfrac{2}{15},$$
$$\Theta_{e_5}(2) = \tfrac{1}{15}, \quad \Theta_{e_5}(3) = \tfrac{4}{15}, \quad \Theta_{e_5}(4) = 0, \quad \Theta_{e_5}(6) = \tfrac{2}{15},$$
$$\Theta_{e_6}(1) = \tfrac{4}{15}, \quad \Theta_{e_6}(4) = \tfrac{2}{15}, \quad \Theta_{e_6}(5) = \tfrac{2}{15}, \quad \Theta_{e_6}(7) = \tfrac{1}{15}.$$
This completes Step 1.
Following Step 2, we prepare the matrix as
$$M = \begin{pmatrix} 1 & 1 & 1 & 0.10 & 0.15 & 1 & 0.15 \\ 0.13 & 1 & 0.33 & 1 & 0.07 & 1 & 0 \\ 1 & 0.2 & 1 & 0 & 0.2 & 0.27 & 1 \\ 0.27 & 0.33 & 1 & 0.07 & 1 & 1 & 0.13 \\ 1 & 0.07 & 0.27 & 0 & 1 & 0.13 & 1 \\ 0.27 & 1 & 1 & 0.13 & 0.13 & 1 & 0.07 \end{pmatrix}$$
In Step 3, the final scores are calculated for each candidate:
S ( 1 ) = 3.67 , S ( 2 ) = 3.6 , S ( 3 ) = 4.6 , S ( 4 ) = 1.30 , S ( 5 ) = 2.55 , S ( 6 ) = 4.4 , S ( 7 ) = 2.35
In Step 4, the optimal choice is determined by selecting the candidate with the maximum score. In this case, S ( 3 ) is the maximum value, indicating that candidate 3 is the optimal choice.
These results were cross-validated using our software implementation. Feeding the same input data into the RMVC tool yielded identical scores, as shown in Figure 2.
Although the general sensitivity analysis framework discussed in Section 3 already validated the robustness of the RMVC algorithm, we will now perform a similar perturbation on the decision-making problem of Example 2 to reiterate and reinforce this principle. The objective is to validate whether the optimal choice, candidate 3, maintains its rank under a minor perturbation. To demonstrate a case of stability, we introduce a specific perturbation and analyze its consequences.
As a test case, we assume that candidate 5 now satisfies the criterion e_1 ("higher education"), modifying the parameter set to $\Phi'(e_1) = \{1, 2, 3, 5, 6\}$. This change requires a full recalculation of relational scores. Table 2 compares the original scores with the new scores generated by the RMVC software following this perturbation.
As the results demonstrate, the perturbation had a significant and appropriate impact on the score of the directly targeted candidate (candidate 5), increasing it from 2.55 to 3.87. Furthermore, due to the model’s relational nature, the scores of most other candidates were also slightly adjusted.
Despite these adjustments, the ranking of the top candidates remained stable. Candidate 3 retained the highest score, and thus the optimal decision was unchanged. This analysis confirms that while the model is appropriately responsive to minor changes, selecting the optimal candidate exhibits strong robustness against this type of data variation.

4.1.1. Comparison Analysis

To contextualize the performance of our method, we compare its results with those obtained from the traditional algorithm proposed by Maji and Roy [3], using the decision-making problem from Example 2. The algorithm from [3] produces the following candidate ranking, which includes several ties:
3 = 6 > 1 = 2 > 5 = 7 > 4 .
In contrast, our RMVC algorithm yields a fully resolved ranking with no ties:
3 > 6 > 1 > 2 > 5 > 7 > 4 .
This immediate result demonstrates that the RMVC method provides a more granular and decisive ordering of candidates.
In this study, we define granularity as the ability of a scoring method to discriminate between alternatives by reducing ties and producing a more finely distributed score set. To further deepen the analysis of score granularity, we now compare the structural characteristics of the score distributions generated by the RMVC method and the traditional Maji–Roy approach. For this purpose, we treat the set of scores from each method as a discrete probability distribution by normalizing them (i.e., dividing each score by the total sum). We then compute the Gini Index for each resulting distribution to provide a standardized measure of concentration. Unlike the conventional interpretation where higher Gini values are associated with greater inequality, our framework interprets a lower Gini Index as an indicator of higher granularity, since it reflects a less concentrated and more evenly spread distribution of scores.
This analysis is conducted on the decision-making problem of Example 2, as it provides a set of scores for each candidate.

4.1.2. Analysis of the Maji–Roy (Choice Value) Distribution

The traditional method produces scores based on the count of criteria satisfied by each candidate:
  • Raw scores ( k i ): The set of scores for the seven candidates is { 3 , 3 , 4 , 1 , 2 , 4 , 2 } .
  • Normalization: The sum of these scores is 3 + 3 + 4 + 1 + 2 + 4 + 2 = 19 . We normalize each score by dividing by 19 to get a probability distribution:
    P Maji - Roy = { 0.158 , 0.158 , 0.211 , 0.053 , 0.105 , 0.211 , 0.105 }
  • Gini Index calculation: We first sort the normalized probabilities:
    { 0.053 , 0.105 , 0.105 , 0.158 , 0.158 , 0.211 , 0.211 } .
    Applying the Gini Index formula to this distribution yields
$$G_{\text{Maji-Roy}} \approx 0.210.$$

4.1.3. Analysis of the RMVC Distribution

Next, we perform the same analysis on the scores generated by our proposed RMVC method:
  • Raw scores ( S ( i ) ): As presented in Example 2, the scores are
    { 3.67 , 3.60 , 4.60 , 1.30 , 2.55 , 4.40 , 2.35 } .
  • Normalization: The sum of these scores is 22.47 . We normalize to get the probability distribution:
    P RMVC = { 0.163 , 0.160 , 0.205 , 0.058 , 0.113 , 0.196 , 0.105 }
  • Gini Index calculation: We first sort the normalized probabilities:
    { 0.058 , 0.105 , 0.113 , 0.160 , 0.163 , 0.196 , 0.205 } .
    Applying the Gini Index formula to this distribution yields
$$G_{\text{RMVC}} \approx 0.192.$$
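As a quick cross-check, both indices can be reproduced with a few lines of Python using the relative-mean-absolute-difference form of the Gini Index (the normalization by the total cancels in this form, so the raw scores can be passed directly):

```python
def gini(scores):
    """Gini index as the relative mean absolute difference of a score distribution."""
    x = list(scores)
    n, total = len(x), sum(x)
    abs_diff = sum(abs(a - b) for a in x for b in x)
    return abs_diff / (2 * n * total)

print(gini([3, 3, 4, 1, 2, 4, 2]))                        # ~0.2105 (reported as 0.210)
print(gini([3.67, 3.60, 4.60, 1.30, 2.55, 4.40, 2.35]))   # ~0.1923 (reported as 0.192)
```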

4.1.4. Conclusion of Comparative Analysis

The comparative results are summarized below in Table 3.
The results provide consistent quantitative evidence supporting our central claim. The Gini Index for the RMVC scores (0.192) is lower than that of the traditional count-based method (0.210). Even in this small-scale example, this represents a relative reduction in score concentration of approximately 9%. This suggests that the relational membership function maps candidates to a more distributed score space, structurally reducing the probability of ties and enabling a more discriminative basis for decision-making. While the effect size is modest in absolute terms, its presence even in a minimal test case indicates the method’s potential for enhanced granularity in broader applications.
Example 3. 
To demonstrate the flexibility of the RMVC method, we now analyze the inverse problem from Example 2. In this inverse soft set (ISS) formulation, the roles of the sets are reversed. E = { e 1 , e 2 , e 3 , e 4 , e 5 , e 6 } now represents the set of candidates, and U = { 1 , 2 , 3 , 4 , 5 , 6 , 7 } represents the set of criteria (e.g., “higher education”, “experience”, “foreign language”, etc.). The decision-making problem is now to determine which candidate is most strongly supported by the available pool of criteria.
If $\psi(1) = \{e_1, e_3, e_5\}$, $\psi(2) = \{e_1, e_2, e_6\}$, $\psi(3) = \{e_1, e_3, e_4, e_6\}$, $\psi(4) = \{e_2\}$, $\psi(5) = \{e_4, e_5\}$, $\psi(6) = \{e_1, e_2, e_4, e_6\}$, $\psi(7) = \{e_3, e_5\}$, then the ISS $(U, \psi)$ is written as
$$(U, \psi) = \{ (1, \{e_1, e_3, e_5\}), (2, \{e_1, e_2, e_6\}), (3, \{e_1, e_3, e_4, e_6\}), (4, \{e_2\}), (5, \{e_4, e_5\}), (6, \{e_1, e_2, e_4, e_6\}), (7, \{e_3, e_5\}) \}.$$
Utilizing the RMVC allows us to derive the complete set of relational membership values. When we organize these values into a matrix format as outlined in our algorithm, we subsequently obtain the matrix presented below.
$$\begin{pmatrix} 1 & 0.11 & 1 & 0.22 & 1 & 0.22 \\ 1 & 1 & 0.17 & 0.28 & 0.06 & 1 \\ 1 & 0.21 & 1 & 1 & 0.17 & 1 \\ 0.33 & 1 & 0 & 0.17 & 0 & 0.33 \\ 0.25 & 0.08 & 0.25 & 1 & 1 & 0.17 \\ 1 & 1 & 0.17 & 1 & 0.08 & 1 \\ 0.25 & 0 & 1 & 0.17 & 1 & 0.08 \end{pmatrix}$$
As expected from the model’s asymmetric nature, the resulting unified membership matrix is not the transpose of the matrix from the forward problem. The final scores for each candidate (now parameters) are calculated as
S ( e 1 ) = 4.83 , S ( e 2 ) = 3.4 , S ( e 3 ) = 3.58 , S ( e 4 ) = 3.83 , S ( e 5 ) = 3.31 , S ( e 6 ) = 3.81 .
The maximum score is S ( e 1 ) , indicating that candidate e 1 is the optimal choice in the inverse formulation. This result suggests that from the perspective of the available criteria, candidate e 1 has the strongest overall profile.

4.2. A Small Note on the Asymmetry of the Relational Model

In classical soft set theory, which uses a binary incidence matrix (where entries are 1 or 0), the matrix of the inverse soft set is indeed the transpose of the forward soft set’s matrix. This is because the underlying relationship is a simple, symmetric “belongs to” check.
However, the RMVC method does not operate on this simple incidence matrix. Instead, it generates a relational membership matrix, M m × n , where each entry Θ e i ( u j ) is a calculated, real-valued score. The calculation of this score is fundamentally asymmetric. The formula computes the membership of a candidate u j with respect to a parameter e i by analyzing the co-occurrence patterns of u j across the entire universe of other parameters ( E { e i } ).
In an “inverse” relational problem—calculating the membership of a parameter e i with respect to a candidate u j —the roles of U and E would be interchanged in the formula. The calculation would then be based on the co-occurrence patterns of e i across the universe of other candidates ( U { u j } ). This would lead to a completely different set of calculations and a resulting matrix that is not a simple transpose of the former relational matrix.
This asymmetry does not impact model consistency; on the contrary, it is the source of the model’s novelty and power. It ensures that the “view” from the perspective of parameters (how candidates relate to a criterion) and the “view” from the perspective of candidates (how criteria relate to a candidate) are distinct and rich with contextual information. This is a deliberate and core feature of the relational model, not an inconsistency.
We now compare the ranking produced by our inverse RMVC analysis with the ranking from a traditional count-based approach [3] for the same inverse problem. The count-based method, which relies on integer scores, results in a ranking with extensive ties:
e 1 > e 2 = e 3 = e 4 = e 5 = e 6 .
In sharp contrast, the RMVC algorithm produces a fully resolved and granular ranking:
e 1 > e 4 > e 6 > e 3 > e 2 > e 5 .
This comparison again highlights the discriminative power of the RMVC method. By moving beyond simple integer counts to a relational, real-valued scoring system, the algorithm effectively breaks ties and provides a more detailed and useful ordering. This enhanced granularity is a direct result of the model’s design and is critical for nuanced decision-making in complex scenarios.

5. Application to a Real-World Benchmark Case

To validate the practical efficiency of the proposed RMVC method, we apply it to a well-established benchmark problem from the MCDM literature. This approach not only demonstrates the algorithm’s applicability to a real-world scenario but also allows for a direct comparison of its performance against established methods.

5.1. Benchmark Dataset: The Automotive Selection Problem

The dataset used is the classic automotive selection problem presented by Yoon [27] in his seminal paper “A Reconciliation among Discrete Compromise Solutions.” He sourced it from Fleischer’s book [28]. This problem has since become a standard benchmark, used by numerous researchers to test and compare new decision-making methods. The problem involves ranking eight car models based on 16 different criteria.
The original decision matrix, as presented in Yoon [27] (p. 284, Table 2), is shown in Table 4, below. This table includes the raw performance values for each car across all criteria, as well as the criterion weights assigned in the original study.

5.2. Data Preprocessing: From Raw Data to a Comprehensive Soft Set

The original dataset is extensive, comprising 16 distinct criteria. To provide a robust and challenging test for our RMVC method, we select a comprehensive subset of 12 criteria. This expanded set moves beyond a minimal selection and allows for a more nuanced analysis, testing our method against a rich and diverse set of attributes.
The goal of this selection is to create a balanced model covering the primary aspects of vehicle evaluation: long- and short-term economics, performance and efficiency, safety, comfort, and practicality. As our RMVC method does not use the criterion weights, demonstrating its effectiveness on a larger, more complex set of unweighted criteria is a key validation step.
The selected criteria, now grouped by their evaluation category, are as follows:
  • Economic factors (three criteria): We include the primary “List price” (C1) and the long-term “Operating cost” (C2) to cover different cost aspects. To complete the economic picture, we also include “Resale value” (C3), a low-weight benefit criterion.
  • Performance and efficiency (four criteria): We select both “City mileage” (C5) and “Highway mileage” (C6) for a full efficiency profile. For performance, we include the opposing metrics of “Acceleration” (C7) and “Braking” (C8), both of which are cost-type criteria requiring transformation.
  • Driving dynamics and comfort (three criteria): We include “Ride” (C9) for comfort and “Handling” (C10) for vehicle dynamics. To represent interior comfort, we add “Front-seat room” (C13).
  • Practicality and maintenance (two criteria): We include “Maintainability” (C4) and “Trunk space” (C12) as key indicators of a vehicle’s day-to-day usability.
This curated subset of 12 parameters provides a rigorous test case for the RMVC method. We now denote these parameters as e 1 , , e 12 for consistency with our soft set notation. The raw data for this comprehensive subset is shown below in Table 5.
To apply our RMVC method, all parameters must be of the benefit type (i.e., higher values are better). The four cost-type parameters ( e 1 , e 2 , e 7 , e 8 ) are converted into benefit values using a scaled reciprocal transformation ( 1 / x ).
To improve the readability of the transformed data and ensure the values are of a comparable magnitude to the other criteria, we use scaling factors of 1000 or 100 in the transformation (i.e., 1000 / x or 100 / x ). This linear scaling does not change the internal ranking of the alternatives within the criterion but makes the resulting table more intuitive. The benefit-type parameters remain unchanged. The resulting “benefit table,” which serves as the basis for our analysis, is presented in Table 6.
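A sketch of this preprocessing step (the criterion types and scale factors are those described above):

```python
def to_benefit(value, is_cost, scale=1000):
    """Convert a cost-type value to a benefit value via a scaled reciprocal.

    The scale factor (1000 or 100 in the text) is purely cosmetic: it keeps the
    transformed values comparable in magnitude without changing the ranking.
    """
    return scale / value if is_cost else value
```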

5.3. Constructing the Soft Set

With all 12 parameters converted to a uniform benefit format as shown in Table 6, we can now define the soft set parameter sets, Φ ( e i ) . To ensure maximum objectivity and methodological consistency, we employ a single, data-driven thresholding rule across parameters.
The “Above Average” rule: For each of the 12 parameters, we calculate the arithmetic mean of the benefit values for all eight alternatives. An alternative is considered to “possess” the parameter (i.e., belong to the set Φ ( e i ) ) if its benefit value is strictly greater than the mean for that parameter. This unified approach eliminates subjective threshold-setting and grounds the entire preprocessing phase in the internal structure of the data.
For example, for the first parameter e 1 (“Economical Price”), the arithmetic mean of the benefit values in Table 6 is calculated as 0.245 . The alternatives with a score strictly greater than this mean are { A 1 , A 2 , A 3 , A 6 , A 7 } . Therefore, the resulting parameter set is
Φ ( e 1 ) = { A 1 , A 2 , A 3 , A 6 , A 7 }
Applying this same data-driven rule to the remaining 11 parameters yields the complete soft set. The results are summarized in Table 7 below.
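The "above average" thresholding can be sketched as follows, assuming the benefit table is stored as a mapping from criterion label to the list of eight benefit values in alternative order:

```python
def above_average_sets(benefit_table, alternatives):
    """Phi(e_i): alternatives whose benefit value is strictly above the criterion mean."""
    phi = {}
    for criterion, values in benefit_table.items():
        mean = sum(values) / len(values)
        phi[criterion] = {a for a, v in zip(alternatives, values) if v > mean}
    return phi
```

For the first criterion, with mean 0.245, this rule selects exactly {A1, A2, A3, A6, A7}, as in the worked example above.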
This transparent and objective preprocessing yields a well-defined soft set, which is represented in a binary matrix format in Table 8. This matrix now serves as the final input for the RMVC algorithm.

5.4. Application of the RMVC Algorithm and Results

With the soft set representation of the problem established in Table 8, we now apply the RMVC algorithm. The core of this method is the construction of the relational membership matrix, M m × n .
Unlike a simple binary value, each entry in this matrix, Ω Θ e i ( A j ) , is a normalized score between 0 and 1. It quantifies how strongly an alternative A j is associated with a parameter e i , considering its relationship with all other alternatives. For alternatives that are members of the parameter set Φ ( e i ) , this value is 1.0. For non-members, the value is the calculated relational score.
The resulting relational membership matrix, M , based on RMVC results, is presented in Table 9.
The final score, S ( A j ) , for each alternative is calculated by summing the values in its corresponding column of the matrix. A higher score indicates a more preferable alternative:
$$S(A_j) = \sum_{i=1}^{12} \Omega_{\Theta_{e_i}}(A_j).$$
These calculations yield the final scores and the resulting RMVC ranking. To provide a comprehensive benchmark, we compare these results with two different TOPSIS rankings:
(1) a “focused TOPSIS” analysis that we computed using the exact same 12 criteria;
(2) the “original TOPSIS” ranking from Yoon [27], which used all 16 criteria.
This multi-faceted comparison is presented in Table 10.

5.5. Discussion of Final Results

The comprehensive comparison presented in Table 10 provides a powerful validation of the RMVC method’s unique approach, revealing three different optimal choices depending on the methodology and the scope of the criteria.
  • TOPSIS’s sensitivity to scope: The TOPSIS method proves to be highly sensitive to the set of criteria. With all 16 criteria, the original study identifies A8 (Peugeot 504) as the best choice, likely due to its strengths in several high-weight criteria that were excluded from our 12-item set. However, when focused on our 12-criteria set, the optimal choice dramatically shifts to A6 (Cadillac Seville). This demonstrates how criterion selection and weighting can fundamentally alter the outcome of utility-based methods.
  • RMVC’s robustness and consistency: On the other hand, the most compelling finding is the stability of the result produced by the RMVC method. A key strength of the proposed methodology is its consistency across varying levels of analytical scope. Preliminary analyses using a more focused 6-criteria subset, the 16-criteria set itself, and the comprehensive 12-criteria analysis presented here, all identify A3 (Plymouth Duster) as the clear and unwavering optimal choice. This remarkable stability stems from the RMVC’s weight-agnostic and relational nature. The method does not seek a “utility champion” based on explicit weights but rather identifies a “consensus champion” based on overall profile balance. A3 consistently emerges as the most “well-connected” alternative that satisfies the broadest range of criteria, proving its robustness regardless of the analytical scope.
In conclusion, this final analysis powerfully illustrates the value of the RMVC method. While TOPSIS provides valuable insights based on pre-defined utility and weights, its results can be volatile. RMVC offers a more stable and robust alternative, identifying the most balanced and consensual choice by leveraging the intrinsic relationships within the data. This demonstrates its strength as a new and reliable decision-making tool.

6. Conclusions and Future Work

This paper addressed a fundamental challenge in decision-making under uncertainty: the information loss and ranking ambiguity inherent in classical soft set theory. We have put forward the Relational Membership Value Calculation (RMVC), an algorithmic framework designed to overcome these limitations by introducing a more nuanced, fine-grained approach to evaluating candidates. Our primary contributions and findings are as follows:
  • We introduced a relational membership function, $\Theta$, that evaluates the implicit connections between a candidate's existing attributes and those it lacks, thereby preserving critical information that is otherwise discarded.
  • We provided a rigorous analysis of the algorithm’s theoretical properties, establishing its polynomial-time complexity and its robustness against data sparsity.
  • Through a comprehensive sensitivity analysis, we demonstrated the stability of the RMVC’s output rankings under data perturbations.
  • Using the Gini Index, we provided quantitative evidence that the RMVC framework achieves significantly higher granularity, leading to highly differentiated rankings with a drastically reduced likelihood of ties.
The implications of these findings are significant. The RMVC provides a more trustworthy foundation for automated and semi-automated decision-making by producing more reliable rankings with minimal ambiguity. This makes it a valuable tool for integration into next-generation AI-driven decision support and expert systems, where decision ambiguity can lead to critical failures. The provision of the algorithm as an open-source Python program further enhances its value to the scientific community by ensuring transparency and facilitating adoption.
Despite these promising results, we acknowledge several limitations that open avenues for future research. Our quantitative validation was based on illustrative case studies; future work should involve applying the RMVC to diverse, large-scale, real-world datasets to further assess its generalizability.
This work opens up several promising avenues for future investigation, which can be categorized into theoretical extensions, advanced aggregation methods, and enhancements for practical application.
A primary direction for future work involves extending the RMVC model from its current crisp framework to operate in more complex environments. Future work could focus on the following:
  • Fuzzy framework: The crisp $\delta$ function could be replaced by a fuzzy counterpart (e.g., using a t-norm), as illustrated in the sketch after this list. The development of a Fuzzy Relational Membership Value Calculation (F-RMVC) algorithm is a compelling next step.
  • Advanced uncertainty models: Furthermore, the model can be adapted for even more sophisticated frameworks such as interval-valued fuzzy sets, intuitionistic fuzzy sets, or neutrosophic sets. Each extension would allow the algorithm to handle not just uncertainty but also concepts like indeterminacy and contradictory information, significantly broadening its applicability.
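To illustrate the first of these directions, the toy sketch below contrasts a crisp membership indicator with graded memberships combined through the minimum t-norm. The function names are hypothetical illustrations of ours; the precise F-RMVC formulation remains open.

```python
# Hypothetical sketch: a crisp indicator generalized to fuzzy degrees (names are ours).
def delta_crisp(x, subset):
    """Crisp indicator: 1.0 if x belongs to the subset, else 0.0."""
    return 1.0 if x in subset else 0.0

def t_norm_min(a, b):
    """Minimum t-norm, the usual fuzzy generalization of logical AND."""
    return min(a, b)

# Crisp case: A1 either satisfies both criteria or the conjunction collapses to 0.
print(t_norm_min(delta_crisp("A1", {"A1", "A3"}), delta_crisp("A1", {"A2"})))  # 0.0

# Fuzzy case: graded memberships survive the conjunction instead of collapsing.
print(t_norm_min(0.7, 0.4))  # 0.4
```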
The current model uses a raw sum for score aggregation, a deliberate choice to establish a parameter-free baseline. Future work could investigate more advanced aggregation techniques to create a more flexible and adaptive model, including the following:
  • Weighted aggregation: Incorporating criterion weights would allow the model to reflect the varying importance of different parameters in real-world scenarios.
  • Entropy-based weighting: Information theory could be used to automatically assign higher weights to more “discriminative” criteria, providing a data-driven approach to weighting.
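As a concrete sketch of the entropy-based idea, the following function implements the classical entropy weight method. This is a standard technique shown here for illustration; it is not part of the current RMVC program.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are alternatives, columns are criteria.
    Criteria that spread the alternatives more unevenly carry lower entropy
    and therefore receive higher weight (greater discriminative power)."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    plogp = P * np.log(np.where(P > 0, P, 1.0))    # define 0 * log(0) = 0
    e = -k * plogp.sum(axis=0)                     # entropy of each criterion
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

# Toy example: criterion 2 separates the alternatives far more than criterion 1.
X = np.array([[5.0, 1.0],
              [5.1, 9.0],
              [4.9, 2.0]])
print(entropy_weights(X))  # the weight of criterion 2 dominates
```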
Future research can also focus on strengthening the model’s practical and theoretical reliability through the following:
  • Handling parameter redundancy: The current model assumes parameters are distinct. An advanced extension could involve an automated, in-built mechanism to account for highly correlated or redundant criteria, for example, by down-weighting the contribution of similar parameters.
  • Formalizing stability guarantees: While traditional error bounds are not directly applicable, the concept of robustness can be formalized. Future work could involve defining a distance metric between rankings (e.g., Kendall’s Tau) and then deriving probabilistic bounds on the stability of the output, further solidifying the model’s theoretical foundations.
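A simple empirical precursor to such guarantees is a Monte Carlo stability check, sketched below. The Gaussian noise model, its scale, and the trial count are our assumptions; only the final scores come from Table 10.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_stability(scores, noise_scale=0.05, trials=1000, seed=0):
    """Monte Carlo sketch: perturb the scores with Gaussian noise and report
    the mean Kendall's Tau between the original and perturbed orderings."""
    rng = np.random.default_rng(seed)
    taus = []
    for _ in range(trials):
        noisy = scores + rng.normal(0.0, noise_scale, size=scores.shape)
        tau, _ = kendalltau(scores, noisy)   # tau-b, which handles tied scores
        taus.append(tau)
    return float(np.mean(taus))

# RMVC scores from Table 10 (alternatives A1..A8):
scores = np.array([5.82, 6.27, 9.02, 6.14, 5.86, 5.86, 7.25, 5.78])
print(rank_stability(scores))  # values near 1.0 indicate a stable ranking
```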
An important objective for future work is to extend the comparative analysis initiated in this paper into a formal, large-scale study. This would involve using metrics to rigorously validate the enhanced granularity of the RMVC method, including the following (both metrics are sketched in code after this list):
  • Gini Index: A lower Gini Index for RMVC scores compared to traditional methods would provide quantitative evidence of a more evenly distributed score space and a reduced probability of ties.
  • Kullback–Leibler (KL) divergence: A high KL divergence between the RMVC and traditional score distributions would further support the argument that the RMVC provides a richer and more informative ranking.
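The sketch below shows how both metrics might be computed for two score vectors: the RMVC scores from Table 10 and, for contrast, the classical choice values obtained by summing the binary columns of Table 8. Rescaling the score vectors into probability distributions for the KL divergence is our assumption; other normalizations are possible.

```python
import numpy as np

def gini(scores):
    """Gini Index of a non-negative score vector (0 means perfectly even)."""
    x = np.sort(np.asarray(scores, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return float(2 * (i * x).sum() / (n * x.sum()) - (n + 1) / n)

def kl_divergence(p_scores, q_scores):
    """D(P || Q) after rescaling both score vectors to probability distributions."""
    p = np.asarray(p_scores, dtype=float)
    q = np.asarray(q_scores, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

rmvc = [5.82, 6.27, 9.02, 6.14, 5.86, 5.86, 7.25, 5.78]  # Table 10
binary = [5, 5, 8, 5, 5, 5, 6, 5]                        # column sums of Table 8; note the heavy ties
print(gini(rmvc), gini(binary), kl_divergence(rmvc, binary))
```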
Such an investigation would provide a standardized framework for comparing the resolution power of various decision-making algorithms. In conclusion, this work contributes a validated, transparent, and robust algorithmic tool to the field, paving the way for more precise and reliable data-driven decision-making in complex environments.

Author Contributions

Conceptualization, A.D. and B.C.; methodology, A.D. and F.O.E.; software, A.D.; validation, A.D. and F.O.E.; formal analysis, A.D.; investigation, A.D. and F.O.E.; resources, A.D. and F.O.E.; data curation, A.D. and F.O.E.; writing—original draft preparation, A.D.; writing—review and editing, A.D.; visualization, A.D.; supervision, B.C.; funding acquisition, A.D., F.O.E., and B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The open-source Python code for the RMVC program is available at https://github.com/DrDayioglu/RMVC.git, accessed on 30 July 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Molodtsov, D. Soft set theory first results. Comput. Math. Appl. 1999, 37, 19–31. [Google Scholar] [CrossRef]
  2. Maji, P.K.; Biswas, R.; Roy, A.R. Soft set theory. Comput. Math. Appl. 2003, 45, 555–562. [Google Scholar] [CrossRef]
  3. Maji, P.K.; Roy, A.R.; Biswas, R. An application of soft sets in a decision making problem. Comput. Math. Appl. 2002, 44, 1077–1083. [Google Scholar] [CrossRef]
  4. Chen, D.; Tsang, E.C.C.; Yeung, D.S.; Wang, X. The parameterization reduction of soft sets and its applications. Comput. Math. Appl. 2005, 49, 757–763. [Google Scholar] [CrossRef]
  5. Ali, M.I.; Feng, F.; Liu, X.; Min, W.K.; Shabir, M. On some new operations in soft set theory. Comput. Math. Appl. 2009, 57, 1547–1553. [Google Scholar] [CrossRef]
  6. Kong, Z.; Gao, L.; Wang, L.; Li, S. The normal parameter reduction of soft sets and its algorithm. Comput. Math. Appl. 2008, 56, 3029–3037. [Google Scholar] [CrossRef]
  7. Cagman, N.; Enginoglu, S. Soft matrix theory and its decision making. Comput. Math. Appl. 2010, 59, 3308–3314. [Google Scholar] [CrossRef]
  8. Peng, X.; Yang, Y. Algorithms for interval-valued fuzzy soft sets in stochastic multi-criteria decision making based on regret theory and prospect theory with combined weight. Appl. Soft Comput. 2017, 54, 415–430. [Google Scholar] [CrossRef]
  9. Peng, X.; Liu, C. Algorithms for neutrosophic soft decision making based on EDAS, new similarity measure and level soft set. J. Intell. Fuzzy Syst. 2017, 32, 955–968. [Google Scholar] [CrossRef]
  10. Smarandache, F. Extension of soft set to hypersoft set, and then to plithogenic hypersoft set. Neutrosophic Sets Syst. 2018, 22, 168–170. [Google Scholar]
  11. Ahmad, B.; Kharal, A. On fuzzy soft sets. Adv. Fuzzy Syst. 2009, 2009, 586507. [Google Scholar] [CrossRef]
  12. Akram, M.; Luqman, A. Fuzzy Hypergraphs and Related Extensions; Studies in Fuzziness and Soft Computing; Springer: Singapore, 2020; Volume 390. [Google Scholar]
  13. Demirtas, N.; Dalkilic, O. An application of fuzzy soft relation in decision making. J. New Theory 2020, 30, 98–107. [Google Scholar]
  14. Dalkilic, O. A novel approach to soft set theory in decision-making under uncertainty: New definitions and applications. Int. J. Comput. Math. 2021, 98, 1935–1945. [Google Scholar] [CrossRef]
  15. Dalkilic, O. On topological structures of fuzzy parametrized soft sets. J. Math. 2021, 2021, 3942708. [Google Scholar]
  16. Dalkilic, O. Relations on neutrosophic soft set and their application in decision making. J. Appl. Math. Comput. 2021, 67, 257–273. [Google Scholar] [CrossRef]
  17. Dalkilic, O.; Demirtas, N. VFP-soft sets and its application on decision making problems. J. Polytech. 2021, 24, 1581–1589. [Google Scholar]
  18. Nawaz, H.S.; Akram, M. Granulation of protein-protein interaction networks in Pythagorean fuzzy soft environment. J. Appl. Math. Comput. 2021, 67, 293–320. [Google Scholar] [CrossRef]
  19. Siddique, I.; Ahmad, T.; Awrejcewicz, J. A novel approach to study the effect of temperature and concentration on dynamic viscosity of MgO-Ag/water hybrid nanofluid using artificial neural network. Case Stud. Therm. Eng. 2021, 26, 101055. [Google Scholar]
  20. Zulqarnain, R.M.; Xin, X.L.; Garg, H.; Khan, W.A. Aggregation operators of pythagorean fuzzy soft sets with their application for green supplier chain management. J. Intell. Fuzzy Syst. 2021, 40, 5545–5563. [Google Scholar] [CrossRef]
  21. Zulqarnain, R.M.; Xin, X.L.; Saeed, M.; Ahmad, N.; Dayan, F.; Ahmad, B. Some fundamental operations on interval valued neutrosophic hypersoft set with their properties. Neutrosophic Sets Syst. 2021, 40, 134–148. [Google Scholar]
  22. Zulqarnain, R.M.; Xin, X.L.; Saqlain, M.; Khan, W.A. TOPSIS method based on correlation coefficient under pythagorean fuzzy soft environment and its application towards green supply chain management. Sustainability 2021, 13, 1642. [Google Scholar] [CrossRef]
  23. Zulqarnain, R.M.; Xin, X.L.; Saeed, M.; Smarandache, F.; Ahmad, N. Generalized aggregate operators on neutrosophic hypersoft set. Neutrosophic Sets Syst. 2021, 36, 271–281. [Google Scholar]
  24. Sezgin, A.; Atagun, A.O.; Cagan, N. A complete study on and-product of soft sets. SIGMA 2025, 43, 1–14. [Google Scholar] [CrossRef]
  25. Xia, S.; Chen, L.; Yang, H. A soft set based approach for the decision-making problem with heterogeneous information. AIMS Math. 2022, 7, 20420–20440. [Google Scholar] [CrossRef]
  26. Cetkin, V.; Aygunoglu, A.; Aygun, H. A new approach in handling soft decision making problems. J. Nonlinear Sci. Appl. 2016, 9, 231–239. [Google Scholar] [CrossRef]
  27. Yoon, K. A Reconciliation Among Discrete Compromise Solutions. J. Oper. Res. Soc. 1987, 38, 277–286. [Google Scholar] [CrossRef]
  28. Fleischer, G.A. Engineering Economy; PWS Engineering: Boston, MA, USA, 1984. [Google Scholar]
Figure 1. Results of RMVC for Example 1.
Figure 2. Results of RMVC for Example 2.
Table 1. Membership matrix format.

|         | 1   | 2   | 3   | 4   | 5   |
|---------|-----|-----|-----|-----|-----|
| $e_1$   | 1   | 1   | 1   | 1/3 | 1   |
| $e_2$   | 5/9 | 1   | 1/3 | 1   | 1   |
| $e_3$   | 1   | 4/9 | 1   | 1   | 4/9 |
| $e_4$   | 1   | 1   | 4/9 | 1/3 | 1   |
Table 2. Score Comparison.

| Candidate | Original Score $S(i)$ | New Score $S'(i)$ | Score Change |
|-----------|-----------------------|-------------------|--------------|
| 1         | 3.67                  | 3.73              | +0.06        |
| 2         | 3.60                  | 3.73              | +0.13        |
| 3         | 4.60 (Rank 1)         | 4.67 (Rank 1)     | +0.07        |
| 4         | 1.30                  | 1.28              | −0.02        |
| 5         | 2.55                  | 3.87              | +1.32        |
| 6         | 4.40                  | 4.47              | +0.07        |
| 7         | 2.35                  | 2.36              | +0.01        |
Table 3. Gini Index comparison.

| Score Distribution     | Gini Index |
|------------------------|------------|
| Traditional (Maji–Roy) | 0.210      |
| Proposed (RMVC)        | 0.192      |
Table 4. The original decision matrix from Yoon (1987) [27].

| Criterion | Type | Weight | A1 (Ford Pinto) | A2 (Chevrolet Nova) | A3 (Plymouth Duster) | A4 (Mercury Monarch) | A5 (Ford Granada) | A6 (Cadillac Seville) | A7 (Volvo 244 DL) | A8 (Peugeot 504) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. List price (USD) | Cost | 0.216 | 2769 | 3222 | 3526 | 4565 | 9948 | 3679 | 3269 | 9214 |
| 2. Operating cost (USD/yr) | Cost | 0.038 | 1170 | 1540 | 1790 | 1980 | 2000 | 1540 | 1540 | 2000 |
| 3. Resale value after 5 yr (USD) | Benefit | 0.004 | 1000 | 1200 | 1200 | 2000 | 4000 | 1200 | 1200 | 4000 |
| 4. Maintainability | Benefit | 0.027 | 5.0 | 5.0 | 7.0 | 2.5 | 6.0 | 7.0 | 5.0 | 6.0 |
| 5. City mileage (mpg) | Benefit | 0.162 | 18.0 | 12.0 | 10.0 | 9.0 | 11.0 | 19.0 | 12.0 | 10.0 |
| 6. Highway mileage (mpg) | Benefit | 0.162 | 26.0 | 19.0 | 16.0 | 18.0 | 16.0 | 28.0 | 19.0 | 15.0 |
| 7. Acceleration (0–60; s) | Cost | 0.032 | 14.7 | 19.1 | 10.8 | 13.0 | 13.7 | 16.2 | 15.3 | 13.5 |
| 8. Braking (60–0; ft) | Cost | 0.002 | 180.0 | 165.0 | 155.6 | 141.6 | 178.0 | 170.0 | 165.0 | 190.6 |
| 9. Ride | Benefit | 0.092 | 4.3 | 8.6 | 8.6 | 5.7 | 10.0 | 5.7 | 8.6 | 5.7 |
| 10. Handling | Benefit | 0.017 | 7.0 | 7.0 | 7.0 | 7.0 | 4.0 | 4.0 | 7.0 | 1.0 |
| 11. Maneuverability | Benefit | 0.022 | 7.0 | 7.0 | 8.0 | 6.0 | 5.0 | 6.0 | 7.0 | 3.0 |
| 12. Trunk space (cu. ft) | Benefit | 0.009 | 6.3 | 13.0 | 14.3 | 20.4 | 13.6 | 10.9 | 16.0 | 19.3 |
| 13. Front-seat room | Benefit | 0.108 | 7.5 | 7.5 | 10.0 | 7.5 | 10.0 | 7.5 | 7.5 | 10.0 |
| 14. Front-seat comfort | Benefit | 0.108 | 7.1 | 8.6 | 7.5 | 7.1 | 7.1 | 8.6 | 7.1 | 7.1 |
| 15. Rear-seat room | Benefit | 0.001 | 7.5 | 7.5 | 10.0 | 7.5 | 10.0 | 10.0 | 7.5 | 10.0 |
| 16. Rear-seat comfort | Benefit | 0.001 | 5.0 | 7.5 | 7.5 | 7.5 | 7.5 | 10.0 | 7.5 | 7.5 |

Data from G.A. Fleischer [28].
Table 5. Selected raw data for 12 criteria (Yoon, 1987) [27].

| Parameter | Type | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 |
|---|---|---|---|---|---|---|---|---|---|
| $e_1$: Price (USD) | Cost | 2769 | 3222 | 3526 | 4565 | 9948 | 3679 | 3269 | 9214 |
| $e_2$: Operating cost (USD/yr) | Cost | 1170 | 1540 | 1790 | 1980 | 2000 | 1540 | 1540 | 2000 |
| $e_3$: Resale value (USD) | Benefit | 1000 | 1200 | 1200 | 2000 | 4000 | 1200 | 1200 | 4000 |
| $e_4$: Maintainability | Benefit | 5.0 | 5.0 | 7.0 | 2.5 | 6.0 | 7.0 | 5.0 | 6.0 |
| $e_5$: City mileage (mpg) | Benefit | 18.0 | 12.0 | 10.0 | 9.0 | 11.0 | 19.0 | 12.0 | 10.0 |
| $e_6$: Highway mileage (mpg) | Benefit | 26.0 | 19.0 | 16.0 | 18.0 | 16.0 | 28.0 | 19.0 | 15.0 |
| $e_7$: Acceleration (s) | Cost | 14.7 | 19.1 | 10.8 | 13.0 | 13.7 | 16.2 | 15.3 | 13.5 |
| $e_8$: Braking (ft) | Cost | 180.0 | 165.0 | 155.6 | 141.6 | 178.0 | 170.0 | 165.0 | 190.6 |
| $e_9$: Ride | Benefit | 4.3 | 8.6 | 8.6 | 5.7 | 10.0 | 5.7 | 8.6 | 5.7 |
| $e_{10}$: Handling | Benefit | 7.0 | 7.0 | 7.0 | 7.0 | 4.0 | 4.0 | 7.0 | 1.0 |
| $e_{11}$: Front-seat room | Benefit | 7.5 | 7.5 | 10.0 | 7.5 | 10.0 | 7.5 | 7.5 | 10.0 |
| $e_{12}$: Trunk space (cu. ft) | Benefit | 6.3 | 13.0 | 14.3 | 20.4 | 13.6 | 10.9 | 16.0 | 19.3 |
Table 6. Transformed benefit table for 12 criteria (with scaled values).

| Parameter | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 |
|---|---|---|---|---|---|---|---|---|
| $e_1$: Price (1000/USD) | 0.361 | 0.310 | 0.284 | 0.219 | 0.101 | 0.272 | 0.306 | 0.109 |
| $e_2$: Operating cost (1000/USD) | 0.855 | 0.649 | 0.559 | 0.505 | 0.500 | 0.649 | 0.649 | 0.500 |
| $e_3$: Resale value (USD) | 1000 | 1200 | 1200 | 2000 | 4000 | 1200 | 1200 | 4000 |
| $e_4$: Maintainability | 5.0 | 5.0 | 7.0 | 2.5 | 6.0 | 7.0 | 5.0 | 6.0 |
| $e_5$: City mileage (mpg) | 18.0 | 12.0 | 10.0 | 9.0 | 11.0 | 19.0 | 12.0 | 10.0 |
| $e_6$: Highway mileage (mpg) | 26.0 | 19.0 | 16.0 | 18.0 | 16.0 | 28.0 | 19.0 | 15.0 |
| $e_7$: Acceleration (100/s) | 6.80 | 5.24 | 9.26 | 7.69 | 7.30 | 6.17 | 6.54 | 7.41 |
| $e_8$: Braking (100/ft) | 0.556 | 0.606 | 0.643 | 0.706 | 0.562 | 0.588 | 0.606 | 0.525 |
| $e_9$: Ride | 4.3 | 8.6 | 8.6 | 5.7 | 10.0 | 5.7 | 8.6 | 5.7 |
| $e_{10}$: Handling | 7.0 | 7.0 | 7.0 | 7.0 | 4.0 | 4.0 | 7.0 | 1.0 |
| $e_{11}$: Front-seat room | 7.5 | 7.5 | 10.0 | 7.5 | 10.0 | 7.5 | 7.5 | 10.0 |
| $e_{12}$: Trunk space (cu. ft) | 6.3 | 13.0 | 14.3 | 20.4 | 13.6 | 10.9 | 16.0 | 19.3 |
Table 7. Parameter ID and resulting soft set.

| Parameter ID | Parameter Name | Calculated Mean | Resulting Soft Set $\Phi(e_i)$ |
|---|---|---|---|
| $e_1$ | Price | 0.245 | $\{A_1, A_2, A_3, A_6, A_7\}$ |
| $e_2$ | Operating cost | 0.608 | $\{A_1, A_2, A_6, A_7\}$ |
| $e_3$ | Resale value | 1975 | $\{A_4, A_5, A_8\}$ |
| $e_4$ | Maintainability | 5.438 | $\{A_3, A_5, A_6, A_8\}$ |
| $e_5$ | City mileage | 12.625 | $\{A_1, A_6\}$ |
| $e_6$ | Highway mileage | 19.625 | $\{A_1, A_6\}$ |
| $e_7$ | Acceleration | 7.051 | $\{A_3, A_4, A_5, A_8\}$ |
| $e_8$ | Braking | 0.599 | $\{A_2, A_3, A_4, A_7\}$ |
| $e_9$ | Ride | 7.15 | $\{A_2, A_3, A_5, A_7\}$ |
| $e_{10}$ | Handling | 5.5 | $\{A_1, A_2, A_3, A_4, A_7\}$ |
| $e_{11}$ | Front-seat room | 8.438 | $\{A_3, A_5, A_8\}$ |
| $e_{12}$ | Trunk space | 14.225 | $\{A_3, A_4, A_7, A_8\}$ |
Table 8. The final soft set matrix representation for 12 criteria (unified threshold).

| Parameter | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 |
|---|---|---|---|---|---|---|---|---|
| $e_1$: Price | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
| $e_2$: Operating cost | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 |
| $e_3$: Resale value | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 |
| $e_4$: Maintainability | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 |
| $e_5$: City mileage | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| $e_6$: Highway mileage | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| $e_7$: Acceleration | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 |
| $e_8$: Braking | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 |
| $e_9$: Ride | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 |
| $e_{10}$: Handling | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 |
| $e_{11}$: Front-seat room | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 |
| $e_{12}$: Trunk space | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
Table 9. The final relational membership matrix, $M$.

| Parameter | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 |
|---|---|---|---|---|---|---|---|---|
| $e_1$: Price | 1.00 | 1.00 | 1.00 | 0.18 | 0.13 | 1.00 | 1.00 | 0.11 |
| $e_2$: Operating cost | 1.00 | 1.00 | 0.30 | 0.14 | 0.07 | 1.00 | 1.00 | 0.05 |
| $e_3$: Resale value | 0.03 | 0.09 | 0.36 | 1.00 | 1.00 | 0.06 | 0.15 | 1.00 |
| $e_4$: Maintainability | 0.14 | 0.16 | 1.00 | 0.20 | 1.00 | 1.00 | 0.20 | 1.00 |
| $e_5$: City mileage | 1.00 | 0.23 | 0.18 | 0.05 | 0.05 | 1.00 | 0.23 | 0.05 |
| $e_6$: Highway mileage | 1.00 | 0.23 | 0.18 | 0.05 | 0.05 | 1.00 | 0.23 | 0.05 |
| $e_7$: Acceleration | 0.07 | 0.16 | 1.00 | 1.00 | 1.00 | 0.09 | 0.23 | 1.00 |
| $e_8$: Braking | 0.20 | 1.00 | 1.00 | 1.00 | 0.18 | 0.14 | 1.00 | 0.18 |
| $e_9$: Ride | 0.18 | 1.00 | 1.00 | 0.25 | 1.00 | 0.16 | 1.00 | 0.20 |
| $e_{10}$: Handling | 1.00 | 1.00 | 1.00 | 1.00 | 0.15 | 0.18 | 1.00 | 0.15 |
| $e_{11}$: Front-seat room | 0.06 | 0.15 | 1.00 | 0.27 | 1.00 | 0.12 | 0.21 | 1.00 |
| $e_{12}$: Trunk space | 0.14 | 0.25 | 1.00 | 1.00 | 0.25 | 0.11 | 1.00 | 1.00 |
Table 10. Final comprehensive comparison: RMVC vs. focused and original TOPSIS.

| Alternative | RMVC Score | RMVC Rank | Focused TOPSIS Rank (12 Criteria) | Original TOPSIS Rank (16 Criteria) |
|---|---|---|---|---|
| $A_1$ (Ford Pinto) | 5.82 | 6 | 2 | 8 |
| $A_2$ (Chevrolet Nova) | 6.27 | 3 | 3 | 5 |
| $A_3$ (Plymouth Duster) | 9.02 | 1 | 5 | 6 |
| $A_4$ (Mercury Monarch) | 6.14 | 4 | 6 | 4 |
| $A_5$ (Ford Granada) | 5.86 | 5 (Tie) | 7 | 2 |
| $A_6$ (Cadillac Seville) | 5.86 | 5 (Tie) | 1 | 7 |
| $A_7$ (Volvo 244 DL) | 7.25 | 2 | 4 | 3 |
| $A_8$ (Peugeot 504) | 5.78 | 8 | 8 | 1 |