3.1. Baseline Token Valuation Method
3.1.1. Service-Token Model for a Baseline Token Valuation Method
The service-token model is based on a weighted formula that integrates the following three main parameters: cost, time, and quality, each contributing to the token value assigned to a service. Below is a detailed description of the model.
The main variables and parameters of the model are as follows:
$S_i$—a specific service (e.g., consulting, training, and spare parts);
$C_i$—cost of providing service $S_i$ (e.g., operational cost and labor cost);
$T_i$—time or effort required for service $S_i$;
$Q_i$—quality factor for service $S_i$ (e.g., reliability, customer satisfaction rating, and complexity);
$D_i$—demand for service $S_i$ (number of requests or users per unit of time);
$TV_i$—base token value assigned to service $S_i$;
$V_t$—unit monetary value of one token (e.g., USD 1 per token);
$w_c$, $w_t$, $w_q$—weights for the relative importance of the cost, time, and quality attributes of service $S_i$;
$M$—platform operational multiplier for overhead or profit margin;
$R_i$—revenue generated from point consumption;
$TV_i^{adj}$—adjusted tokens, which is the final point value for service $S_i$, dynamically calculated based on the demand and availability.
Each service $S_i$ is assigned a base token value based on its attributes, as follows:
$$TV_i = w_c \cdot C_i + w_t \cdot T_i + w_q \cdot Q_i,$$
where
$$w_c + w_t + w_q = 1.$$
Tokens are dynamically adjusted based on the demand and availability, as follows:
$$TV_i^{adj} = TV_i \cdot \frac{D_i}{\bar{D}},$$
where
$\bar{D}$ is the average demand for all services.
If $D_i > \bar{D}$, tokens increase to reflect the higher demand; if $D_i < \bar{D}$, tokens decrease to incentivize usage.
Revenue from a service is calculated as follows:
$$R_i = N_i \cdot TV_i^{adj} \cdot V_t,$$
where
$N_i$ is the number of units of service $S_i$ consumed.
Platform profitability is calculated as follows:
The platform can optimize token values to maximize profit or customer satisfaction, as follows:
which is subject to the following constraints:
The model evaluates each service, $S_i$, based on the parameters of cost, time, and quality.
To define cost, time, and quality for each service, we break each parameter into its components using the following mathematical formulations.
1. Cost, which is the total cost of providing the service, as follows:
$$C_i = C_{dir} + C_{ind} + C_{var},$$
where
$C_{dir}$—direct costs (e.g., labor, materials, and equipment);
$C_{ind}$—indirect costs (e.g., overheads and administrative expenses); and
$C_{var}$—variable costs (e.g., costs that scale with usage).
2. Time, which is the total time required for delivering the service, as follows:
$$T_i = T_{prep} + T_{exec} + T_{sup},$$
where
$T_{prep}$—preparation time (e.g., scheduling and setup);
$T_{exec}$—execution time (e.g., performing the service); and
$T_{sup}$—support time (e.g., follow-up activities).
3. Quality, which is a composite score reflecting customer satisfaction and service performance, as follows:
$$Q_i = q_1 \cdot CS_i + q_2 \cdot RL_i + q_3 \cdot EF_i + q_4 \cdot CX_i,$$
where
$CS_i$ is the customer satisfaction score (e.g., survey ratings and Likert scale);
$RL_i$ is the reliability (e.g., consistency of outcomes and timeliness);
$EF_i$ is the effectiveness (e.g., achievement of desired outcomes);
$CX_i$ is the complexity (e.g., difficulty or resource intensity);
$q_1, \dots, q_4$ are the weights reflecting the importance of each quality component, as follows:
$$q_1 + q_2 + q_3 + q_4 = 1;$$
and
$CX_i$ is assigned based on qualitative factors such as customization or task difficulty, normalized to a 0–1 scale.
Normalization across services ensures that the evaluation and comparison of diverse services on the ATSaaS platform are fair, consistent, and scalable. Given the varying nature of aviation technical support services—ranging from routine maintenance to specialized training—normalization involves standardizing the cost, time, and quality metrics to a common scale. This process enables the calculation of token values that are comparable across different services, regardless of their complexity or operational context. For instance, time durations may be normalized as a proportion of the maximum execution time for any service, while cost components can be expressed as percentages of a defined baseline (e.g., average service cost). Similarly, quality parameters such as satisfaction and reliability are normalized to a 0–1 scale, ensuring uniform representation in the weighted formula. By implementing normalization, the platform maintains transparency, avoids bias in service valuation, and supports dynamic adjustments as new services are introduced or existing ones are refined. This approach not only facilitates fairness but also aligns with the scalability requirements of a token-based system.
To ensure the comparability of cost, time, and quality across different services, the following normalizations were used:
$$C_i^{norm} = \frac{C_i}{C_{max}}, \quad T_i^{norm} = \frac{T_i}{T_{max}}, \quad Q_i^{norm} = \frac{Q_i}{Q_{max}},$$
where
$C_i^{norm}$ is the normalized cost,
$T_i^{norm}$ is the normalized time, and
$Q_i^{norm}$ is the normalized quality.
The total token value, $TV_i$, for a service is calculated as a weighted sum of the three normalized parameters, in accordance with Expression (1), as follows:
$$TV_i = w_c \cdot C_i^{norm} + w_t \cdot T_i^{norm} + w_q \cdot Q_i^{norm},$$
where
$TV_i$ is the total token value for the service, and
$w_c$, $w_t$, $w_q$ are the weights assigned to cost, time, and quality, respectively.
The model allows for adjustments based on an external or dynamic demand factor, $D$:
$$TV_i^{adj} = TV_i \cdot D,$$
where
$D > 1$ corresponds to high demand (increases the token value) and
$D < 1$ corresponds to low demand (reduces the token value).
Feedback adjustment, $\alpha$: the realized quality, $Q_i^{real}$, from the customer feedback can adjust the token value, as follows:
$$TV_i^{new} = TV_i \cdot (1 + \alpha),$$
where
$\alpha > 0$ is a positive feedback adjustment and
$\alpha < 0$ is a negative feedback adjustment.
The adjustment factor, $\alpha$, is derived as follows:
$$\alpha = w_q \cdot \left( Q_i^{real} - Q_i \right),$$
where
$w_q$ is the weight assigned to quality in the token formula. This factor determines the magnitude of the token value adjustment based on the quality deviation.
The model incorporates feedback loops to adjust token values dynamically, as follows:
Periodic reassessment of weights and parameters based on operational data;
Customer satisfaction and realized quality inform recalibrations;
Simulations assess the model’s robustness when introducing new services or handling increased volumes.
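To make this calculation flow concrete, the following Python sketch strings together the normalized weighted sum, the demand ratio, and the quality-feedback adjustment. It is illustrative only: the function names and example numbers are ours, the weights (0.3/0.3/0.4) reuse the initial distribution cited later in Section 3.1.12, and the adjustment formulas follow the expressions reconstructed above.

```python
# Minimal sketch of the baseline token valuation described above; the function
# names, weights, and example numbers are assumptions, not the platform's code.

def base_token_value(cost, time, quality, w_cost, w_time, w_quality,
                     cost_max, time_max, quality_max):
    """Weighted sum of cost, time, and quality, each normalized by its maximum."""
    c_norm = cost / cost_max
    t_norm = time / time_max
    q_norm = quality / quality_max
    return w_cost * c_norm + w_time * t_norm + w_quality * q_norm

def demand_adjustment(token_value, service_demand, average_demand):
    """Scale tokens up when demand exceeds the average, down otherwise."""
    return token_value * (service_demand / average_demand)

def feedback_adjustment(token_value, realized_quality, expected_quality, w_quality):
    """Adjust tokens in proportion to the observed quality deviation."""
    alpha = w_quality * (realized_quality - expected_quality)
    return token_value * (1 + alpha)

# Hypothetical routine-maintenance-like service.
tv = base_token_value(cost=1200, time=20, quality=0.9,
                      w_cost=0.3, w_time=0.3, w_quality=0.4,
                      cost_max=1600, time_max=30, quality_max=1.0)
tv = demand_adjustment(tv, service_demand=12, average_demand=10)
tv = feedback_adjustment(tv, realized_quality=0.95, expected_quality=0.9, w_quality=0.4)
print(round(tv, 3))
```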
3.1.2. Techniques for Calculating Weight Coefficients in Multi-Criteria Decision Making
Determining weight coefficients is a critical step in multi-criteria decision-making processes, as it reflects the relative importance of each criterion or alternative. Common methods for calculating weights include the following:
Direct weight assignment, whereby experts directly assign weights to criteria or alternatives based on their judgment [23];
Ranking and rating methods, whereby criteria are ranked or rated, and the weights are derived based on these ranks or ratings [24];
Pairwise comparison methods, including the analytic hierarchy process (AHP), which uses pairwise comparisons to calculate weights [25];
Entropy-based methods, whereby weights are calculated based on the variability or entropy of data [26];
Regression or optimization models, which are used when data-driven approaches are required [27].
Among these, the AHP method is particularly suited to the situations discussed here, which involve multiple criteria and subjective judgments, because it does the following:
Accommodates both qualitative and quantitative criteria;
Structures the decision problem hierarchically, facilitating clarity in evaluation;
Incorporates subjective expert opinions into a consistent mathematical framework;
Handles both individual and group decision-making scenarios effectively.
Applying the AHP method yields weights that are consistent and transparent, reducing the risk of biased or arbitrary decisions.
Sequence of applying the AHP method.
Case of one expert, as follows:
Define the criteria or alternatives to be evaluated;
Construct a pairwise comparison matrix where each element represents the relative importance of one criterion compared to another;
Normalize the matrix by dividing each element by the sum of its column;
Calculate the priority vector (i.e., weights) by averaging the normalized values across each row;
Check the consistency ratio to ensure the logical consistency of judgments.
Case of multiple experts, as follows:
Each expert independently completes a pairwise comparison matrix;
Aggregate the individual matrices into a single group matrix, typically using the geometric mean;
Normalize the aggregated matrix and compute the priority vector as above;
Check the consistency ratio for the aggregated matrix.
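The procedure above can be expressed compactly in code. The sketch below (Python with NumPy) aggregates several hypothetical expert matrices with the element-wise geometric mean, derives the priority vector by column normalization and row averaging, and computes the consistency ratio using Saaty's random index for a 3 × 3 matrix; the judgment values are invented for illustration and are not the study's data.

```python
import numpy as np

def aggregate_expert_matrices(matrices):
    """Element-wise geometric mean of the experts' pairwise comparison matrices."""
    stacked = np.stack(matrices)
    return np.exp(np.log(stacked).mean(axis=0))

def ahp_weights(matrix):
    """Column-normalize the matrix and average each row to obtain the priority vector."""
    normalized = matrix / matrix.sum(axis=0)
    return normalized.mean(axis=1)

def consistency_ratio(matrix, weights):
    """CR = CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty's random index)."""
    n = matrix.shape[0]
    lambda_max = (matrix @ weights / weights).mean()
    ci = (lambda_max - n) / (n - 1)
    return ci / 0.58

# Hypothetical Saaty-scale judgments from three experts for cost, time, quality.
expert_matrices = [
    np.array([[1, 2, 1/2], [1/2, 1, 1/3], [2, 3, 1]]),
    np.array([[1, 3, 1/2], [1/3, 1, 1/4], [2, 4, 1]]),
    np.array([[1, 2, 1],   [1/2, 1, 1/3], [1, 3, 1]]),
]
group = aggregate_expert_matrices(expert_matrices)
w = ahp_weights(group)                      # priority vector: cost, time, quality
print(w.round(3), round(consistency_ratio(group, w), 3))
```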
3.1.3. Example of Determining Weighting Factors for Service Using AHP
The methodology of the AHP for the TBDCM includes the following main steps:
Define the criteria: the criteria (cost, time, and quality) are evaluated independently for each service;
Pairwise comparisons: each expert performs pairwise comparisons for the three criteria for each service;
Aggregate judgments: the geometric mean is used to combine the judgments of the three experts for each service. For $n$ experts with pairwise comparison values $a_{ij}^{(k)}$ for a specific comparison, where $a_{ij}^{(k)}$ is the pairwise comparison value provided by the $k$-th expert, the aggregated value is as follows:
$$a_{ij}^{agg} = \left( \prod_{k=1}^{n} a_{ij}^{(k)} \right)^{1/n};$$
Normalize: the aggregated pairwise comparison matrix is normalized;
Calculate the weights: the priority vector (i.e., weights) is calculated by averaging the rows of the normalized matrix;
Repeat for each service: the process is repeated separately for routine maintenance (RM) and document processing (DP).
The following figures illustrate the step-by-step application of the AHP method for determining weighting factors for the criteria cost, time, and quality across the following two services: routine maintenance (RM) and document processing (DP).
Figure 2 illustrates the initial individual judgments provided by three experts for each criterion pair (cost vs. time, cost vs. quality, and time vs. quality) for the two services.
The values in this figure represent dimensionless comparison ratios on the Saaty scale (1–9), where 1 indicates equal importance among criteria, 3 indicates moderate importance of one criterion over another, 5 indicates strong importance, 7 indicates very strong importance, 9 indicates extreme importance, and 2, 4, 6, and 8 are intermediate values [25].
Figure 3 visualizes the aggregated pairwise comparisons provided by the experts for each criterion comparison using the geometric mean. These values are also dimensionless ratios derived from the geometric mean calculation of the expert judgments, following the same Saaty scale as in Figure 2. The comparisons include cost vs. time, cost vs. quality, and time vs. quality for both services. It highlights the relative importance assigned by experts in the first step of the AHP process.
Figure 4 displays the aggregated pairwise matrices for the two services after combining expert judgments. The matrix includes all pairwise comparisons among the criteria (cost, time, and quality) and serves as the basis for normalization in the subsequent step. The values shown are dimensionless comparison ratios, with diagonal elements always equal to 1 (representing self-comparison). Off-diagonal elements follow the Saaty scale interpretation.
Figure 5 shows the normalized pairwise comparison values for each matrix element. The normalization process ensures that the sum of each column equals 1, enabling consistent calculation of priority weights. The figure provides insight into the relative contributions of each criterion.
The summary in Figure 6 presents the final weights for cost, time, and quality for both services. It consolidates the results from all steps, providing a clear comparison of the relative importance of each criterion.
For routine maintenance, quality (0.410) has the highest weight, slightly exceeding cost (0.396), reflecting its importance in ensuring long-term effectiveness.
For document processing, cost (0.557) dominates, followed by quality (0.252), indicating a focus on cost-effectiveness with a secondary emphasis on accuracy.
3.1.4. Numerical Case Study
To illustrate the application and versatility of the service-token model, this section presents an expanded example calculation that incorporates multiple MRO-oriented services. The MRO services, being core to aviation technical support, are characterized by their complexity, resource intensity, and critical role in ensuring airworthiness. By applying the TBDCM, we demonstrate how diverse services—such as routine maintenance, component repair, and comprehensive inspections—are evaluated using the standardized parameters of cost, time, and quality.
This expanded example not only highlights the detailed breakdown of each parameter but also showcases how the model accommodates service-specific factors like labor costs, execution durations, and quality metrics such as reliability and complexity. The inclusion of adjustments for dynamic factors, such as demand fluctuations and customer feedback, further emphasizes the adaptability and practicality of the model. By analyzing MRO-oriented services, this example provides a robust demonstration of the model’s capability to handle the intricacies of high-stakes aviation operations while ensuring fairness and transparency in service valuation.
The next example demonstrates the application of the service-token model for evaluating token values across the main MRO-oriented services. Each service is assessed based on the standardized parameters of cost, time, and quality, which are weighted to calculate the final token values.
The initial data for the selected services are presented in Table 1. The values are categorized under the three main parameters, with further breakdowns for specific subcomponents.
The missing data in Table 1 reflect the inherent differences in the nature and operational requirements of the services being evaluated. Not all services involve every type of cost or operational element, and this is accounted for in the service-token model to ensure accuracy and relevance in valuation. For instance, certain services, such as consulting and document processing, may not involve physical materials or complex overheads, which are more relevant for resource-intensive MRO services like routine maintenance or component repair. For example, consulting relies heavily on labor costs (e.g., expert time) and minimal overheads, with no material costs, while document processing primarily involves automation tools with minimal human intervention.
The inclusion or omission of specific cost, time, or quality metrics depends on the operational structure of the service. For instance, metrics like materials are irrelevant for services like consulting or training, which do not involve tangible components, while overheads may not significantly impact low-resource services like document processing. These omissions reflect the simplicity or automation of some services compared to the complexity of others.
Even with missing data, the service-token model ensures fair valuation through its normalization and weighting mechanisms. Normalization allows for the comparison of services only for relevant components, while the weighting system accounts for the absence of certain metrics by redistributing focus to the elements that are most significant for a particular service. For example, in consulting, the absence of material costs means labor and overheads dominate the cost component, while for document processing, the cost of software tools and automation efficiency drive the valuation.
To enable a fair comparison, we normalize all parameters to a 0–1 scale using the maximum values for each parameter across all services.
The maximum values across all services are as follows:
$C_{max}$ = USD 1600;
$T_{max}$ = 30 h;
$Q_{max}$ = 1.0.
The normalized values of the parameters are presented in Table 2.
The allocation of weighting coefficients for cost, time, and quality in the service-token model depends on the specific characteristics and priorities of each service type. Different services place varying levels of importance on these parameters based on their operational demands, safety implications, and resource requirements.
Table 3 provides a proposed weighting scheme for the discussed service types on the basis of expert conclusions.
Using Formula (1) with the weights in Table 3, we obtain the results shown in Table 4.
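As a worked illustration of this calculation flow, the short Python sketch below normalizes each parameter by the maxima given above and applies service-specific weights. The per-service parameter and weight values are placeholders chosen only to show the mechanics; the actual figures are those reported in Tables 1–4.

```python
# Sketch of the case-study calculation: normalize by the maxima across all
# services, then apply service-specific weights. Values are hypothetical.

C_MAX, T_MAX, Q_MAX = 1600.0, 30.0, 1.0   # maxima from Section 3.1.4

services = {
    # name: (cost_usd, time_h, quality, (w_cost, w_time, w_quality))
    "routine_maintenance": (1600, 30, 0.90, (0.30, 0.30, 0.40)),
    "document_processing": (200, 5, 0.70, (0.50, 0.20, 0.30)),
}

for name, (cost, time_h, quality, (wc, wt, wq)) in services.items():
    c_n, t_n, q_n = cost / C_MAX, time_h / T_MAX, quality / Q_MAX
    token_value = wc * c_n + wt * t_n + wq * q_n
    print(f"{name}: TV = {token_value:.3f}")
```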
A general analysis of the results obtained shows the following:
MRO-oriented services
Routine maintenance scores the highest among MRO services because of its significant cost and time requirements, alongside high reliability;
Component repair is moderately valued, balancing resource use and excellent quality;
Inspection services have the lowest point value among MRO services because of their low time and cost requirements, despite strong quality metrics.
Non-MRO services
Consulting has a high point value, reflecting its resource-intensive and high-complexity nature;
Training provides excellent quality at a moderate cost, making it highly efficient;
Document processing, while the most cost-effective, scores lower because of its simpler and less resource-intensive nature. It is important to note, however, that this assessment reflects only the characteristics of the document processing services evaluated during the pilot phase, which involved routine, template-based operations with minimal regulatory variation. In real-world scenarios, the complexity of document processing can vary significantly—particularly in cases involving newer aircraft, first-time maintenance procedures, or jurisdiction-specific compliance documentation. For such services, the quality and complexity components may carry greater weight, and the token value would adjust accordingly. The model is designed to accommodate such variability through its feedback-driven recalibration mechanism and flexible weighting system based on service-specific parameters.
This integrated example highlights the flexibility of the service-token model in evaluating a wide range of services, from high-complexity MRO operations to resource-efficient administrative tasks. The model’s ability to normalize data and assign fair point values ensures transparency, scalability, and adaptability across the ATSaaS platform.
3.1.5. Service Passport in the Service-Token Model
The Service Passport is a central element of the TBDCM, providing a structured digital record that defines, standardizes, and communicates the value of each service offered on a platform. It serves as a comprehensive repository of information, including operational characteristics, resource requirements, performance expectations, and evaluation metrics, ensuring transparency and consistency across diverse services. Within the ATSaaS platform, the Service Passport is essential for documenting and justifying the assigned token values, fostering clarity and fairness for stakeholders while supporting scalability and adaptability.
The Service Passport is designed with a clear structure to encompass all critical components of the service-token model. The structure of the Service Passport is shown in Table 5.
The Service Passport ensures clarity in token assignment by providing a transparent rationale for the calculated value, helping stakeholders understand how cost, time, and quality contribute to the service’s evaluation. It serves as a basis for comparing heterogeneous services, enabling fair evaluation despite differences in scope, complexity, and resource requirements. Furthermore, it supports continuous improvement by integrating performance data and customer feedback, ensuring token values remain reflective of current realities. As a scalable tool, the Service Passport simplifies the integration of new services into the platform through its standardized template, maintaining uniformity across offerings.
Despite its advantages, implementing a Service Passport presents challenges. Collecting accurate and comprehensive data for cost, time, and quality metrics can be resource-intensive, and the dynamic nature of the services requires continuous updates to ensure relevance. Achieving standardization across multiple service providers may also demand stringent guidelines and oversight. However, these challenges are outweighed by the benefits of transparency, fairness, and adaptability.
The Service Passport is poised to evolve with advancements in technology. Automation through AI and machine learning can streamline data collection and real-time updates. Integration with blockchain technology could enhance transparency and immutability, while advanced analytics could predict trends, optimize resource allocation, and refine service valuation. These developments will further strengthen the Service Passport’s role as a cornerstone of the TBDCM, driving innovation and efficiency within the aviation technical support industry. By documenting and justifying service values comprehensively, the Service Passport ensures clarity, fairness, and scalability, making it an indispensable tool for platforms like ATSaaS.
3.1.6. Initial Definition of Token Value for a New Service
Once the Service Passport has been structured to define the essential characteristics of the service—such as its scope, regulatory requirements, and performance expectations—the next step is to assign an initial token value. This process uses the defined parameters as inputs for estimating a fair and adaptive starting point for service valuation within the token-based framework.
Defining the initial token value for a new service within the service-token model is a critical step that requires precision, transparency, and alignment with both platform objectives and stakeholder expectations. The token value reflects the service’s intrinsic worth by integrating its cost, time, and quality components into a single, normalized metric.
Figure 7 illustrates the workflow for defining the initial token value of a new service. It outlines the sequential steps—from parameter decomposition and expert evaluation to simulation and pilot feedback—that guide the structured initialization process. This visual representation helps clarify the model’s practical implementation and highlights its reliance on both expert judgment and adaptive recalibration.
The following steps are recommended for calculating the initial token value.
Step 1. Parameter decomposition. Break down the service into the three core parameters—cost, time, and quality. Each of these is further detailed into subcomponents as defined in Section 3.1.1 (e.g., direct labor, preparation time, customer satisfaction proxies).
Step 2. Analogous estimation. Identify a comparable service from the platform or external reference (e.g., from MRO or consulting databases) that shares operational or structural similarities. Use its normalized values as a starting point for assigning baseline metrics to the new service.
Step 3. Expert evaluation: Employ structured techniques such as the Delphi method or AHP with a panel of domain experts to estimate weights and relative parameter values. This is particularly valuable for assessing quality dimensions like complexity or expected reliability, which are otherwise difficult to quantify at launch.
Step 4. Use of parametric cost models. Apply parametric estimation formulas when available (e.g., cost per hour for inspection personnel, cost per training module, or document complexity coefficients). These models are well-established in aviation project management and logistics literature and help provide a grounded initial cost estimate.
Step 5. Simulation-based sensitivity testing. Before deploying the service on the platform, simulate token calculations under various parameter combinations to evaluate sensitivity and detect potential valuation anomalies. This also allows testing how demand or user feedback might influence recalibration in early iterations.
Step 6. Provisional token assignment and monitoring. Assign an initial token value based on the above estimates and launch the service under pilot conditions. Collect feedback from users during the first service cycles to measure actual performance (e.g., delivery time, perceived quality, cost deviation). Use these data to perform the first round of token value recalibration.
This hybrid approach ensures that the initial token value reflects domain-specific knowledge, analogous service benchmarks, and established cost-estimation principles. It also aligns with best practices found in software sizing (e.g., function point analysis), manufacturing (e.g., parametric cost modeling), and service pricing strategies discussed in prior works. By grounding the initial valuation in systematic estimation and expert input, the model avoids arbitrary assumptions and supports early-stage accuracy until sufficient empirical data become available.
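Step 5 above lends itself to a very small simulation. The Python sketch below samples a new service's parameters within plausible ranges (those used for the routine maintenance scenario elsewhere in this section) and reports the spread of resulting token values before pilot deployment; the weights, maxima, and ranges are assumptions for illustration only.

```python
import random

# Minimal sketch of simulation-based sensitivity testing (Step 5): sample the
# new service's parameters and observe the spread of resulting token values.

random.seed(42)

def token_value(cost, time_h, quality, maxima=(1600.0, 30.0, 1.0),
                weights=(0.3, 0.3, 0.4)):
    c_max, t_max, q_max = maxima
    wc, wt, wq = weights
    return wc * cost / c_max + wt * time_h / t_max + wq * quality / q_max

samples = [
    token_value(cost=random.uniform(500, 1500),
                time_h=random.uniform(5, 20),
                quality=random.uniform(0.6, 1.0))
    for _ in range(1000)
]
print(f"min={min(samples):.3f} max={max(samples):.3f} "
      f"mean={sum(samples) / len(samples):.3f}")
```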
3.1.7. Correction of Token Value Based on User Feedback About Quality
The correction of token values based on user feedback is a crucial process in the service-token model, ensuring that the assigned token value accurately reflects a service’s real-world performance. The quality parameter, which includes metrics such as customer satisfaction, reliability, effectiveness, and complexity, is inherently dynamic and subject to change as services are delivered and users provide feedback. Incorporating this feedback allows the platform to adapt token values to evolving service performance and user expectations.
Figure 8 illustrates the step-by-step workflow for correcting token values based on user feedback.
The process begins with the collection of user feedback through various mechanisms, such as surveys, rating systems, and qualitative reviews. Users are prompted to evaluate the service across specific quality dimensions, including satisfaction with the outcome, timeliness, and overall reliability. These data form the foundation for assessing realized service quality.
Once collected, the feedback is aggregated to create a composite quality score for the service, which represents the realized quality based on user experiences. This step minimizes the influence of outliers and biases by averaging data across multiple users. Aggregation ensures that the quality assessment is robust and reflective of the broader user base.
The realized quality score is compared to the initially assumed quality value used in the original token calculation. The difference quantifies whether the service has exceeded expectations or underperformed. This comparison highlights the need for any token value adjustments.
Before making adjustments, the platform investigates the causes of any quality discrepancies, for example, as follows:
A negative deviation may indicate service delivery issues, such as delays or inconsistent outcomes;
A positive deviation may reflect unanticipated excellence in service execution. This step ensures that adjustments are informed by the root causes of quality deviations rather than surface-level metrics.
If necessary, at step 5 the quality parameter is adjusted to reflect the updated score. The adjustment factor is derived in accordance with Formula (4). Using the adjusted quality parameter, the new token value is recalculated at step 6 in accordance with Expression (3).
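A compact sketch of steps 2, 5, and 6 is given below: user ratings are averaged into a realized quality score, the adjustment factor is derived from the quality deviation, and the token value is recalculated. The adjustment formula mirrors the reconstruction in Section 3.1.1, and the numeric inputs are hypothetical.

```python
from statistics import mean

def realized_quality(ratings_0_to_1):
    """Composite realized quality: average across users to damp outliers."""
    return mean(ratings_0_to_1)

def corrected_token_value(current_tv, expected_q, ratings, w_quality):
    """Derive the adjustment factor from the quality deviation and recalculate."""
    q_real = realized_quality(ratings)
    alpha = w_quality * (q_real - expected_q)   # adjustment factor
    return current_tv * (1 + alpha), q_real

tv_new, q_real = corrected_token_value(
    current_tv=0.62, expected_q=0.85,
    ratings=[0.9, 0.8, 0.75, 0.95, 0.7],        # hypothetical survey scores
    w_quality=0.4,
)
print(round(q_real, 3), round(tv_new, 3))
```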
The final step involves validating the recalculated token value with stakeholders, including service providers and users, to ensure alignment with expectations and feedback. The rationale for the adjustment is documented in the Service Passport, and the updated token value is communicated transparently to all stakeholders. This builds trust in the platform’s responsiveness and fairness.
Correcting token values based on user feedback about quality is essential to maintaining the TBDCM’s integrity and adaptability. By systematically incorporating realized quality metrics into token calculations, the platform ensures that token values accurately reflect real-world service performance.
3.1.8. Service-Token Model Validation
The validation of the service-token model is essential to ensure its effectiveness, reliability, and applicability in real-world scenarios. This process assesses the model’s ability to assign accurate, fair, and dynamic token values to services on the ATSaaS platform. Validation focuses on the following three critical aspects: the accuracy of token values, responsiveness to feedback, and scalability across diverse services.
To evaluate the model’s accuracy, a series of simulations were conducted using real and hypothetical service data. The simulation included 50 iterative cycles, each representing a single feedback loop where service performance data were used to adjust token values and weighting coefficients. These simulations involved generating cost, time, and quality parameters for various services, such as routine maintenance, component repair, consulting, and document processing. Initial token values were calculated using the model’s weighted formula, integrating these parameters. The calculated values were then compared against industry standards, expert evaluations, and user expectations. This process demonstrated the model’s ability to generate token values that align with the intrinsic characteristics of services, ensuring fairness and consistency across different service types.
To illustrate the practical application of these validation results, consider the routine maintenance service scenario. In this use case, the initial weight distribution (cost: 0.2, time: 0.3, and quality: 0.5) reflected traditional service valuation priorities where quality held a slight premium over operational factors.
The validation results for this routine maintenance scenario demonstrated the model’s effectiveness. Starting with service costs ranging between USD 500 and USD 1500, execution times varying from 5 to 20 h, and quality scores between 0.6 and 1.0, the model successfully adapted its weight distribution while maintaining coherent service valuations.
Figure 9 illustrates the weight evolution during the validation process over 50 iterations, showing the dynamic adjustment of component weights from their initial values to their final target distribution. This transition demonstrates the model’s capability to systematically adjust service evaluation parameters based on operational requirements and performance feedback.
Operational analysis revealed that a modified weight distribution (cost: 0.2, time and quality: about 0.4) would better align with service efficiency and customer satisfaction objectives. The weight evolution graph reveals several key characteristics of the model’s adaptation process, as follows:
Smooth transition from initial to target weights, indicating stable adjustment mechanisms;
Maintenance of the unity sum constraint throughout the adaptation process;
Consistent convergence behavior across all three parameters;
Appropriate response to the specified target distribution while avoiding oscillatory behavior.
Figure 10 presents the token value trend throughout the validation period, showing a controlled decrease from an initial value of approximately 900 to a final stable value around 600. This systematic reduction in token value reflects the model’s ability to respond to changing weight distributions while maintaining predictable valuation behavior.
The initial token value of approximately 900 reflected the higher initial cost weighting, while the final stabilized value of around 600 better represented the optimized weight distribution that emphasized time and quality factors. The token value evolution exhibits several notable characteristics, as follows:
Gradual, controlled transition between initial and final values;
Maintained stability with minor variations reflecting real-world fluctuations;
Consistent convergence pattern aligned with weight adjustments;
Absence of dramatic fluctuations that could destabilize service pricing.
Figure 9 and Figure 10 visualize the evolution of weight coefficients and token values across iterations. Each point on the graph corresponds to a discrete simulation step, and the connecting lines illustrate the continuous adjustment trend over time. The values between points are meaningful, as the model’s recalibration logic is mathematically continuous—allowing interpolated states between discrete updates. This helps to demonstrate convergence behavior and system stability.
The purpose of this visualization is not to reflect real-time empirical data, but to test the model’s robustness and sensitivity under controlled dynamic conditions. The trends confirm that the model reaches stable valuations and weight distributions over time, validating its ability to adapt to user feedback without introducing volatility.
The validation results demonstrate that the service-token model achieves both mathematical consistency and practical applicability.
Table 6 summarizes the key quantitative findings from our validation process.
The validation confirmed that the model successfully integrates cost, time, and quality parameters into a unified valuation framework. The mean absolute percentage error (MAPE) between token-calculated values and expert consensus valuations was 12.4%, below our acceptable threshold of 15%. This indicates that the model generates token values that align well with expert judgment.
The convergence patterns illustrated in Figure 9 and Figure 10 show that the weight coefficients and token values stabilize within approximately 27 iterations. Stability was defined as achieving a coefficient of variation (CV) below 3% in the final 10 iterations, indicating that the model reaches equilibrium efficiently without requiring excessive recalibration cycles.
Statistical analysis of user feedback revealed a strong positive correlation (r = 0.76) between service quality ratings and corresponding quality parameter weights, confirming that the feedback mechanism effectively captures user perceptions and translates them into appropriate token value adjustments. The consistency ratio in the AHP was 0.067, which is well below the 0.1 threshold, indicating the logical coherence in expert judgments used for initial weight determination.
A sensitivity analysis was conducted by systematically varying the input parameters (cost: ±20%, time: ±30%, and quality: ±25% from baseline values) to assess model robustness. The results show that the token values remained within acceptable bounds under these variations, with the quality parameter changes having the most significant impact. This finding confirms the importance of reliable quality metrics and justifies our emphasis on comprehensive feedback collection.
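For readers who wish to reproduce these checks on their own data, the following sketch shows how the two headline statistics, MAPE against expert valuations and the CV-based stability criterion, can be computed; the arrays below are stand-in data rather than the study's measurements.

```python
import numpy as np

def mape(model_values, expert_values):
    """Mean absolute percentage error between model and expert valuations."""
    model_values, expert_values = np.asarray(model_values), np.asarray(expert_values)
    return float(np.mean(np.abs((model_values - expert_values) / expert_values)) * 100)

def coefficient_of_variation(series):
    """CV (%) of a series, used as the stability criterion over late iterations."""
    series = np.asarray(series)
    return float(series.std(ddof=1) / series.mean() * 100)

model = [0.62, 0.48, 0.55, 0.71]          # stand-in token valuations
expert = [0.58, 0.52, 0.50, 0.75]         # stand-in expert consensus valuations
print(f"MAPE = {mape(model, expert):.1f}%")                 # acceptance threshold: 15%

last_10_tokens = np.random.default_rng(0).normal(600, 10, size=10)
print(f"CV (final 10 iterations) = {coefficient_of_variation(last_10_tokens):.2f}%")  # threshold: 3%
```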
The model was further tested through scaled simulation involving 100 hypothetical service profiles with randomized but plausible parameter ranges. This simulation confirmed the model’s ability to maintain computational efficiency and logical token valuations even when handling diverse service types simultaneously, with an average processing time of 1.2 s per service profile on standard computing infrastructure.
These validation results collectively demonstrate that the service-token model provides a reliable, efficient, and mathematically sound framework for service valuation within the ATSaaS platform. The model successfully balances accuracy, stability, and responsiveness to feedback, making it suitable for real-world implementation.
3.1.9. Baseline Token Value Determination Algorithm
The baseline token valuation method provides a systematic approach to determining initial token values for services within the ATSaaS platform. The algorithm integrates cost, time, and quality parameters through a structured evaluation process that ensures consistency and fairness in service valuation (Figure 11).
The algorithm accepts as input a service description containing operational parameters, historical cost data, time estimates, and quality metrics. Additional inputs include regulatory compliance requirements and service-specific constraints. The algorithm produces a baseline token value and a comprehensive Service Passport documenting all valuation parameters and decisions.
The process begins with parameter decomposition, breaking down the three primary components into their constituent elements.
A normalization phase follows to ensure comparability across different services. Each parameter is normalized against the maximum value observed across all services.
The algorithm then applies service-specific weights to these normalized parameters. Weight determination follows the AHP methodology, incorporating expert judgments and service characteristics.
This base value undergoes two adjustment phases. First, a demand adjustment D modifies the token value based on market conditions.
Subsequently, a feedback adjustment is applied based on the initial service performance and user feedback.
The final output includes both the adjusted token value and a Service Passport.
The algorithm maintains an audit trail of all calculations and decisions, enabling transparency and facilitating future adjustments. The Service Passport serves as a comprehensive record of the valuation process and provides a baseline for future token value optimizations.
This baseline method ensures a systematic and transparent approach to token valuation while maintaining flexibility through its adjustment mechanisms and provides a stable foundation for more advanced optimization techniques.
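The end-to-end flow described in this subsection can be summarized in a few lines of Python. The sketch below is a structural illustration, not the platform's implementation: the ServicePassport fields, parameter values, and adjustment factors are assumptions, and the audit trail is kept as a simple list.

```python
from dataclasses import dataclass, field

@dataclass
class ServicePassport:
    """Minimal stand-in for the Service Passport record (fields are assumed)."""
    name: str
    parameters: dict
    weights: dict
    baseline_tv: float = 0.0
    adjusted_tv: float = 0.0
    audit_trail: list = field(default_factory=list)

def baseline_token_value(passport, maxima, demand_factor, feedback_factor):
    """Decompose, normalize, weight, then apply demand and feedback adjustments."""
    p, w = passport.parameters, passport.weights
    normalized = {k: p[k] / maxima[k] for k in ("cost", "time", "quality")}
    passport.baseline_tv = sum(w[k] * normalized[k] for k in normalized)
    passport.audit_trail.append(("normalized", normalized))

    tv = passport.baseline_tv * demand_factor        # demand adjustment D
    tv *= (1 + feedback_factor)                      # feedback adjustment
    passport.adjusted_tv = tv
    passport.audit_trail.append(("adjustments", {"D": demand_factor,
                                                 "feedback": feedback_factor}))
    return passport

passport = ServicePassport(
    name="component_repair",
    parameters={"cost": 900.0, "time": 12.0, "quality": 0.85},
    weights={"cost": 0.3, "time": 0.3, "quality": 0.4},
)
result = baseline_token_value(passport,
                              maxima={"cost": 1600, "time": 30, "quality": 1.0},
                              demand_factor=1.05, feedback_factor=0.02)
print(round(result.adjusted_tv, 3), result.audit_trail)
```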
3.1.10. Adaptive Weighting Mechanism
A core enhancement of the TBDCM is an adaptive weighting mechanism that dynamically adjusts the importance of cost, time, and quality based on service performance and customer feedback. Based on Animasaun et al. [28], Animasaun et al. [29], and Wang et al. [30], this mechanism relies on statistical analysis, machine learning techniques, and optimization models to recalibrate weight coefficients in response to the following:
Identifying patterns in past transactions and adjusting token valuations accordingly;
If multiple users rate a service’s quality as lower than expected, the model dynamically increases the weight of the quality component, ensuring a more accurate reflection of service value;
If service providers consistently complete tasks faster than expected, the time parameter weight is reduced, making tokens more responsive to real-world execution times.
Mathematically, the adaptive weights $w_c$, $w_t$, and $w_q$ for cost, time, and quality, respectively, are adjusted using the following iterative update rule:
$$w_p^{(k+1)} = w_p^{(k)} + \eta \cdot \left( P_{real} - P_{exp} \right), \quad p \in \{c, t, q\},$$
where
$w_p^{(k)}$ is the weight of parameter $p$ at iteration $k$,
$\eta$ is the learning rate (a small constant to control the rate of adaptation),
$P_{real}$ is the realized service performance, and
$P_{exp}$ is the expected service performance.
The weights are normalized as follows: $w_c^{(k+1)} + w_t^{(k+1)} + w_q^{(k+1)} = 1$.
This iterative approach allows for smooth adaptation to changing operational conditions while preventing drastic fluctuations in token values.
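A minimal sketch of this update rule, assuming per-parameter realized and expected performance scores on a 0–1 scale, is given below; the learning rate and the stand-in performance values are illustrative assumptions.

```python
def update_weights(weights, realized, expected, learning_rate=0.05):
    """Nudge each weight by the performance deviation, then renormalize to sum to 1.
    weights, realized, expected are dicts keyed by 'cost', 'time', 'quality'."""
    updated = {
        p: max(weights[p] + learning_rate * (realized[p] - expected[p]), 1e-6)
        for p in weights
    }
    total = sum(updated.values())
    return {p: v / total for p, v in updated.items()}    # unity-sum constraint

w = {"cost": 0.3, "time": 0.3, "quality": 0.4}
for _ in range(50):                                       # 50 feedback iterations
    w = update_weights(
        w,
        realized={"cost": 0.75, "time": 0.55, "quality": 0.80},   # stand-in data
        expected={"cost": 0.70, "time": 0.60, "quality": 0.85},
    )
print({p: round(v, 3) for p, v in w.items()})
```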
3.1.11. Machine Learning for Token Optimization
A further improvement is the integration of ML models to predict token values based on historical performance data and real-time service attributes. The model is trained using features such as past token values, service completion time, quality scores from user feedback, and market demand trends and external factors such as service availability and seasonal variations.
The ML model learns complex relationships between these factors and optimizes token values to maintain fairness and efficiency. Reinforcement learning (RL) techniques can further refine this process by continuously experimenting with token adjustments and learning which pricing strategies yield optimal customer satisfaction and provider efficiency.
The RL-based model continuously updates token values based on real-world service interactions. Using a reward function that balances provider profitability and customer satisfaction, RL models adjust token values to optimize service allocation. The reward function $R$ is defined as follows:
$$R = \lambda_1 \cdot \left( S - S_{target} \right) + \lambda_2 \cdot \left( P - P_{target} \right),$$
where
$S$ is the user satisfaction score,
$S_{target}$ is the desired satisfaction score threshold,
$P$ is the provider profitability,
$P_{target}$ is the target profitability level, and
$\lambda_1$, $\lambda_2$ are the weights assigned to user satisfaction and provider profitability.
Through continuous learning, the RL model refines token valuations, ensuring long-term equilibrium between service affordability, fairness, and business sustainability.
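The sketch below is a deliberately simplified stand-in for the RL loop: it scores candidate token adjustments with a reward of the form above and keeps the best one. A real deployment would use a proper RL algorithm; the reward weights, targets, and the toy market-response model are purely hypothetical.

```python
def reward(satisfaction, profit, s_target=0.85, p_target=0.15,
           lambda_s=0.6, lambda_p=0.4):
    """Reward balancing user satisfaction and provider profitability."""
    return lambda_s * (satisfaction - s_target) + lambda_p * (profit - p_target)

def simulate_market(token_value):
    """Toy response model: higher tokens raise profit but lower satisfaction."""
    satisfaction = max(0.0, 1.0 - 0.5 * token_value ** 2)
    profit = 0.3 * token_value
    return satisfaction, profit

token = 0.60
for _ in range(20):
    # Greedy one-step policy: try small adjustments and keep the best reward.
    candidates = [token - 0.02, token, token + 0.02]
    token = max(candidates, key=lambda tv: reward(*simulate_market(tv)))
print(round(token, 2))
```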
3.1.12. Use Case Illustration
To validate the effectiveness of the adaptive token model, we conducted a simulation of 50 service iterations. The service used in this case study was routine maintenance within the ATSaaS platform.
The reasoning for this service selection is as follows:
Routine maintenance involves labor, materials, and operational overhead, making it a suitable candidate for token-based valuation adjustments;
The execution time for routine maintenance can fluctuate based on aircraft type, maintenance complexity, and technician availability, making dynamic weight adjustment relevant;
Maintenance services are regularly evaluated based on effectiveness, compliance with safety standards, and customer feedback, aligning well with the adaptive weighting mechanism used in the model.
Thus, the routine maintenance service was used in this simulation to showcase how token values are dynamically adjusted in response to historical service data, real-time feedback, and operational performance.
Initially, cost, time, and quality weights were set at 30%, 30%, and 40%, respectively. The routine maintenance service costs ranged between USD 500 and USD 1500, the execution time varied between 5 and 20 h, and the service quality scores ranged from 0.6 to 1.0.
Over multiple iterations, the adaptive weighting mechanism adjusted the importance of cost, time, and quality based on real-time service performance feedback.
To illustrate the impact of the adaptive weighting mechanism on the token value calculation process, we present visualizations of how the weights for cost, time, and quality evolve over multiple service iterations. This provides insight into how the model dynamically rebalances the relative importance of these parameters based on real-time service performance feedback.
Figure 12 demonstrates how the weights for cost, time, and quality are adjusted iteratively as new data are incorporated.
The initial weights were set at 30% for cost, 30% for time, and 40% for quality. Over the iterations, the adaptive model modifies these weights to reflect real-time service performance and customer feedback trends.
Figure 13 showcases how token values fluctuate over multiple service iterations in response to real-time adjustments in cost, time, and quality weights.
The graph highlights periods of increased and decreased token valuations, indicating how the self-learning mechanism ensures pricing fairness and efficiency based on evolving service performance.
The simulation demonstrated that when service costs exceeded expected values, the cost parameter weight increased to reflect its greater impact on token valuation. Conversely, when quality improved significantly, its weight was dynamically reduced to balance overall fairness.
The token values fluctuated accordingly, showing an increasing trend when cost and time exceeded expected benchmarks and a stabilizing effect when quality met or exceeded its expected threshold. This dynamic adaptation ensured that service valuation remained responsive to actual performance, enhancing fairness and efficiency.
The results also confirmed that reinforcement learning-based token adjustments successfully optimized pricing, ensuring providers received appropriate compensation while maintaining user satisfaction. The model’s ability to self-adjust and normalize over time suggests its robustness in managing diverse aviation technical support services efficiently.
This use case illustrates the power of machine learning-enhanced token valuation in ensuring dynamic, fair, and efficient service transactions, further strengthening the TBDCM as a scalable and transparent framework for aviation technical support platforms.
3.1.13. Dynamic Token Value Optimization Algorithm
The advanced token value optimization algorithm provides a comprehensive framework for dynamically adjusting token values based on multiple data sources and learning mechanisms (Figure 14).
The algorithm takes as input an initial token value, historical service data, real-time performance metrics, and user feedback data. Additional parameters include a learning rate and performance thresholds for user satisfaction and provider profitability.
The initialization phase establishes the foundational elements of the optimization process. Initial weights are set for cost, time, and quality parameters. The algorithm initializes both a machine learning model for prediction and a reinforcement learning policy for optimization. A convergence flag and iteration counter are established to control the optimization loop.
The main optimization process operates iteratively until either the convergence criteria are met or a maximum iteration limit is reached. Each iteration consists of the following three major components working in concert: an adaptive weight mechanism, a machine learning optimization phase, and a reinforcement learning component.
The adaptive weight mechanism begins by analyzing historical performance data, extracting relevant metrics such as service completion times, quality scores, and cost efficiency measures. Weight updates follow a gradient-based approach.
The machine learning optimization phase processes the collected data through feature extraction, creating a structured representation from the historical, real-time, and feedback data. The ML model is updated through training on this feature set and the current token values. This model then generates token value predictions based on the current state of the system.
The reinforcement learning component evaluates the system’s performance through a reward function. This reward function guides policy updates, which in turn determine adjustments to the predicted token values, resulting in an optimized value.
Convergence checking ensures the stability and effectiveness of the optimization process. The algorithm considers convergence achieved when the change in token value falls below a threshold and both user satisfaction and provider profitability meet or exceed their respective thresholds. This multi-criteria convergence check ensures that the optimization process achieves both stability and stakeholder satisfaction.
Upon successful convergence, the algorithm outputs the optimized token value and the final weight coefficients. Post-processing steps include updating the Service Passport with the new token value, logging optimization metrics for future reference, and initializing monitoring systems to track the performance of the new token value in operation.
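To make the control flow concrete, the sketch below mirrors the loop just described: adaptive weight updates, a prediction step, a reward-guided adjustment, and a multi-criteria convergence check. The "ML model" is a trivial exponential-smoothing stand-in and the "RL policy" is a one-step reward nudge; all thresholds, data, and numeric choices are hypothetical.

```python
def optimize_token_value(initial_tv, history, max_iter=100,
                         learning_rate=0.05, delta_threshold=0.005,
                         sat_threshold=0.85, profit_threshold=0.15):
    """Structural sketch of the optimization loop in Figure 14 (placeholder models)."""
    weights = {"cost": 0.3, "time": 0.3, "quality": 0.4}
    tv = initial_tv
    for iteration in range(max_iter):
        record = history[iteration % len(history)]

        # 1. Adaptive weight mechanism: gradient-style update, then renormalize.
        weights = {k: max(weights[k] + learning_rate *
                          (record[k] - record["expected"][k]), 1e-6)
                   for k in ("cost", "time", "quality")}
        total = sum(weights.values())
        weights = {k: v / total for k, v in weights.items()}

        # 2. "ML" prediction: exponential smoothing toward the weighted score.
        weighted_score = sum(weights[k] * record[k] for k in weights)
        predicted_tv = 0.8 * tv + 0.2 * weighted_score

        # 3. "RL" adjustment: nudge the prediction toward higher reward.
        satisfaction, profit = record["satisfaction"], record["profit"]
        reward = 0.6 * (satisfaction - sat_threshold) + 0.4 * (profit - profit_threshold)
        new_tv = predicted_tv * (1 + 0.05 * reward)

        # 4. Convergence: small change plus both stakeholder thresholds met.
        if (abs(new_tv - tv) < delta_threshold
                and satisfaction >= sat_threshold and profit >= profit_threshold):
            return new_tv, weights, iteration
        tv = new_tv
    return tv, weights, max_iter

history = [{"cost": 0.7, "time": 0.5, "quality": 0.9,
            "expected": {"cost": 0.65, "time": 0.55, "quality": 0.85},
            "satisfaction": 0.88, "profit": 0.18}]
print(optimize_token_value(0.6, history))
```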
The algorithm’s design ensures robust handling of various service types through its adaptive nature and multiple feedback mechanisms.
This advanced optimization approach significantly enhances the baseline token valuation method by incorporating dynamic learning and adaptation capabilities.