Article

Token-Based Digital Currency Model for Aviation Technical Support as a Service Platforms

by Igor Kabashkin 1,*, Vladimir Perekrestov 2 and Maksim Pivovar 2

1 Engineering Faculty, Transport and Telecommunication Institute, Lauvas iela 2, LV-1019 Riga, Latvia
2 Sky Net Technics, Business Center 03, Ras Al-Khaimah B04-223, United Arab Emirates
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1297; https://doi.org/10.3390/math13081297
Submission received: 16 February 2025 / Revised: 2 April 2025 / Accepted: 14 April 2025 / Published: 15 April 2025

Abstract: This paper introduces a token-based digital currency (TBDC) model for standardizing service delivery in an aviation technical support as a service (ATSaaS) platform. The model addresses the challenges of service standardization and valuation by integrating cost, time, and quality parameters into a unified framework. Unlike traditional cryptocurrencies, this specialized digital currency incorporates intrinsic service valuation mechanisms that dynamically reflect the worth of aviation technical support services. The research presents a mathematical formulation for token value calculation, including a Service Passport framework for comprehensive documentation and a systematic approach for service integration. The model is validated through a numerical case study focusing on maintenance, repair, and overhaul services, demonstrating its effectiveness in generating fair token values across diverse service types. The study introduces optimization techniques using machine learning to enhance token calculations, successfully standardizing heterogeneous services while maintaining flexibility and transparency. Implementation challenges and future developments are identified. The TBDC model provides a foundation for transforming aviation technical support services, particularly benefiting small airlines through improved efficiency, standardization, and accessibility.

1. Introduction

1.1. Background and Motivation

The aviation industry is undergoing a transformative shift, with increasing reliance on digital platforms to deliver technical support, maintenance, and operational services efficiently. The concept of an Aviation Technical Support as a Service (ATSaaS) platform is a response to these evolving demands, offering a scalable and integrated solution for addressing the diverse needs of airlines, maintenance providers, and other stakeholders in the aviation ecosystem [1]. However, the implementation of such a platform presents unique challenges, particularly in the standardization and accessibility of services.
A key issue lies in the variability in aviation technical support services. These services range from routine maintenance and component repair to document processing, inspection services, and training. Each offering differs significantly in the complexity, cost, and value delivered, making it difficult to establish a unified approach for pricing, evaluating, and consuming these services. This complexity can create barriers for users in resource allocation and for providers in pricing standardization.
To address these challenges, this study proposes the introduction of a service payment currency within the ATSaaS platform. The token-based digital currency model (TBDCM) represents service transactions through predefined digital units, enabling transparent and standardized exchanges. Token values are assigned based on metrics such as cost, time, and service complexity, ensuring an objective and structured pricing framework.
The TBDCM enhances service accessibility and flexibility by creating a standardized pricing mechanism that adapts to evolving aviation needs. Tokens act as a universal currency, simplifying transactions, enabling service bundling, and fostering loyalty through reward programs. For example, customers can earn bonus tokens for repeat usage or redeem tokens across a range of services, creating a seamless and engaging user experience.
Additionally, this model supports scalability and innovation. As new services are introduced, their token values can be dynamically calculated using the same standardized framework, ensuring their integration into the platform without disrupting existing processes. The model dynamically adjusts token valuation in response to fluctuating service demands and operational constraints.
The TBDCM facilitates service standardization by linking token valuation to aviation compliance metrics and industry benchmarks. This approach not only optimizes the delivery of existing services but also lays the groundwork for sustainable growth and innovation in the aviation industry.

1.2. Related Works

The concept of a service-token model has garnered significant attention in recent years, particularly with the rise of blockchain technology and decentralized finance.
A study by Park and Youm [2] proposed a novel service model for investment in tokenized assets and trading in blockchain-based security tokens. The authors identified potential security threats and specified requirements to counter these threats, emphasizing privacy protection and anti-money-laundering measures. The proposed model facilitates user investment in tokenized tangible and intangible assets, addressing challenges in existing investment service models.
Reference [3] discusses a Token-as-a-Service (TaaS) framework for software-defined zero-trust networking. The study proposed a genetic-algorithm-based service optimization that generates unique tokens to maintain a trusted zone in multi-tenant environments. This approach reduces authentication and authorization loads in cloud servers by distributing databases across OpenFlow switches, enhancing security in complex network infrastructures.
Reference [4] presents a mechanism for secure service session management using blockchain capabilities. The study explores NFTs as digital proof of policy agreements for secure and immutable service-consumption tracking. This integration of blockchain technology into zero-trust networking provides a decentralized and secure method for managing service sessions.
Reference [5] examines Steemit’s blockchain-based incentive model, highlighting its token-driven reward system. The study proposed a process for building a desirable token economy model, emphasizing the importance of incentive design in achieving sustainable growth. The authors highlight how a well-structured token economy could program human behavior through incentives, contributing to the platform’s value creation.
Reference [6] explores the emergent start-up token funding model, known as an initial coin offering (ICO). An ICO enables startups to raise capital by issuing digital tokens, creating an alternative to traditional venture funding. The study examined the implications of this funding mechanism, shedding light on the evolving landscape of token-based business models and their potential to disrupt conventional financial systems.
In [7], the concept of conditional tokens (CTs) in supply-chain finance is introduced. Conditional tokens (CTs) encode service requirements into smart contracts, automating compliance and execution in financial transactions. The study proposed functions for CT operations, aiming to enhance transparency and efficiency in supply-chain financial transactions through programmable tokens.
Reference [8] explores volatility mitigation in digital assets by designing self-sustaining token economies with embedded stability mechanisms. The authors propose a formalized approach to designing self-sustaining token models within blockchain platforms, aiming to enhance business organizations’ interest in blockchain applications by mitigating concerns over token value instability.
Reference [9] introduces a systematic method for formulating service specifications in service-based applications (SBAs), enabled by blockchain technology. The study integrates specification patterns with blockchain to create a quality-of-service framework that supports service selection and workflow composition, enhancing the reliability and efficiency of SBAs.
Reference [10] presents a generalized understanding of blockchain technology using adapted stochastic processes. Focusing on financial instruments, the study introduces a valuation model that stabilizes token economics through stochastic analysis.
The research agenda in [11] evaluates blockchain’s role in service management, emphasizing its transformative impact on efficiency and security. The study discusses how blockchain can transform service interactions, emphasizing the need for further investigation into its implications for service design, delivery, and quality assurance.
The service-token model aligns with industry standards set by the European Union Aviation Safety Agency (EASA), Federal Aviation Administration (FAA), and International Air Transport Association (IATA) to ensure regulatory compliance and operational efficiency. These organizations establish industry benchmarks crucial for evaluating aviation technical support services. Below is a detailed review of their contributions and relevance to the model, with direct links to key documents.
The EASA is the regulatory authority responsible for ensuring civil aviation safety in Europe. Its comprehensive framework includes maintenance standards, personnel qualifications, and compliance mechanisms that shape standardized service delivery in aviation support. The key guidelines and standards that have relevance to the token model are as follows:
  • Part-M [12] and Part-145 [13] define maintenance standards, ensuring compliance in aviation technical support;
  • Part-66 [14] highlights personnel qualifications, influencing cost and complexity components;
  • Guidance material and acceptable means of compliance [15] ensure transparency in defining service-level agreements and quality expectations.
The FAA governs civil aviation in the United States, offering complementary standards to EASA’s framework. Its guidelines emphasize safety, quality, and operational reliability. The key guidelines and standards that have relevance to the token model are as follows:
  • 14 CFR Part 43 [16] contributes to defining reliability metrics;
  • Advisory Circulars [17] and the Continuous Airworthiness Maintenance Program [18] offer best practices for quality assurance and risk management;
  • The Continuous Airworthiness Maintenance Program [18] ensures ongoing compliance with safety requirements for commercial operators;
  • The Repair Station Regulations (14 CFR Part 145) [19] ensure standardization in the evaluation of maintenance services, affecting both cost and complexity.
The IATA provides a global perspective on aviation operations, focusing on safety, efficiency, and digital transformation. Its resources are essential for aligning the token model with international benchmarks. The key guidelines and standards that have relevance to the token model are the following:
  • The IATA Operational Safety Audit (IOSA) [20] is a globally recognized certification program for evaluating operational management and control systems;
  • The Airline Operational Cost Management Guidelines [21] provide benchmarking data for direct operating costs, including maintenance expenses;
  • The Guidance on Digital Transformation [22] encourages adopting digital platforms to enhance operational efficiency and customer satisfaction.
The reviewed works demonstrate a growing interest in token-based models across various domains, including investment services, network security, decentralized platforms, and supply-chain finance. These studies highlight the versatility of tokens in representing value, enforcing policies, and incentivizing behaviors within digital ecosystems.

1.3. Research Gap and the Paper’s Contributions and Structure

Despite the growing interest in token-based service models across various industries, existing studies often focus on financial systems, blockchain applications, or specific niche services without addressing the complex requirements of aviation technical support. Prior research predominantly emphasizes theoretical frameworks or isolated use cases, lacking a unified and adaptable model suitable for diverse service types in aviation. Additionally, current models fail to comprehensively integrate cost, time, and quality parameters into a standardized evaluation framework that can dynamically adjust to user feedback. This gap hinders the adoption of scalable and customer-centric solutions in ATSaaS platforms, particularly for small airlines and service providers with limited resources.
This paper addresses these gaps by introducing a service-token model tailored for ATSaaS platforms. The model offers several key contributions, as follows:
  • A mathematical model that standardizes service valuation by integrating cost, time, and quality metrics into a single token-based system;
  • A feedback mechanism that incorporates real-time user input to recalibrate token values, ensuring alignment with service quality and customer satisfaction;
  • A structured digital documentation system (the Service Passport) that defines, evaluates, and communicates service characteristics, fostering transparency and trust;
  • A demonstration of the model’s applicability across a wide range of aviation services, from routine maintenance to specialized training and consulting;
  • A numerical case study of maintenance, repair, and overhaul (MRO) services that illustrates the model’s ability to generate fair, transparent, and adaptive token values.
By addressing the challenges of service standardization, accessibility, and quality assurance, this research lays the groundwork for transforming aviation technical support services.
The structure of this paper is as follows: Section 2 introduces the ATSaaS concept, outlines the methodology for developing the TBDCM, and details the process of data collection, model design, and validation. Section 3 presents the TBDCM framework, including the mathematical formulation, normalization process, and Service Passport. A numerical case study illustrates the model’s practical application. Section 4 compares the TBDCM with traditional pricing models, highlighting its advantages, challenges, and limitations. Future research directions are proposed to enhance the model’s scalability, transparency, and adaptability. Section 5 summarizes the research findings, emphasizing the TBDCM’s potential to optimize aviation technical support services and its implications for future developments.

2. Materials and Methods

2.1. Aviation Technical Support as a Service

ATSaaS has emerged as an innovative solution designed specifically to address the unique challenges faced by small airlines in maintaining their aircraft and managing technical operations. This concept represents a significant shift from traditional maintenance models by combining advanced digitalization methods, collaborative frameworks, and customizable service offerings.
Small airlines face several critical challenges in maintaining their aircraft. They often operate with limited financial and human resources compared to larger carriers, which affects their ability to invest in state-of-the-art maintenance facilities and skilled personnel. Additionally, these airlines frequently lack in-house technical expertise and struggle with accessing spare parts and components in a timely and cost-effective manner. They must also navigate complex regulatory requirements while keeping pace with evolving industry technologies and methodologies.
The ATSaaS model addresses these challenges through several key components. At its core is a central platform that serves as a hub for communication, collaboration, and data exchange between service providers, airlines, and other stakeholders. This platform includes features such as user authentication, centralized dashboards, communication tools, document management, service request systems, and data integration capabilities.
A distinguishing feature of ATSaaS is its emphasis on customization and scalability. The service can be tailored to meet the specific needs of each airline, considering factors such as fleet size, operational requirements, and regulatory obligations. This flexibility allows airlines to adjust their technical support services as their needs evolve, enabling cost optimization and improved operational efficiency.
ATSaaS operates within a collaborative ecosystem that connects various stakeholders. This includes the primary service provider, small airlines, aircraft manufacturers, spare parts suppliers, MRO providers, regulatory bodies, training centers, research institutions, and technology providers. Each stakeholder contributes specific expertise and resources, fostering a comprehensive support network that benefits all participants.
A key advantage of ATSaaS is its cost structure. The pay-as-you-go or subscription-based model eliminates the need for significant upfront investments, making high-quality technical support services more accessible to small airlines. This approach allows airlines to optimize their maintenance costs while ensuring access to necessary expertise and resources.

2.2. Materials and Methods of the Study

The aims of this study are the development and validation of a structured service-token model for an ATSaaS platform. A mixed-methods approach was employed, integrating domain-specific regulatory data, expert evaluations, pilot service feedback, and numerical modeling techniques. The methodology consists of the following three core phases: model development, pilot implementation, and validation.

2.2.1. Data Sources

Standards from the EASA (Part-M, Part-145, and Part-66) [12,13,14], FAA (14 CFR Part 43 and Part 145) [16,19], and IATA (IOSA standards and Airline Operational Cost Management Guidelines) [20] served to define baseline compliance requirements, influencing quality and cost parameters within the model.
A total of 14 semi-structured interviews were conducted with stakeholders across the following three groups: (1) small airline operators (n = 5) with a fleet size < 15 aircraft; (2) MRO and aviation service providers (n = 6); and (3) aviation quality/safety auditors (n = 3). Interviews focused on quantifying the time requirements, cost structures, and quality metrics associated with services such as routine maintenance, component repair, inspections, and training. Additionally, 62 structured survey responses were obtained from platform users during a controlled pilot.
Operational data were collected during 12 real-life service transactions (4 routine maintenance, 3 inspections, 3 document processing, and 2 training sessions) carried out within the ATSaaS test environment. These services were priced using preliminary token assignments.

2.2.2. Model Development

The model’s structure was formulated to quantify the service value via normalized metrics of cost (e.g., direct, indirect, and variable), time (e.g., preparation, execution, and support), and quality (e.g., satisfaction, reliability, effectiveness, and complexity). Initial weighting coefficients for each service type were determined using the analytic hierarchy process based on expert pairwise comparisons.

2.2.3. Pilot Testing and Feedback Loop

Pilot testing involved the deployment of the token-based pricing mechanism on the ATSaaS platform for the 12 selected services. Upon service completion, customers completed a feedback questionnaire rating cost fairness, delivery time, and quality (on a 5-point Likert scale). These data were utilized to compute realized quality and inform recalibration of token values.

2.2.4. Validation and Calibration

Token values generated by the model were compared to those derived from conventional time-and-materials pricing and expert valuation. Discrepancies exceeding 15% triggered weight adjustments using a feedback-based correction algorithm.
The validation of the token-based digital currency model follows a rigorous multi-dimensional approach to ensure its practical applicability, accuracy, and robustness. The validation framework incorporates both real-world operational data and controlled simulations, enabling a comprehensive assessment of the model’s performance across diverse service scenarios.
For empirical validation, data from three primary sources were collected, as follows:
  • Operational service transactions—12 real service deliveries performed within the ATSaaS test environment (4 routine maintenance, 3 inspections, 3 document processing, and 2 training sessions) with measurements of the actual cost, time, and quality metrics;
  • User feedback—62 structured survey responses from platform users participating in the pilot, evaluating service satisfaction, perceived value, and preference comparisons between token-based and traditional pricing;
  • Expert evaluations—structured assessments from 14 domain experts regarding the fairness and accuracy of token values assigned to specific services.
For statistical validation, the following were employed:
  • Comparative analysis for which token values were benchmarked against conventional time-and-materials pricing, with discrepancies quantified and analyzed;
  • Convergence testing by statistical analysis of the token value stabilization across 50 iterative feedback cycles;
  • Sensitivity analysis by systematic variation in the input parameters to assess the model’s robustness and identify boundary conditions.
The validation metrics included the following:
  • Mean absolute percentage error (MAPE) between the token-derived service valuation and expert consensus valuation;
  • Coefficient of variation (CV) across iterations to quantify convergence stability;
  • Correlation coefficient between user satisfaction ratings and quality parameter weights;
  • Consistency ratio in the AHP process to ensure logical coherence in expert judgments.
These validation approaches collectively address the model’s mathematical correctness, operational feasibility, and alignment with stakeholder expectations—three critical dimensions for ensuring the practical value of the proposed framework.
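As an illustration of how the validation metrics above can be computed, the following minimal Python sketch implements the MAPE, the coefficient of variation, and the satisfaction–weight correlation; the input arrays are hypothetical placeholders rather than data from the study (the AHP consistency ratio is sketched together with the AHP procedure in Section 3.1.2).

```python
# Minimal sketch of the validation metrics listed above.
# The inputs are hypothetical placeholders, not data from the study.
import numpy as np

def mape(token_values, expert_values):
    """Mean absolute percentage error between model and expert valuations (%)."""
    token_values = np.asarray(token_values, dtype=float)
    expert_values = np.asarray(expert_values, dtype=float)
    return float(np.mean(np.abs((token_values - expert_values) / expert_values)) * 100)

def coefficient_of_variation(token_series):
    """CV (%) of token values across iterations, used as a convergence criterion."""
    token_series = np.asarray(token_series, dtype=float)
    return float(np.std(token_series) / np.mean(token_series) * 100)

def satisfaction_weight_correlation(satisfaction_ratings, quality_weights):
    """Pearson correlation between user satisfaction ratings and quality weights."""
    return float(np.corrcoef(satisfaction_ratings, quality_weights)[0, 1])

# Hypothetical example values
print(mape([620, 410, 300], [600, 450, 310]))                     # ~5.1%
print(coefficient_of_variation([602, 598, 601, 599, 600]))        # well below 3%
print(satisfaction_weight_correlation([4.1, 4.5, 3.8, 4.9],
                                      [0.38, 0.42, 0.35, 0.45]))  # close to 1
```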

2.3. Framework for the Service-Token Model of an ATSaaS Platform

The service-token model provides a structured approach to standardizing the valuation and payment of services on an ATSaaS platform. It uses tokens as a unified currency, enabling transparency, scalability, and efficiency in service transactions. This model addresses the diverse nature of aviation technical support services by providing a consistent framework for evaluating and accessing offerings, ensuring fairness and predictability for customers while fostering operational efficiency for providers.
At its core, the TBDCM standardizes service valuation through the following three key parameters: cost, time, and quality. Cost reflects the monetary resources required to deliver the service, including direct costs (e.g., labor and materials), indirect costs (e.g., overheads), and variable costs that scale with demand. Time captures the duration of the preparation, execution, and post-service support, while quality encompasses both objective and subjective metrics, such as customer satisfaction, reliability, effectiveness, and complexity.
The taxonomy of the main components of these parameters is shown in Figure 1.

3. Results

3.1. Baseline Token Valuation Method

3.1.1. Service-Token Model for a Baseline Token Valuation Method

The service-token model is based on a weighted formula that integrates the following three main parameters: cost, time, and quality, each contributing to the token value assigned to a service. Below is a detailed description of the model.
The main variables and parameters of the model are as follows:
  • $S_i$—A specific service, $i$ (e.g., consulting, training, and spare parts);
  • $C_i$—Cost of providing service $i$ (e.g., operational cost and labor cost);
  • $T_i$—Time or effort required for service $i$;
  • $Q_i$—Quality factor for service $i$ (e.g., reliability, customer satisfaction rating, and complexity);
  • $D_i$—Demand for service $i$ (number of requests or users per time);
  • $V_i$—Base token value assigned to service $i$;
  • $V$—Unit monetary value of one token (e.g., USD 1 per token);
  • $W_i$—Weight for the relative importance of service attributes for $i$;
  • $K$—Platform operational multiplier for overhead or profit margin;
  • $R$—Revenue generated from point consumption;
  • $A_i$—Adjusted tokens, which is the final point value for service $i$, dynamically calculated based on the demand and availability.
Each service is assigned a base token value based on its attributes, as follows:
$$V_i = w_c C_i + w_t T_i + w_q Q_i$$
where $w_c + w_t + w_q = 1$.
Tokens are dynamically adjusted based on the demand and availability, as follows:
$$A_i = V_i \cdot \left( 1 + \frac{D_i - \bar{D}}{\bar{D}} \right)$$
where $\bar{D}$ is the average demand for all services.
If $D_i > \bar{D}$, tokens increase to reflect the higher demand; if $D_i < \bar{D}$, tokens decrease to incentivize usage.
Revenue from a service is calculated as follows:
$$R_i = A_i \cdot U_i \cdot V$$
where $U_i$ is the number of units of service $i$ consumed.
Platform profitability is calculated as follows:
$$\text{Profit} = \sum_{i=1}^{n} R_i - \sum_{i=1}^{n} C_i$$
The platform can optimize token values to maximize profit or customer satisfaction, as follows:
$$\max \sum_{i=1}^{n} \left( R_i - C_i \right)$$
which is subject to the following constraints:
  • $A_i \geq V_i$—Adjusted points cannot drop below the base points;
  • $A_i \leq \text{Max Tokens Allowed for } i$—Prevention of the overpricing of services.
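A compact Python sketch of these baseline economics is given below; the two example services, their attribute values, the weights, and the token unit value are hypothetical illustrations rather than figures from the case study.

```python
# Sketch of the baseline token economics defined above; all numbers are
# hypothetical and serve only to exercise the formulas.

def base_token_value(C, T, Q, w_c, w_t, w_q):
    """V_i = w_c*C_i + w_t*T_i + w_q*Q_i, with w_c + w_t + w_q = 1."""
    assert abs(w_c + w_t + w_q - 1.0) < 1e-9
    return w_c * C + w_t * T + w_q * Q

def demand_adjusted_tokens(V, D, D_avg, max_tokens):
    """A_i = V_i * (1 + (D_i - D_avg)/D_avg), clipped to the stated constraints."""
    A = V * (1 + (D - D_avg) / D_avg)
    return min(max(A, V), max_tokens)   # V_i <= A_i <= max tokens allowed for i

def revenue(A, units, token_unit_value):
    """R_i = A_i * U_i * V, where V is the monetary value of one token."""
    return A * units * token_unit_value

# Two hypothetical services with normalized attributes and demand counts
services = {
    "routine_maintenance": dict(C=0.80, T=0.70, Q=0.90, D=14, U=3),
    "document_processing": dict(C=0.20, T=0.10, Q=0.80, D=6,  U=10),
}
D_avg = sum(s["D"] for s in services.values()) / len(services)

profit = 0.0
for name, s in services.items():
    V_i = base_token_value(s["C"], s["T"], s["Q"], w_c=0.4, w_t=0.3, w_q=0.3)
    A_i = demand_adjusted_tokens(V_i, s["D"], D_avg, max_tokens=10 * V_i)
    profit += revenue(A_i, s["U"], token_unit_value=1.0) - s["C"]

print(f"Platform profit (token units): {profit:.2f}")
```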
The model evaluates each service, $S_i$, based on the parameters of cost, time, and quality.
To define cost, time, and quality for each service, we break each parameter into its components using the following mathematical formulation.
1. Cost, which refers to the monetary resources required for service delivery, as follows:
$$C = C_d + C_{in} + C_v$$
where $C_d$—direct costs (e.g., labor, materials, and equipment); $C_{in}$—indirect costs (e.g., overheads and administrative expenses); and $C_v$—variable costs (e.g., costs that scale with usage).
2. Time, which is the total time required for delivering the service, as follows:
$$T = T_p + T_e + T_s$$
where $T_p$—preparation time (e.g., scheduling and setup); $T_e$—execution time (e.g., performing the service); and $T_s$—support time (e.g., follow-up activities).
3. Quality, which is a composite score reflecting customer satisfaction and service performance, as follows:
$$Q = w_{qs} Q_s + w_{qr} Q_r + w_{qe} Q_e + w_{qc} Q_c$$
where
  • $Q_s$ is the customer satisfaction score (e.g., survey ratings and Likert scale);
  • $Q_r$ is the reliability (e.g., consistency of outcomes and timeliness);
  • $Q_e$ is the effectiveness (e.g., achievement of desired outcomes);
  • $Q_c$ is the complexity (e.g., difficulty or resource intensity);
  • $w_{qs}$, $w_{qr}$, $w_{qe}$, and $w_{qc}$ are the weights reflecting the importance of each quality component, as follows:
$$Q_s = \frac{\text{Sum of Ratings}}{\text{Number of Ratings}}, \quad Q_r = 1 - \frac{\text{Number of Failures}}{\text{Total Service Instances}}, \quad Q_e = \frac{\text{Achieved Outcomes}}{\text{Expected Outcomes}}$$
and $Q_c$ is the score assigned based on qualitative factors such as customization or task difficulty, normalized to a 0–1 scale.
Normalization across services ensures that the evaluation and comparison of diverse services on the ATSaaS platform are fair, consistent, and scalable. Given the varying nature of aviation technical support services—ranging from routine maintenance to specialized training—normalization involves standardizing the cost, time, and quality metrics to a common scale. This process enables the calculation of token values that are comparable across different services, regardless of their complexity or operational context. For instance, time durations may be normalized as a proportion of the maximum execution time for any service, while cost components can be expressed as percentages of a defined baseline (e.g., average service cost). Similarly, quality parameters such as satisfaction and reliability are normalized to a 0–1 scale, ensuring uniform representation in the weighted formula. By implementing normalization, the platform maintains transparency, avoids bias in service valuation, and supports dynamic adjustments as new services are introduced or existing ones are refined. This approach not only facilitates fairness but also aligns with the scalability requirements of a token-based system.
To ensure the comparability of cost, time, and quality across different services, the following were used:
$$C_i^N = \frac{C_i}{\max(C)}, \quad T_i^N = \frac{T_i}{\max(T)}, \quad Q_i^N = \frac{Q_i}{\max(Q)}$$
where $C_i^N$ is the normalized cost, $T_i^N$ is the normalized time, and $Q_i^N$ is the normalized quality.
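A minimal sketch of this max-based normalization is shown below; the raw cost, time, and quality values are hypothetical and do not reproduce Tables 1 and 2.

```python
# Minimal sketch of the normalization above; raw values are hypothetical.

def normalize(services):
    """Divide each service's cost, time, and quality by the maxima across services."""
    max_c = max(s["C"] for s in services.values())
    max_t = max(s["T"] for s in services.values())
    max_q = max(s["Q"] for s in services.values())
    return {
        name: {"C": s["C"] / max_c, "T": s["T"] / max_t, "Q": s["Q"] / max_q}
        for name, s in services.items()
    }

raw = {
    "routine_maintenance": {"C": 1600, "T": 30, "Q": 0.92},  # cost in USD, time in hours
    "inspection":          {"C": 400,  "T": 6,  "Q": 0.95},
}
for name, values in normalize(raw).items():
    print(name, {k: round(v, 2) for k, v in values.items()})
# inspection -> C: 0.25, T: 0.2, Q: 1.0; all values dimensionless on a 0-1 scale
```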
The total token value, $TV$, for a service is calculated as a weighted sum of the three parameters, in accordance with Expression (1), as follows:
$$TV = w_c C + w_t T + w_q Q$$
where $TV$ is the total token value for the service, and $w_c$, $w_t$, and $w_q$ are the weights assigned to cost, time, and quality, respectively.
The model allows for adjustments based on external or dynamic factors:
  • Demand adjustment: to account for periods of high or low demand, a demand adjustment factor, $D$, is applied, as follows:
$$TV' = TV \cdot (1 + D)$$
where $D > 0$ corresponds to high demand (increasing the token value) and $D < 0$ to low demand (reducing the token value).
  • Feedback adjustment, $F$: the realized quality, $Q_r$, derived from customer feedback can adjust the token value, as follows:
$$TV' = TV \cdot (1 + F)$$
where $F > 0$ is a positive feedback adjustment and $F < 0$ is a negative feedback adjustment.
The adjustment factor, $F$, is derived as follows:
$$F = w_q \cdot \Delta Q$$
where $w_q$ is the weight assigned to quality in the token formula and $\Delta Q$ is the deviation of the realized quality from the initially assumed quality. This factor determines the magnitude of the token value adjustment.
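The sketch below chains the weighted token value with the demand and feedback adjustments described above; the normalized parameters and the adjustment factors are hypothetical, and the weights reuse the routine maintenance values derived in Section 3.1.3.

```python
# Sketch of the total token value with demand and feedback adjustments.
# Normalized parameters and adjustment factors are hypothetical; the weights
# reuse the routine maintenance AHP results (cost 0.396, time 0.194, quality 0.410).

def token_value(C_n, T_n, Q_n, w_c, w_t, w_q):
    """TV = w_c*C + w_t*T + w_q*Q on normalized (0-1) parameters."""
    return w_c * C_n + w_t * T_n + w_q * Q_n

def adjust_for_demand(tv, D):
    """TV' = TV * (1 + D); D > 0 for high demand, D < 0 for low demand."""
    return tv * (1 + D)

def adjust_for_feedback(tv, w_q, delta_Q):
    """F = w_q * delta_Q, then TV' = TV * (1 + F)."""
    return tv * (1 + w_q * delta_Q)

tv = token_value(0.75, 0.60, 0.90, w_c=0.396, w_t=0.194, w_q=0.410)
tv = adjust_for_demand(tv, D=0.10)                     # 10% demand surge
tv = adjust_for_feedback(tv, w_q=0.410, delta_Q=0.05)  # realized quality 0.05 above assumption
print(f"Adjusted token value: {tv:.3f}")
```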
The model incorporates feedback loops to adjust token values dynamically, as follows:
  • Periodic reassessment of weights and parameters based on operational data;
  • Customer satisfaction and realized quality inform recalibrations;
  • Simulations assess the model’s robustness when introducing new services or handling increased volumes.

3.1.2. Techniques for Calculating Weight Coefficients in Multi-Criteria Decision Making

Determining weight coefficients is a critical step in multi-criteria decision-making processes, as it reflects the relative importance of each criterion or alternative. Common methods for calculating weights include the following:
  • Direct weight assignment whereby experts directly assign weights to criteria or alternatives based on their judgment [23];
  • Ranking and rating methods whereby criteria are ranked or rated, and the weights are derived based on these ranks or ratings [24];
  • Pairwise comparison methods, including the analytic hierarchy process (AHP), which uses pairwise comparisons to calculate weights [25];
  • Entropy-based methods whereby weights are calculated based on the variability or entropy of data [26];
  • Regression or optimization models, which are used when data-driven approaches are required [27].
Among these, the AHP method is particularly suited to the situations discussed here, which involve multiple criteria and subjective judgments, because it does the following:
  • Accommodates both qualitative and quantitative criteria;
  • Structures the decision problem hierarchically, facilitating clarity in evaluation;
  • Incorporates subjective expert opinions into a consistent mathematical framework;
  • Handles both individual and group decision-making scenarios effectively.
By applying the AHP method, the derived weights are consistent and transparent, reducing the risk of biased or arbitrary decisions.
The sequence for applying the AHP method is as follows (a minimal computational sketch of both cases is provided after the list):
  • Case of one expert, as follows:
    • Define the criteria or alternatives to be evaluated;
    • Construct a pairwise comparison matrix where each element represents the relative importance of one criterion compared to another;
    • Normalize the matrix by dividing each element by the sum of its column;
    • Calculate the priority vector (i.e., weights) by averaging the normalized values across each row;
    • Check the consistency ratio to ensure the logical consistency of judgments.
  • Case of multiple experts, as follows:
    • Each expert independently completes a pairwise comparison matrix;
    • Aggregate the individual matrices into a single group matrix, typically using the geometric mean;
    • Normalize the aggregated matrix and compute the priority vector as above;
    • Check the consistency ratio for the aggregated matrix.
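The following sketch implements both cases of the AHP sequence for three criteria (cost, time, and quality); the expert pairwise judgments are hypothetical and do not reproduce those shown in Figures 2–6.

```python
# Sketch of the AHP sequence above for three criteria (cost, time, quality).
# The expert pairwise judgments are hypothetical examples.
import numpy as np

def aggregate_experts(matrices):
    """Element-wise geometric mean of the experts' pairwise comparison matrices."""
    stack = np.asarray(matrices, dtype=float)
    return np.exp(np.log(stack).mean(axis=0))

def ahp_weights(matrix):
    """Normalize each column to sum to 1, then average across rows (priority vector)."""
    matrix = np.asarray(matrix, dtype=float)
    normalized = matrix / matrix.sum(axis=0)
    return normalized.mean(axis=1)

def consistency_ratio(matrix):
    """CR = CI / RI; judgments are acceptably consistent when CR < 0.1."""
    matrix = np.asarray(matrix, dtype=float)
    n = matrix.shape[0]
    lambda_max = np.linalg.eigvals(matrix).real.max()
    ci = (lambda_max - n) / (n - 1)
    return ci / 0.58   # random index RI = 0.58 for a 3x3 matrix (Saaty)

# Hypothetical judgments of three experts; criteria order: cost, time, quality
experts = [
    [[1, 3, 1],   [1/3, 1, 1/2], [1, 2, 1]],
    [[1, 2, 1/2], [1/2, 1, 1/3], [2, 3, 1]],
    [[1, 4, 1],   [1/4, 1, 1/2], [1, 2, 1]],
]
group = aggregate_experts(experts)
print("weights (cost, time, quality):", np.round(ahp_weights(group), 3))
print("consistency ratio:", round(consistency_ratio(group), 3))
```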

3.1.3. Example of Determining Weighting Factors for Service Using AHP

The methodology of the AHP for the TBDCM includes the following main steps:
  • Define the criteria: the criteria (cost, time, and quality) are evaluated independently for each service;
  • Pairwise comparisons: each expert performs pairwise comparisons for the three criteria for each service;
  • Aggregate judgments: the geometric mean is used to combine the judgments of the three experts for each service. For $n$ experts with pairwise comparison values $a_1, a_2, \ldots, a_n$ for a specific comparison, where $a_i$ is the pairwise comparison value provided by the $i$-th expert:
$$\text{Geometric Mean} = \left( \prod_{i=1}^{n} a_i \right)^{1/n}$$
  • Normalize: the aggregated pairwise comparison matrix is normalized;
  • Calculate the weights: the priority vector (i.e., weights) is calculated by averaging the rows of the normalized matrix;
  • Repeat for each service: the process is repeated separately for routine maintenance (RM) and document processing (DP).
The following figures illustrate the step-by-step application of the AHP method for determining weighting factors for the criteria cost, time, and quality across the following two services: routine maintenance (RM) and document processing (DP).
Figure 2 illustrates the initial individual judgments provided by three experts for each criterion pair (cost vs. time, cost vs. quality, and time vs. quality) for the two services.
The values in this figure represent dimensionless comparison ratios on the Saaty scale (1–9), where 1 indicates equal importance among criteria, 3 indicates moderate importance of one criterion over another, 5 indicates strong importance, 7 indicates very strong importance, 9 indicates extreme importance, and 2, 4, 6, and 8 are intermediate values [25].
Figure 3 visualizes the aggregated pairwise comparisons provided by the experts for each criterion comparison using the geometric mean. These values are also dimensionless ratios derived from the geometric mean calculation of the expert judgments, following the same Saaty scale as in Figure 2. The comparisons include cost vs. time, cost vs. quality, and time vs. quality for both services. It highlights the relative importance assigned by experts in the first step of the AHP process.
Figure 4 displays the aggregated pairwise matrices for the two services after combining expert judgments. The matrix includes all pairwise comparisons among the criteria (cost, time, and quality) and serves as the basis for normalization in the subsequent step. The values shown are dimensionless comparison ratios, with diagonal elements always equal to 1 (representing self-comparison). Off-diagonal elements follow the Saaty scale interpretation.
Figure 5 shows the normalized pairwise comparison values for each matrix element. The normalization process ensures that the sum of each column equals 1, enabling consistent calculation of priority weights. The figure provides insight into the relative contributions of each criterion.
This summary in Figure 6 presents the final weights for cost, time, and quality for both services. It consolidates the results from all steps, providing a clear comparison of the relative importance of each criterion.
For routine maintenance, quality (0.410) has the highest weight, slightly exceeding cost (0.396), reflecting its importance in ensuring long-term effectiveness.
For document processing, cost (0.557) dominates, followed by quality (0.252), indicating a focus on cost-effectiveness with a secondary emphasis on accuracy.

3.1.4. Numerical Case Study

To illustrate the application and versatility of the service-token model, this section presents an expanded example calculation that incorporates multiple MRO-oriented services. The MRO services, being core to aviation technical support, are characterized by their complexity, resource intensity, and critical role in ensuring airworthiness. By applying the TBDCM, we demonstrate how diverse services—such as routine maintenance, component repair, and comprehensive inspections—are evaluated using the standardized parameters of cost, time, and quality.
This expanded example not only highlights the detailed breakdown of each parameter but also showcases how the model accommodates service-specific factors like labor costs, execution durations, and quality metrics such as reliability and complexity. The inclusion of adjustments for dynamic factors, such as demand fluctuations and customer feedback, further emphasizes the adaptability and practicality of the model. By analyzing MRO-oriented services, this example provides a robust demonstration of the model’s capability to handle the intricacies of high-stakes aviation operations while ensuring fairness and transparency in service valuation.
The next example demonstrates the application of the service-token model for evaluating token values across the main MRO-oriented services. Each service is assessed based on the standardized parameters of cost, time, and quality, which are weighted to calculate the final token values.
The initial data for the selected services are presented in Table 1. The values are categorized under the three main parameters, with further breakdowns for specific subcomponents.
The missing data in Table 1 reflect the inherent differences in the nature and operational requirements of the services being evaluated. Not all services involve every type of cost or operational element, and this is accounted for in the service-token model to ensure accuracy and relevance in valuation. For instance, certain services, such as consulting and document processing, may not involve physical materials or complex overheads, which are more relevant for resource-intensive MRO services like routine maintenance or component repair. For example, consulting relies heavily on labor costs (e.g., expert time) and minimal overheads, with no material costs, while document processing primarily involves automation tools with minimal human intervention.
The inclusion or omission of specific cost, time, or quality metrics depends on the operational structure of the service. For instance, metrics like materials are irrelevant for services like consulting or training, which do not involve tangible components, while overheads may not significantly impact low-resource services like document processing. These omissions reflect the simplicity or automation of some services compared to the complexity of others.
Even with missing data, the service-token model ensures fair valuation through its normalization and weighting mechanisms. Normalization allows for the comparison of services only for relevant components, while the weighting system accounts for the absence of certain metrics by redistributing focus to the elements that are most significant for a particular service. For example, in consulting, the absence of material costs means labor and overheads dominate the cost component, while for document processing, the cost of software tools and automation efficiency drive the valuation.
To enable a fair comparison, we normalize all parameters to a 0–1 scale using the maximum values for each parameter across all services.
The maximum values across all services are as follows:
  • $\max(C)$ = USD 1600;
  • $\max(T)$ = 30 h;
  • $\max(Q)$ = 1.0.
The normalized values of the parameters are presented in Table 2.
The allocation of weighting coefficients for cost, time, and quality in the service-token model depends on the specific characteristics and priorities of each service type. Different services place varying levels of importance on these parameters based on their operational demands, safety implications, and resource requirements. Table 3 provides a proposed weighting scheme for the discussed service types on the basis of expert conclusions.
Using Formula (1) with the weights in Table 3, we obtain the results shown in Table 4.
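To make the calculation concrete, the sketch below re-applies Formula (1) to two of the services; the weights reuse the AHP results from Section 3.1.3, while the normalized cost, time, and quality values are placeholders standing in for Table 2, so the resulting numbers are purely illustrative.

```python
# Illustrative application of Formula (1). Weights follow the AHP results of
# Section 3.1.3; the normalized parameter values are hypothetical stand-ins
# for Table 2, so the printed token values are only indicative.

WEIGHTS = {  # (w_c, w_t, w_q)
    "routine_maintenance": (0.396, 0.194, 0.410),
    "document_processing": (0.557, 0.191, 0.252),
}

NORMALIZED = {  # hypothetical (cost, time, quality) on a 0-1 scale
    "routine_maintenance": (1.00, 1.00, 0.90),
    "document_processing": (0.10, 0.15, 0.75),
}

for name, (w_c, w_t, w_q) in WEIGHTS.items():
    c, t, q = NORMALIZED[name]
    tv = w_c * c + w_t * t + w_q * q
    print(f"{name}: TV = {tv:.3f}")
# Routine maintenance scores far higher than document processing, in line
# with the qualitative ranking discussed below.
```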
A general analysis of the results obtained shows the following:
  • MRO-oriented services
    • Routine maintenance scores the highest among MRO services because of its significant cost and time requirements, alongside high reliability;
    • Component repair is moderately valued, balancing resource use and excellent quality;
    • Inspection services have the lowest point value among MRO services because of their low time and cost requirements, despite strong quality metrics;
  • Non-MRO services
    • Consulting has a high point value, reflecting its resource-intensive and high-complexity nature;
    • Training provides excellent quality at a moderate cost, making it highly efficient;
    • Document processing, while the most cost-effective, scores lower because of its simpler and less resource-intensive nature. It is important to note, however, that this assessment reflects only the characteristics of the document processing services evaluated during the pilot phase, which involved routine, template-based operations with minimal regulatory variation. In real-world scenarios, the complexity of document processing can vary significantly—particularly in cases involving newer aircraft, first-time maintenance procedures, or jurisdiction-specific compliance documentation. For such services, the quality and complexity components may carry greater weight, and the token value would adjust accordingly. The model is designed to accommodate such variability through its feedback-driven recalibration mechanism and flexible weighting system based on service-specific parameters.
This integrated example highlights the flexibility of the service-token model in evaluating a wide range of services, from high-complexity MRO operations to resource-efficient administrative tasks. The model’s ability to normalize data and assign fair point values ensures transparency, scalability, and adaptability across the ATSaaS platform.

3.1.5. Service Passport in the Service-Token Model

The Service Passport is a central element of the TBDCM, providing a structured digital record that defines, standardizes, and communicates the value of each service offered on a platform. It serves as a comprehensive repository of information, including operational characteristics, resource requirements, performance expectations, and evaluation metrics, ensuring transparency and consistency across diverse services. Within the ATSaaS platform, the Service Passport is essential for documenting and justifying the assigned token values, fostering clarity and fairness for stakeholders while supporting scalability and adaptability.
The Service Passport is designed with a clear structure to encompass all critical components of the service-token model. The structure of the Service Passport is shown in Table 5.
The Service Passport ensures clarity in token assignment by providing a transparent rationale for the calculated value, helping stakeholders understand how cost, time, and quality contribute to the service’s evaluation. It serves as a basis for comparing heterogeneous services, enabling fair evaluation despite differences in scope, complexity, and resource requirements. Furthermore, it supports continuous improvement by integrating performance data and customer feedback, ensuring token values remain reflective of current realities. As a scalable tool, the Service Passport simplifies the integration of new services into the platform through its standardized template, maintaining uniformity across offerings.
Despite its advantages, implementing a Service Passport presents challenges. Collecting accurate and comprehensive data for cost, time, and quality metrics can be resource-intensive, and the dynamic nature of the services requires continuous updates to ensure relevance. Achieving standardization across multiple service providers may also demand stringent guidelines and oversight. However, these challenges are outweighed by the benefits of transparency, fairness, and adaptability.
The Service Passport is poised to evolve with advancements in technology. Automation through AI and machine learning can streamline data collection and real-time updates. Integration with blockchain technology could enhance transparency and immutability, while advanced analytics could predict trends, optimize resource allocation, and refine service valuation. These developments will further strengthen the Service Passport’s role as a cornerstone of the TBDCM, driving innovation and efficiency within the aviation technical support industry. By documenting and justifying service values comprehensively, the Service Passport ensures clarity, fairness, and scalability, making it an indispensable tool for platforms like ATSaaS.

3.1.6. Initial Definition of the Token Value for a New Service

Once the Service Passport has been structured to define the essential characteristics of the service—such as its scope, regulatory requirements, and performance expectations—the next step is to assign an initial token value. This process uses the defined parameters as inputs for estimating a fair and adaptive starting point for service valuation within the token-based framework.
Defining the initial token value for a new service within the service-token model is a critical step that requires precision, transparency, and alignment with both platform objectives and stakeholder expectations. The token value reflects the service’s intrinsic worth by integrating its cost, time, and quality components into a single, normalized metric.
Figure 7 illustrates the workflow for defining the initial token value of a new service. It outlines the sequential steps—from parameter decomposition and expert evaluation to simulation and pilot feedback—that guide the structured initialization process. This visual representation helps clarify the model’s practical implementation and highlights its reliance on both expert judgment and adaptive recalibration.
The following steps are recommended for calculating the initial token value.
  • Step 1. Parameter Decomposition. Break down the service into the three core parameters—cost, time, and quality. Each of these is further detailed into subcomponents as defined in Section 3.1.1 (e.g., direct labor, preparation time, customer satisfaction proxies).
  • Step 2. Analogous estimation. Identify a comparable service from the platform or external reference (e.g., from MRO or consulting databases) that shares operational or structural similarities. Use its normalized values as a starting point for assigning baseline metrics to the new service.
  • Step 3. Expert evaluation: Employ structured techniques such as the Delphi method or AHP with a panel of domain experts to estimate weights and relative parameter values. This is particularly valuable for assessing quality dimensions like complexity or expected reliability, which are otherwise difficult to quantify at launch.
  • Step 4. Use of parametric cost models. Apply parametric estimation formulas when available (e.g., cost per hour for inspection personnel, cost per training module, or document complexity coefficients). These models are well-established in aviation project management and logistics literature and help provide a grounded initial cost estimate.
  • Step 5. Simulation-based sensitivity testing. Before deploying the service on the platform, simulate token calculations under various parameter combinations to evaluate sensitivity and detect potential valuation anomalies. This also allows testing how demand or user feedback might influence recalibration in early iterations.
  • Step 6. Provisional token assignment and monitoring. Assign an initial token value based on the above estimates and launch the service under pilot conditions. Collect feedback from users during the first service cycles to measure actual performance (e.g., delivery time, perceived quality, cost deviation). Use these data to perform the first round of token value recalibration.
This hybrid approach ensures that the initial token value reflects domain-specific knowledge, analogous service benchmarks, and established cost-estimation principles. It also aligns with best practices found in software sizing (e.g., function point analysis), manufacturing (e.g., parametric cost modeling), and service pricing strategies discussed in prior works. By grounding the initial valuation in systematic estimation and expert input, the model avoids arbitrary assumptions and supports early-stage accuracy until sufficient empirical data become available.
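A minimal sketch of steps 2–4 and 6 is given below: an analogous service provides the baseline metrics, expert-estimated scaling factors adjust them, and a provisional token value is assigned for the pilot launch. The reference service, scaling factors, and weights are hypothetical.

```python
# Hedged sketch of the initialization steps above (analogous estimation plus
# expert scaling). All reference values, scaling factors, and weights are hypothetical.

def initial_token_value(analog, scale, weights):
    """Scale an analogous service's normalized metrics and apply the weighted formula."""
    w_c, w_t, w_q = weights
    C = min(analog["C"] * scale["cost"], 1.0)      # keep parameters on the 0-1 scale
    T = min(analog["T"] * scale["time"], 1.0)
    Q = min(analog["Q"] * scale["quality"], 1.0)
    return w_c * C + w_t * T + w_q * Q

# Hypothetical new borescope-inspection service benchmarked against an
# existing inspection service already on the platform
analog_service = {"C": 0.25, "T": 0.20, "Q": 0.95}
expert_scale = {"cost": 1.3, "time": 1.5, "quality": 0.95}   # Delphi/AHP panel estimates
provisional_tv = initial_token_value(analog_service, expert_scale, weights=(0.3, 0.3, 0.4))
print(f"Provisional token value for the pilot launch: {provisional_tv:.3f}")
```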

3.1.7. Correction of Token Value Based on User Feedback About Quality

The correction of token values based on user feedback is a crucial process in the service-token model, ensuring that the assigned token value accurately reflects a service’s real-world performance. The quality parameter, which includes metrics such as customer satisfaction, reliability, effectiveness, and complexity, is inherently dynamic and subject to change as services are delivered and users provide feedback. Incorporating this feedback allows the platform to adapt token values to evolving service performance and user expectations. Figure 8 illustrates the step-by-step workflow for correcting token values based on user feedback.
The process begins with the collection of user feedback through various mechanisms, such as surveys, rating systems, and qualitative reviews. Users are prompted to evaluate the service across specific quality dimensions, including satisfaction with the outcome, timeliness, and overall reliability. These data form the foundation for assessing realized service quality.
Once collected, the feedback is aggregated to create a composite quality score for the service, which represents the realized quality based on user experiences. This step minimizes the influence of outliers and biases by averaging data across multiple users. Aggregation ensures that the quality assessment is robust and reflective of the broader user base.
The realized quality score is compared to the initially assumed quality value used in the original token calculation. The difference quantifies whether the service has exceeded expectations or underperformed. This comparison highlights the need for any token value adjustments.
Before making adjustments, the platform investigates the causes of any quality discrepancies, for example, as follows:
  • A negative deviation may indicate service delivery issues, such as delays or inconsistent outcomes;
  • A positive deviation may reflect unanticipated excellence in service execution. This step ensures that adjustments are informed by the root causes of quality deviations rather than surface-level metrics.
If necessary, at step 5 the quality parameter is adjusted to reflect the updated score. The adjustment factor is derived in accordance with Formula (4), i.e., $F = w_q \cdot \Delta Q$. Using the adjusted quality parameter, the new token value is recalculated at step 6 in accordance with Expression (3), i.e., $TV' = TV \cdot (1 + F)$.
The final step involves validating the recalculated token value with stakeholders, including service providers and users, to ensure alignment with expectations and feedback. The rationale for the adjustment is documented in the Service Passport, and the updated token value is communicated transparently to all stakeholders. This builds trust in the platform’s responsiveness and fairness.
Correcting token values based on user feedback about quality is essential to maintaining the TBDCM’s integrity and adaptability. By systematically incorporating realized quality metrics into token calculations, the platform ensures that token values accurately reflect real-world service performance.
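A minimal sketch of this correction workflow is shown below: survey ratings are aggregated into a realized quality score, compared with the assumed quality, and the token value is recalculated via the adjustment factor. The ratings, weights, and assumed values are hypothetical.

```python
# Sketch of the feedback-driven correction workflow; ratings, weights, and
# the assumed quality are hypothetical.

def realized_quality(likert_ratings, scale_max=5):
    """Aggregate post-service ratings and normalize them to the 0-1 scale."""
    return sum(likert_ratings) / (len(likert_ratings) * scale_max)

def corrected_token_value(tv, w_q, q_assumed, q_realized):
    """delta_Q -> F = w_q * delta_Q -> TV' = TV * (1 + F)."""
    delta_q = q_realized - q_assumed
    adjustment = w_q * delta_q
    return tv * (1 + adjustment), adjustment

ratings = [5, 4, 4, 5, 3, 4]                  # 1-5 Likert survey responses
q_real = realized_quality(ratings)            # ~0.83
new_tv, f = corrected_token_value(tv=0.72, w_q=0.41, q_assumed=0.90, q_realized=q_real)
print(f"Realized quality {q_real:.2f}, adjustment factor {f:+.3f}, new token value {new_tv:.3f}")
```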

3.1.8. Service-Token Model Validation

The validation of the service-token model is essential to ensure its effectiveness, reliability, and applicability in real-world scenarios. This process assesses the model’s ability to assign accurate, fair, and dynamic token values to services on the ATSaaS platform. Validation focuses on the following three critical aspects: the accuracy of token values, responsiveness to feedback, and scalability across diverse services.
To evaluate the model’s accuracy, a series of simulations were conducted using real and hypothetical service data. The simulation included 50 iterative cycles, each representing a single feedback loop where service performance data were used to adjust token values and weighting coefficients. These simulations involved generating cost, time, and quality parameters for various services, such as routine maintenance, component repair, consulting, and document processing. Initial token values were calculated using the model’s weighted formula, integrating these parameters. The calculated values were then compared against industry standards, expert evaluations, and user expectations. This process demonstrated the model’s ability to generate token values that align with the intrinsic characteristics of services, ensuring fairness and consistency across different service types.
To illustrate the practical application of these validation results, consider the routine maintenance service scenario. In this use case, the initial weight distribution (cost: 0.2, time: 0.3, and quality: 0.5) reflected traditional service valuation priorities where quality held a slight premium over operational factors.
The validation results for this routine maintenance scenario demonstrated the model’s effectiveness. Starting with service costs ranging between USD 500 and USD 1500, execution times varying from 5 to 20 h, and quality scores between 0.6 and 1.0, the model successfully adapted its weight distribution while maintaining coherent service valuations.
Figure 9 illustrates the weight evolution during the validation process over 50 iterations, showing the dynamic adjustment of component weights from their initial values to their final target distribution. This transition demonstrates the model’s capability to systematically adjust service evaluation parameters based on operational requirements and performance feedback.
Operational analysis revealed that a modified weight distribution (cost: 0.2; time and quality: approximately 0.4 each) would better align with service efficiency and customer satisfaction objectives. The weight evolution graph reveals several key characteristics of the model’s adaptation process, as follows:
  • Smooth transition from initial to target weights, indicating stable adjustment mechanisms;
  • Maintenance of the unity sum constraint throughout the adaptation process;
  • Consistent convergence behavior across all three parameters;
  • Appropriate response to the specified target distribution while avoiding oscillatory behavior.
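The smooth transition shown in Figure 9 can be reproduced schematically with the sketch below, which moves the weights from their initial values toward the target distribution over 50 iterations while preserving the unity-sum constraint; the exponential adjustment rule and its rate are assumptions made for illustration.

```python
# Schematic reproduction of the weight evolution in Figure 9. The exponential
# adjustment rule and rate are assumptions; only the initial and target
# distributions come from the text.
import numpy as np

initial = np.array([0.2, 0.3, 0.5])   # (cost, time, quality)
target  = np.array([0.2, 0.4, 0.4])
rate = 0.15                           # assumed per-iteration adjustment rate

weights = initial.copy()
history = [weights.copy()]
for _ in range(50):
    weights = weights + rate * (target - weights)   # smooth approach to the target
    weights = weights / weights.sum()               # enforce the unity-sum constraint
    history.append(weights.copy())

print("iteration 10:", np.round(history[10], 3))
print("iteration 50:", np.round(history[50], 3))    # converged near the target
```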
Figure 10 presents the token value trend throughout the validation period, showing a controlled decrease from an initial value of approximately 900 to a final stable value around 600. This systematic reduction in token value reflects the model’s ability to respond to changing weight distributions while maintaining predictable valuation behavior.
The initial token value of approximately 900 reflected the higher initial cost weighting, while the final stabilized value of around 600 better represented the optimized weight distribution that emphasized time and quality factors. The token value evolution exhibits several notable characteristics, as follows:
  • Gradual, controlled transition between initial and final values;
  • Maintained stability with minor variations reflecting real-world fluctuations;
  • Consistent convergence pattern aligned with weight adjustments;
  • Absence of dramatic fluctuations that could destabilize service pricing.
Figure 9 and Figure 10 visualize the evolution of weight coefficients and token values across iterations. Each point on the graph corresponds to a discrete simulation step, and the connecting lines illustrate the continuous adjustment trend over time. The values between points are meaningful, as the model’s recalibration logic is mathematically continuous—allowing interpolated states between discrete updates. This helps to demonstrate convergence behavior and system stability.
The purpose of this visualization is not to reflect real-time empirical data, but to test the model’s robustness and sensitivity under controlled dynamic conditions. The trends confirm that the model reaches stable valuations and weight distributions over time, validating its ability to adapt to user feedback without introducing volatility.
The validation results demonstrate that the service-token model achieves both mathematical consistency and practical applicability. Table 6 summarizes the key quantitative findings from our validation process.
The validation confirmed that the model successfully integrates cost, time, and quality parameters into a unified valuation framework. The mean absolute percentage error (MAPE) between token-calculated values and expert consensus valuations was 12.4%, below our acceptable threshold of 15%. This indicates that the model generates token values that align well with expert judgment.
The convergence patterns illustrated in Figure 9 and Figure 10 show that the weight coefficients and token values stabilize within approximately 27 iterations. Stability was defined as achieving a coefficient of variation (CV) below 3% in the final 10 iterations, indicating that the model reaches equilibrium efficiently without requiring excessive recalibration cycles.
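A minimal sketch of this stability criterion is given below, assuming the token values are stored in a simple list; the sample series is synthetic and serves only to show how the coefficient-of-variation test would be evaluated.

```python
import statistics

def has_converged(token_history, window=10, cv_threshold=0.03):
    """Stability test described in the text: the coefficient of variation of the
    last `window` token values must fall below `cv_threshold` (3%)."""
    if len(token_history) < window:
        return False
    tail = token_history[-window:]
    cv = statistics.pstdev(tail) / statistics.mean(tail)
    return cv < cv_threshold

# Example: a synthetic series that levels off around 600 tokens
history = [900 - 12 * i for i in range(25)] + [600 + (-1) ** i * 3 for i in range(15)]
print(has_converged(history))  # True once the tail varies by well under 3%
```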
Statistical analysis of user feedback revealed a strong positive correlation (r = 0.76) between service quality ratings and corresponding quality parameter weights, confirming that the feedback mechanism effectively captures user perceptions and translates them into appropriate token value adjustments. The consistency ratio in the AHP was 0.067, which is well below the 0.1 threshold, indicating the logical coherence in expert judgments used for initial weight determination.
A sensitivity analysis was conducted by systematically varying the input parameters (cost: ±20%, time: ±30%, and quality: ±25% from baseline values) to assess model robustness. The results show that the token values remained within acceptable bounds under these variations, with the quality parameter changes having the most significant impact. This finding confirms the importance of reliable quality metrics and justifies our emphasis on comprehensive feedback collection.
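The sensitivity analysis can be illustrated with the same assumed weighted-sum formula used above; the baseline parameter values and normalization maxima below are hypothetical and serve only to show how each perturbation propagates to the token value.

```python
def token_value(cost, time, quality, maxima, weights):
    """Weighted sum of max-normalized parameters (same assumed formula as above)."""
    norm = {"cost": cost / maxima["cost"],
            "time": time / maxima["time"],
            "quality": quality / maxima["quality"]}
    return 1000 * sum(weights[k] * norm[k] for k in weights)

weights = {"cost": 0.2, "time": 0.3, "quality": 0.5}
maxima = {"cost": 1500, "time": 20, "quality": 1.0}     # assumed normalization maxima
baseline = {"cost": 1000, "time": 12, "quality": 0.8}   # hypothetical baseline service
spans = {"cost": 0.20, "time": 0.30, "quality": 0.25}   # ±20%, ±30%, ±25% as in the text

base_tokens = token_value(**baseline, maxima=maxima, weights=weights)
for param, span in spans.items():
    for sign in (-1, 1):
        perturbed = dict(baseline)
        perturbed[param] *= 1 + sign * span
        delta = token_value(**perturbed, maxima=maxima, weights=weights) - base_tokens
        print(f"{param} {sign * span:+.0%}: token change {delta:+.1f}")
```

With these illustrative inputs, the quality perturbation produces the largest token shift, consistent with the observation above that quality changes have the most significant impact.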
The model was further tested through scaled simulation involving 100 hypothetical service profiles with randomized but plausible parameter ranges. This simulation confirmed the model’s ability to maintain computational efficiency and logical token valuations even when handling diverse service types simultaneously, with an average processing time of 1.2 s per service profile on standard computing infrastructure.
These validation results collectively demonstrate that the service-token model provides a reliable, efficient, and mathematically sound framework for service valuation within the ATSaaS platform. The model successfully balances accuracy, stability, and responsiveness to feedback, making it suitable for real-world implementation.

3.1.9. Baseline Token Value Determination Algorithm

The baseline token valuation method provides a systematic approach to determining initial token values for services within the ATSaaS platform. The algorithm integrates cost, time, and quality parameters through a structured evaluation process that ensures consistency and fairness in service valuation (Figure 11).
The algorithm accepts as input a service description containing operational parameters, historical cost data, time estimates, and quality metrics. Additional inputs include regulatory compliance requirements and service-specific constraints. The algorithm produces a baseline token value and a comprehensive Service Passport documenting all valuation parameters and decisions.
The process begins with parameter decomposition, breaking down the three primary components into their constituent elements.
A normalization phase follows to ensure comparability across different services. Each parameter is normalized against the maximum value observed across all services.
The algorithm then applies service-specific weights to these normalized parameters. Weight determination follows the AHP methodology, incorporating expert judgments and service characteristics.
The weighted, normalized parameters are then aggregated into a base token value, which undergoes two adjustment phases. First, a demand adjustment factor D modifies the token value based on market conditions.
Subsequently, a feedback adjustment is applied based on the initial service performance and user feedback.
The final output includes both the adjusted token value and a Service Passport.
The algorithm maintains an audit trail of all calculations and decisions, enabling transparency and facilitating future adjustments. The Service Passport serves as a comprehensive record of the valuation process and provides a baseline for future token value optimizations.
This baseline method ensures a systematic and transparent approach to token valuation while maintaining flexibility through its adjustment mechanisms and provides a stable foundation for more advanced optimization techniques.
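A compact sketch of this baseline pipeline is shown below. The normalization, weighting, and Service Passport record follow the steps described above; modelling the demand and feedback adjustments as multiplicative factors, and the specific numerical inputs, are assumptions made for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ServicePassport:
    """Simplified record of the valuation steps (illustrative fields only)."""
    name: str
    normalized: dict
    weights: dict
    base_value: float
    demand_factor: float
    feedback_factor: float
    token_value: float
    audit_trail: list = field(default_factory=list)

def baseline_token(name, params, maxima, weights,
                   demand_factor=1.0, feedback_factor=1.0, scale=1000):
    """Sketch of the baseline valuation pipeline. Demand and feedback are modelled
    here as multiplicative factors, which is an assumption; the text only states
    that two adjustment phases follow the weighted base value."""
    audit = []

    # 1. Normalization against the maximum value observed across all services
    normalized = {k: params[k] / maxima[k] for k in params}
    audit.append(("normalized", dict(normalized)))

    # 2. Service-specific weighting (weights obtained via AHP elsewhere in the paper)
    base_value = scale * sum(weights[k] * normalized[k] for k in weights)
    audit.append(("base_value", base_value))

    # 3. Demand adjustment, then feedback adjustment
    adjusted = base_value * demand_factor * feedback_factor
    audit.append(("adjusted_value", adjusted))

    return ServicePassport(name, normalized, weights, base_value,
                           demand_factor, feedback_factor, adjusted, audit)

# Illustrative inputs (not taken from the published case-study data)
passport = baseline_token(
    "Routine Maintenance",
    params={"cost": 1200, "time": 10, "quality": 0.9},
    maxima={"cost": 1500, "time": 20, "quality": 1.0},
    weights={"cost": 0.2, "time": 0.3, "quality": 0.5},
    demand_factor=1.05, feedback_factor=0.98,
)
print(round(passport.token_value, 1))
```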

3.1.10. Adaptive Weighting Mechanism

A core enhancement of the TBDCM is an adaptive weighting mechanism that dynamically adjusts the importance of cost, time, and quality based on service performance and customer feedback. Following Animasaun et al. [28], Li et al. [29], and Wang et al. [30], this mechanism relies on statistical analysis, machine learning techniques, and optimization models to recalibrate the weight coefficients, as follows:
  • Patterns in past transactions are identified and token valuations are adjusted accordingly;
  • If multiple users rate a service’s quality as lower than expected, the model dynamically increases the weight of the quality component, ensuring a more accurate reflection of service value;
  • If service providers consistently complete tasks faster than expected, the weight of the time parameter is reduced, making tokens more responsive to real-world execution times.
Mathematically, the adaptive weights $w_c$, $w_t$, and $w_q$ for cost, time, and quality, respectively, are adjusted using the following iterative update rule:

$$w_i(t+1) = w_i(t) + \alpha \left( S_i - E_i \right)$$

where $w_i(t)$ is the weight of parameter $i$ at iteration $t$, $\alpha$ is the learning rate (a small constant that controls the rate of adaptation), $S_i$ is the realized service performance, and $E_i$ is the expected service performance.

After each update, the weights are normalized so that $w_c + w_t + w_q = 1$.
This iterative approach allows for smooth adaptation to changing operational conditions while preventing drastic fluctuations in token values.
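A minimal sketch of one update step is given below. The update rule and the unity-sum normalization follow the equations above; the non-negativity clamp, the learning rate value, and the example performance signals are assumptions added for illustration.

```python
def update_weights(weights, realized, expected, alpha=0.05):
    """One iteration of the adaptive rule w_i <- w_i + alpha * (S_i - E_i),
    followed by renormalization so that the weights sum to one.
    S_i and E_i are assumed to be expressed on a comparable [0, 1] scale."""
    updated = {k: weights[k] + alpha * (realized[k] - expected[k]) for k in weights}
    updated = {k: max(v, 0.0) for k, v in updated.items()}   # assumed clamp to keep weights non-negative
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}

weights = {"cost": 0.3, "time": 0.3, "quality": 0.4}
realized = {"cost": 0.9, "time": 0.5, "quality": 0.7}   # observed performance signals (illustrative)
expected = {"cost": 0.7, "time": 0.6, "quality": 0.8}   # expectations / benchmarks (illustrative)
print(update_weights(weights, realized, expected))
```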

3.1.11. Machine Learning for Token Optimization

A further improvement is the integration of ML models to predict token values based on historical performance data and real-time service attributes. The model is trained using features such as past token values, service completion time, quality scores from user feedback, and market demand trends and external factors such as service availability and seasonal variations.
The ML model learns complex relationships between these factors and optimizes token values to maintain fairness and efficiency. Reinforcement learning (RL) techniques can further refine this process by continuously experimenting with token adjustments and learning which pricing strategies yield optimal customer satisfaction and provider efficiency.
The RL-based model continuously updates token values based on real-world service interactions. Using a reward function that balances provider profitability and customer satisfaction, RL models adjust token values to optimize service allocation. The reward function $R(s, a)$, where $s$ denotes the system state and $a$ the token adjustment action, is defined as follows:

$$R(s, a) = \beta_1 \left( U_c - U_{c,\mathrm{target}} \right) + \beta_2 \left( P_p - P_{p,\mathrm{target}} \right)$$

where $U_c$ is the user satisfaction score, $U_{c,\mathrm{target}}$ is the desired satisfaction score threshold, $P_p$ is the provider profitability, $P_{p,\mathrm{target}}$ is the target profitability level, and $\beta_1$ and $\beta_2$ are the weights assigned to user satisfaction and provider profitability, respectively.
Through continuous learning, the RL model refines token valuations, ensuring long-term equilibrium between service affordability, fairness, and business sustainability.
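The reward function can be sketched directly from the definition above. The target levels, the $\beta$ weights, and the greedy comparison of candidate adjustments shown here are illustrative assumptions standing in for a fully trained RL policy, which the text does not specify in detail.

```python
def reward(user_satisfaction, provider_profit,
           satisfaction_target=0.80, profit_target=0.15,
           beta_1=0.6, beta_2=0.4):
    """R(s, a) = beta_1 * (U_c - U_c_target) + beta_2 * (P_p - P_p_target).
    Targets and beta weights are illustrative defaults, not values from the paper."""
    return (beta_1 * (user_satisfaction - satisfaction_target)
            + beta_2 * (provider_profit - profit_target))

# Evaluate two candidate token adjustments by their simulated outcomes and keep
# the one with the higher reward (a greedy stand-in for a learned policy).
candidates = {
    "lower_token_by_5%": {"user_satisfaction": 0.86, "provider_profit": 0.12},
    "keep_token":        {"user_satisfaction": 0.78, "provider_profit": 0.16},
}
best = max(candidates, key=lambda k: reward(**candidates[k]))
print(best, round(reward(**candidates[best]), 3))
```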

3.1.12. Use Case Illustration

To validate the effectiveness of the adaptive token model, we conducted a simulation of 50 service iterations. The service used in this case study was routine maintenance within the ATSaaS platform.
The reasoning for this service selection is as follows:
  • Routine maintenance involves labor, materials, and operational overhead, making it a suitable candidate for token-based valuation adjustments;
  • The execution time for routine maintenance can fluctuate based on aircraft type, maintenance complexity, and technician availability, making dynamic weight adjustment relevant;
  • Maintenance services are regularly evaluated based on effectiveness, compliance with safety standards, and customer feedback, aligning well with the adaptive weighting mechanism used in the model.
Thus, the routine maintenance service was used in this simulation to showcase how token values are dynamically adjusted in response to historical service data, real-time feedback, and operational performance.
Initially, cost, time, and quality weights were set at 30%, 30%, and 40%, respectively. The routine maintenance service costs ranged between USD 500 and USD 1500, the execution time varied between 5 and 20 h, and the service quality scores ranged from 0.6 to 1.0.
Over multiple iterations, the adaptive weighting mechanism adjusted the importance of cost, time, and quality based on real-time service performance feedback.
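The iteration loop for this use case can be sketched as follows, combining the adaptive update rule of Section 3.1.10 with the use-case parameter ranges. The expectation levels, learning rate, and token scaling are assumptions chosen for illustration; the resulting trajectories are qualitatively comparable to those shown in Figures 12 and 13, not reproductions of them.

```python
import random

random.seed(7)

weights = {"cost": 0.3, "time": 0.3, "quality": 0.4}     # initial weights from the use case
expected = {"cost": 0.65, "time": 0.6, "quality": 0.85}  # assumed expectation levels
alpha, scale = 0.05, 1000                                 # assumed learning rate and token scale
maxima = {"cost": 1500, "time": 20, "quality": 1.0}

token_history, weight_history = [], []
for _ in range(50):                                       # 50 service iterations
    observed = {
        "cost": random.uniform(500, 1500) / maxima["cost"],
        "time": random.uniform(5, 20) / maxima["time"],
        "quality": random.uniform(0.6, 1.0) / maxima["quality"],
    }
    # Adaptive update followed by renormalization (Section 3.1.10)
    weights = {k: weights[k] + alpha * (observed[k] - expected[k]) for k in weights}
    total = sum(weights.values())
    weights = {k: v / total for k, v in weights.items()}

    token_history.append(scale * sum(weights[k] * observed[k] for k in weights))
    weight_history.append(dict(weights))

print("final weights:", {k: round(v, 3) for k, v in weights.items()})
print("final token value:", round(token_history[-1], 1))
```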
To illustrate the impact of the adaptive weighting mechanism on the token value calculation process, we visualize how the weights for cost, time, and quality evolve over multiple service iterations. This provides insight into how the model dynamically rebalances the relative importance of these parameters based on real-time service performance feedback.
Figure 12 demonstrates how the weights for cost, time, and quality are adjusted iteratively as new data are incorporated.
The initial weights were set at 30% for cost, 30% for time, and 40% for quality. Over the iterations, the adaptive model modifies these weights to reflect real-time service performance and customer feedback trends.
Figure 13 showcases how token values fluctuate over multiple service iterations in response to real-time adjustments in cost, time, and quality weights.
The graph highlights periods of increased and decreased token valuations, indicating how the self-learning mechanism ensures pricing fairness and efficiency based on evolving service performance.
The simulation demonstrated that when service costs exceeded expected values, the cost parameter weight increased to reflect its greater impact on token valuation. Conversely, when quality improved significantly, its weight was dynamically reduced to balance overall fairness.
The token values fluctuated accordingly, showing an increasing trend when cost and time exceeded expected benchmarks and a stabilizing effect when quality met or exceeded its expected threshold. This dynamic adaptation ensured that service valuation remained responsive to actual performance, enhancing fairness and efficiency.
The results also confirmed that reinforcement learning-based token adjustments successfully optimized pricing, ensuring providers received appropriate compensation while maintaining user satisfaction. The model’s ability to self-adjust and normalize over time suggests its robustness in managing diverse aviation technical support services efficiently.
This use case illustrates the power of machine learning-enhanced token valuation in ensuring dynamic, fair, and efficient service transactions, further strengthening the TBDCM as a scalable and transparent framework for aviation technical support platforms.

3.1.13. Dynamic Token Value Optimization Algorithm

The advanced token value optimization algorithm provides a comprehensive framework for dynamically adjusting token values based on multiple data sources and learning mechanisms (Figure 14).
The algorithm takes as input an initial token value, historical service data, real-time performance metrics, and user feedback data. Additional parameters include a learning rate and performance thresholds for user satisfaction and provider profitability.
The initialization phase establishes the foundational elements of the optimization process. Initial weights are set for cost, time, and quality parameters. The algorithm initializes both a machine learning model for prediction and a reinforcement learning policy for optimization. A convergence flag and iteration counter are established to control the optimization loop.
The main optimization process operates iteratively until either the convergence criteria are met or a maximum iteration limit is reached. Each iteration consists of three major components working in concert: an adaptive weight mechanism, a machine learning optimization phase, and a reinforcement learning component.
The adaptive weight mechanism begins by analyzing historical performance data, extracting relevant metrics such as service completion times, quality scores, and cost-efficiency measures. Weight updates follow a gradient-based approach.
The machine learning optimization phase processes the collected data through feature extraction, creating a structured representation from the historical, real-time, and feedback data. The ML model is updated through training on this feature set and the current token values. This model then generates token value predictions based on the current state of the system.
The reinforcement learning component evaluates the system’s performance through a reward function. This reward function guides policy updates, which in turn determine adjustments to the predicted token values, resulting in an optimized value.
Convergence checking ensures the stability and effectiveness of the optimization process. The algorithm considers convergence achieved when the change in token value falls below a threshold and both user satisfaction and provider profitability meet or exceed their respective thresholds. This multi-criteria convergence check ensures that the optimization process achieves both stability and stakeholder satisfaction.
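A minimal sketch of this multi-criteria stop condition is shown below; the tolerance and threshold values are assumptions, since the text specifies the criteria but not their numerical settings.

```python
def converged(token_history, satisfaction, profitability,
              value_tol=1.0, satisfaction_threshold=0.8, profit_threshold=0.15):
    """Multi-criteria stop condition sketched from the description above: the change
    in token value must fall below a tolerance AND both stakeholder metrics must meet
    their thresholds (all numeric values here are assumptions)."""
    if len(token_history) < 2:
        return False
    value_stable = abs(token_history[-1] - token_history[-2]) < value_tol
    return (value_stable
            and satisfaction >= satisfaction_threshold
            and profitability >= profit_threshold)

print(converged([612.4, 611.9], satisfaction=0.83, profitability=0.17))  # True
```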
Upon successful convergence, the algorithm outputs the optimized token value and the final weight coefficients. Post-processing steps include updating the Service Passport with the new token value, logging optimization metrics for future reference, and initializing monitoring systems to track the performance of the new token value in operation.
The algorithm’s design ensures robust handling of various service types through its adaptive nature and multiple feedback mechanisms.
This advanced optimization approach significantly enhances the baseline token valuation method by incorporating dynamic learning and adaptation capabilities.

4. Discussion

4.1. Comparison with Traditional Pricing Models

The service-token model represents a significant departure from traditional pricing approaches in aviation technical support services. Understanding these differences and their implications is crucial for evaluating the model’s potential impact and benefits. Traditional pricing models in aviation technical support typically fall into several categories: fixed-price contracts, time-and-materials pricing, cost-plus arrangements, and subscription-based services. Each of these approaches has distinct characteristics and limitations that the TBDCM seeks to address.
Fixed-price contracts, commonly used for routine maintenance and standard repairs, offer predictability but lack flexibility in accommodating service variations or quality differences. These contracts often struggle to account for unexpected complications or additional requirements, leading to potential disputes and cost overruns. In contrast, the TBDCM’s dynamic token valuation system can adjust to changing service requirements while maintaining transparency in value calculation.
Time-and-materials pricing, prevalent in complex repair and consultation services, provides flexibility but often lacks standardization and can lead to unpredictable costs. This model may incentivize inefficiency, as service providers benefit from longer service durations. The TBDCM addresses this limitation through its integrated approach to valuing time, cost, and quality parameters, creating incentives for efficient service delivery while maintaining quality standards.
Cost-plus arrangements, typically used for complex or uncertain projects, ensure provider compensation but may lead to cost inflation and reduced efficiency incentives. These arrangements often lack transparency and can create misaligned interests between providers and clients. The TBDCM’s transparent calculation methodology and quality-linked token values help align stakeholder interests and promote cost-effectiveness.
Table 7 provides a comparative analysis of these pricing models across key operational parameters.
The TBDCM offers several distinctive advantages over traditional models, as follows:
  • Unlike traditional models that often treat cost, time, and quality as separate considerations, the TBDCM integrates these parameters into a unified valuation framework. This integration enables more balanced and comprehensive service evaluation;
  • Traditional pricing models typically require contract renegotiation or amendments to accommodate changes in service requirements or quality levels. The TBDCM’s token-based approach allows for dynamic value adjustments based on real-time performance and feedback;
  • While traditional models often struggle to standardize pricing across diverse services, the TBDCM achieves standardization through its token framework while maintaining flexibility through parameter weighting and quality adjustments;
  • Traditional models often lack direct mechanisms for linking service quality to pricing. The TBDCM explicitly incorporates quality metrics into token value calculations, creating stronger incentives for high-quality service delivery;
  • Traditional pricing models often face challenges in scaling across different service types or operating contexts. The TBDCM’s standardized framework facilitates easier integration of new services and adaptation to different operational environments;
  • The TBDCM’s comprehensive feedback mechanism and quality metrics provide better tools for tracking and improving service performance compared to traditional models, which often lack systematic performance monitoring.
The comparative analysis demonstrates that while traditional pricing models have served the aviation technical support industry, they increasingly struggle to meet the demands of modern service delivery. The TBDCM addresses many limitations of traditional approaches while introducing new capabilities for value assessment, quality integration, and performance management. However, successful implementation requires careful consideration of the transition challenges and stakeholder needs.

4.2. Benefits of the Service-Token Model

The TBDCM represents a groundbreaking approach to service evaluation and payment within the context of platforms like ATSaaS. By integrating cost, time, and quality into a unified framework, the model provides a transparent, scalable, and adaptable mechanism for assigning value to services. Its implementation brings a wide array of benefits to service providers, users, and platform administrators, enhancing fairness, efficiency, and trust.
One of the primary benefits of the TBDCM is its transparency in assigning value to services. By breaking down each service into measurable components and integrating them using a weighted formula, the model ensures that the rationale behind token values is clear to all stakeholders. This transparency builds trust among users and providers, as it eliminates ambiguities and biases in service valuation. Users can see exactly how their payments correspond to the resources and quality delivered, while providers are assured that their efforts are compensated.
The aviation industry encompasses a broad range of services, from routine maintenance to complex inspections and consulting. The TBDCM provides a standardized framework for evaluating these diverse offerings, allowing them to be compared on a common scale. Through normalization, the model ensures that differences in scale, complexity, and operational requirements are accounted for, making it possible to evaluate heterogeneous services fairly. This standardization simplifies decision making for users and supports providers in benchmarking their offerings against industry standards.
The TBDCM is highly adaptable to the evolving needs of the platform and its users. By allowing the weights assigned to cost, time, and quality to be adjusted, the model accommodates shifts in stakeholder priorities and market conditions. For example, a platform can emphasize quality metrics for premium services while focusing on cost efficiency for routine tasks. Additionally, the integration of real-time feedback mechanisms enables dynamic adjustments to token values, ensuring that they remain aligned with actual service performance. This adaptability makes the model suitable for a wide range of industries and service types.
The model promotes active engagement between users, providers, and platform administrators. Users are encouraged to provide feedback on service quality, knowing that their input directly impacts token adjustments. This feedback loop not only ensures that token values accurately reflect real-world performance but also fosters a sense of ownership and trust in the platform. Providers, on the other hand, are incentivized to maintain high standards, as consistent quality improvements lead to better token valuations. Administrators benefit from a clear framework for managing services and resolving disputes, reducing friction in platform operations.
As platforms like ATSaaS expand, they need a valuation model that can handle increasing numbers of services and users without compromising accuracy or fairness. The TBDCM is inherently scalable, thanks to its standardized approach and reliance on normalized metrics. New services can be seamlessly integrated into the system by following the same evaluation process, ensuring consistency across the platform. Additionally, the model’s automated workflows for token calculation and feedback integration allow it to scale efficiently with minimal administrative overhead.
The TBDCM generates a wealth of data about service performance, user preferences, and market trends. These data can be analyzed to inform strategic decisions, such as optimizing resource allocation, identifying service gaps, or tailoring offerings to meet user needs. For administrators, the model provides actionable insights into platform operations, enabling them to refine policies, improve efficiency, and enhance user satisfaction.
As the model evolves to integrate advanced technologies and feedback mechanisms, its benefits will only grow, solidifying its role as a cornerstone of modern service platforms.

4.3. Challenges and Limitations of the Service-Token Model

While the TBDCM offers numerous benefits, its implementation is not without challenges and limitations. These hurdles arise from the complexity of evaluating diverse services, integrating feedback, and maintaining fairness in token value assignments.
Quality is a key parameter in the TBDCM, encompassing customer satisfaction, reliability, effectiveness, and complexity. However, many of these metrics are subjective and reliant on user perceptions, which can vary widely. For instance, one user might rate a service highly for speed, while another might prioritize thoroughness. These subjective differences can introduce biases and inconsistencies in the model, making it difficult to achieve a universally accepted quality score.
While user feedback is critical for refining token values, it also introduces several challenges. Collecting sufficient and reliable feedback can be difficult, especially for services with low user engagement or short lifecycles. Users may be unwilling to provide detailed feedback, resulting in data gaps that undermine the accuracy of quality adjustments. Additionally, feedback can be influenced by individual biases, such as unrealistic expectations or isolated negative experiences, which may skew the overall evaluation.
The model’s ability to dynamically adjust token values based on feedback is one of its strengths, but it also presents risks of overcorrection. Frequent or excessive adjustments can destabilize token valuations, creating uncertainty for both users and providers. For example, if a single round of negative feedback disproportionately impacts the quality parameter, it may unfairly devalue the service. Striking a balance between responsiveness and stability is a critical yet challenging aspect of the model’s implementation.
Although the model is designed to handle a wide range of services, scaling it to accommodate a rapidly growing platform presents challenges. As the number of services and users increases, maintaining consistency and fairness in token value assignments becomes more difficult.
Introducing a token-based valuation model may face resistance from stakeholders who are accustomed to traditional pricing methods. Building trust and acceptance among stakeholders requires extensive education, communication, and demonstration of the model’s benefits, which can be time intensive.
For platforms that already operate under traditional pricing models, transitioning to the TBDCM may involve significant integration challenges. Existing systems may need to be restructured to accommodate token-based payments, requiring updates to software, databases, and user interfaces. Ensuring a seamless transition without disrupting ongoing operations is a complex and critical task.
With continuous improvement and thoughtful implementation, the TBDCM can overcome its limitations and deliver long-term value to all stakeholders.

4.4. Future Directions of Research on the Service-Token Model

The TBDCM has emerged as a transformative framework for evaluating, pricing, and managing services, particularly in platforms like ATSaaS. However, to ensure its continued relevance and adaptability, future research must address evolving technological, market, and stakeholder needs. By exploring areas such as automation, data integration, user-centric customization, and expanded industry applications, the model can evolve into an even more robust and versatile system.
One of the most promising directions for research is the integration of automation and AI. Automating key processes such as data collection, token calculation, and feedback analysis can significantly reduce manual effort and improve efficiency. AI-powered tools can identify patterns in user feedback, predict quality adjustments, and dynamically optimize parameter weights based on real-time conditions. These advancements would allow the model to scale effortlessly, even on platforms handling a high volume of services and users.
Blockchain technology is another area of innovation that can enhance the model’s transparency and security. By using blockchain’s decentralized and immutable nature, token calculations and service performance data can be recorded on a tamper-proof ledger. Smart contracts could automate token value adjustments based on predefined rules, while cross-platform interoperability can be enabled by allowing tokens to be exchanged across multiple ecosystems. This integration would build trust among stakeholders and provide verifiable transparency in service valuation.
Improving quality feedback mechanisms is also a critical area for research. Current methods relying on surveys and ratings often suffer from subjectivity and inconsistency. Future advancements could include the use of IoT devices to capture real-time service performance data, natural language processing to analyze user comments, and standardized industry-specific metrics to reduce variability. These innovations would result in more objective and accurate quality evaluations, ensuring that token values reflect the true performance of services.
Personalization and user-centric adaptations represent another vital research direction. As platforms grow more diverse, users increasingly demand tailored experiences. Future developments could include adaptive weight distributions that allow users to prioritize cost, time, or quality based on their individual preferences. Personalized token adjustments could also account for user history, loyalty, and behavior, creating a more engaging and satisfying experience. Additionally, real-time interfaces could provide users with transparency into how their preferences influence token values, fostering trust and empowerment.
The model’s potential for expansion to multi-service platforms offers exciting possibilities. While the model has been successfully applied to focused domains like ATSaaS, future research could explore its scalability across industries, such as healthcare, logistics, and education. Adapting normalization methods to accommodate extreme variations in service complexity and developing cross-industry benchmarks would enable the model to evaluate and compare highly diverse services fairly. Token interoperability across domains could further enhance the model’s utility, enabling unified value systems for multi-service ecosystems.
Predictive analytics offers significant potential for proactive adjustments to token values. By using historical data and advanced analytics, the model could anticipate changes in service performance or user feedback, enabling adjustments before issues arise. Predictive models could forecast service demand, identify potential quality issues, and optimize resource allocation, improving platform efficiency and user satisfaction.
To ensure the model’s long-term relevance, establishing continuous learning frameworks is essential. These frameworks would allow the model to adapt iteratively to changing market dynamics and user expectations. Periodic recalibration of weights, integration of real-time feedback loops, and machine learning systems that autonomously identify and implement improvements are key areas of focus. Such frameworks would future-proof the model, ensuring its applicability in dynamic and competitive environments.
The future of the TBDCM lies in its ability to evolve with technological advancements, changing market demands, and stakeholder priorities. Research into automation, blockchain integration, enhanced feedback mechanisms, sustainability metrics, and predictive analytics will enhance the model’s adaptability, scalability, and transparency. Addressing ethical concerns, enabling personalization, and fostering collaboration will ensure its long-term success. These directions not only strengthen the model’s utility but also position it as a cornerstone of modern service platforms, capable of driving innovation, fairness, and efficiency across industries.

5. Conclusions

This study introduced a comprehensive token-based digital currency model for an Aviation Technical Support as a Service platform, addressing critical challenges in service standardization, valuation, and quality assessment within the aviation maintenance ecosystem. This research makes several key contributions.
The TBDCM establishes a structured methodology for service valuation through three primary parameters—cost, time, and quality—integrated into a unified token-based framework. This approach enables the following:
  • Transparent and standardized evaluation of diverse aviation services;
  • Fair comparison among heterogeneous services through normalization techniques;
  • Dynamic adjustment of token values based on real-time feedback and service performance;
  • Enhanced operational efficiency through structured digital documentation via the Service Passport.
The numerical case study demonstrated the model’s practical applicability across various MRO-oriented services, confirming its ability to effectively balance cost-efficiency, performance quality, and customer satisfaction. The results indicate that the token-based approach outperforms traditional pricing models in terms of transparency, adaptability, and stakeholder alignment.
The implementation of the TBDCM offers significant benefits for aviation stakeholders, as follows:
  • For small airlines—improved access to technical support services through transparent pricing and quality assurance;
  • For service providers—enhanced ability to standardize offerings while receiving fair compensation based on service complexity and quality;
  • For platform administrators—robust framework for service management, quality monitoring, and continuous improvement.
The model’s flexibility makes it particularly valuable for ATSaaS platforms seeking to support diverse service categories while maintaining operational efficiency and customer trust.
Building on the foundation established in this study, several promising avenues for future research emerge, as follows:
  • Integration of advanced AI and machine learning techniques to automate token value calculations and predict service performance;
  • Exploration of blockchain technology to enhance transparency, security, and immutability of service records and token transactions;
  • Development of more sophisticated quality metrics and feedback mechanisms to reduce subjectivity in service evaluation;
  • Investigation of cross-platform token interoperability to create unified service ecosystems;
  • Application of the model to adjacent industries with similar standardization challenges.
Future work should also address the identified limitations, particularly regarding the subjectivity of quality metrics, potential feedback biases, and implementation challenges in existing operational environments.
The proposed token-based digital currency model offers a transformative approach to service valuation in ATSaaS platforms, contributing to greater transparency, operational efficiency, and stakeholder trust in aviation technical support services.

Author Contributions

Conceptualization, I.K.; methodology, I.K.; software, M.P.; validation, V.P. and M.P.; formal analysis, I.K.; investigation, I.K., V.P. and M.P.; resources, V.P. and M.P.; data curation, V.P. and M.P.; writing—original draft preparation, I.K.; writing—review and editing, I.K., V.P. and M.P.; visualization, I.K.; supervision, I.K.; project administration, I.K.; funding acquisition, I.K., V.P. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors V.P. and M.P. were employed by the company Sky Net Technics. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Kabashkin, I.; Perekrestov, V. Concept of Aviation Technical Support as a Service. Transp. Telecommun. 2023, 24, 471–482. [Google Scholar] [CrossRef]
  2. Park, K.; Youm, H.-Y. Proposal of a Service Model for Blockchain-Based Security Tokens. Big Data Cogn. Comput. 2024, 8, 30. [Google Scholar] [CrossRef]
  3. Erel-Özçevik, M. Token as a Service for Software-Defined Zero Trust Networking. J. Netw. Syst. Manag. 2025, 33, 10. [Google Scholar] [CrossRef]
  4. Rivera, J.J.D.; Akbar, W.; Khan, T.A.; Muhammad, A.; Song, W.-C. Zt &t: Secure service session management using blockchain-based tokens in zero trust networks. Ann. Telecommun. 2024, 79, 487–505. [Google Scholar] [CrossRef]
  5. Kim, M.S.; Chung, J.Y. Sustainable Growth and Token Economy Design: The Case of Steemit. Sustainability 2019, 11, 167. [Google Scholar] [CrossRef]
  6. Tasca, P. Token-Based Business Models. In Disrupting Finance. Palgrave Studies in Digital Business & Enabling Technologies; Lynn, T., Mooney, J., Rosati, P., Cummins, M., Eds.; Palgrave Pivot: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  7. Chen, C.-P.; Huang, K.-W.; Kuo, Y.-C. Conditional Token: A New Model to Supply Chain Finance by Using Smart Contract in Public Blockchain. FinTech 2023, 2, 170–204. [Google Scholar] [CrossRef]
  8. Udokwu, C. Formalizing and Simulating the Token Aspects of Blockchain-Based Research Collaboration Platform Using Game Theory. Mathematics 2024, 12, 3252. [Google Scholar] [CrossRef]
  9. Viriyasitavat, W.; Xu, L.D.; Bi, Z. Specification Patterns of Service-Based Applications Using Blockchain Technology. IEEE Trans. Comput. Soc. Syst. 2020, 7, 886–896. [Google Scholar] [CrossRef]
  10. Matsuura, K. Token Model and Interpretation Function for Blockchain-Based FinTech Applications. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2019, E102.A, 3–10. [Google Scholar] [CrossRef]
  11. Büttgen, M.; Dicenta, J.; Spohrer, K.; Venkatesh, V.; Raman, R.; Hoehle, H.; De Keyser, A.; Verbeeck, C.; Zwienenberg, T.J.; Jørgensen, K.P.; et al. Blockchain in Service Management and Service Research—Developing a Research Agenda and Managerial Implications. SMR J. Serv. Manag. Res. 2021, 5, 71–102. [Google Scholar] [CrossRef]
  12. European Union Aviation Safety Agency. Part-M. Available online: https://www.easa.europa.eu/en/the-agency/faqs/part-m (accessed on 10 January 2025).
  13. European Union Aviation Safety Agency. Part-145. Available online: https://www.easa.europa.eu/en/the-agency/faqs/part-145 (accessed on 10 January 2025).
  14. European Union Aviation Safety Agency. Part-66. Available online: https://www.easa.europa.eu/en/the-agency/faqs/part-66 (accessed on 10 January 2025).
  15. European Union Aviation Safety Agency. Acceptable Means of Compliance (AMC) and Guidance Material (GM). Available online: https://www.easa.europa.eu/en/document-library/acceptable-means-of-compliance-and-guidance-materials (accessed on 10 January 2025).
  16. Federal Aviation Administration. 14 CFR Part 43. Available online: https://www.ecfr.gov/current/title-14/chapter-I/subchapter-C/part-43 (accessed on 10 January 2025).
  17. Federal Aviation Administration. Advisory Circulars (ACs). Available online: https://www.faa.gov/regulations_policies/advisory_circulars/ (accessed on 10 January 2025).
  18. Federal Aviation Administration. Continuous Airworthiness Maintenance Program (CAMP). Available online: https://www.ecfr.gov/current/title-14/chapter-I/subchapter-F/part-91/subpart-K/subject-group-ECFRc17623c0e0be17e/section-91.1411 (accessed on 10 January 2025).
  19. Federal Aviation Administration. Repair Station Regulations (14 CFR Part 145). Available online: https://www.ecfr.gov/current/title-14/chapter-I/subchapter-H/part-145?toc=1 (accessed on 10 January 2025).
  20. International Air Transport Association. IATA Operational Safety Audit (IOSA). Available online: https://www.iata.org/en/programs/safety/audit/iosa/ (accessed on 10 January 2025).
  21. International Air Transport Association. Airline Operational Cost Management Guidelines. Available online: https://www.iata.org/en/publications/manuals/airline-operational-cost-management-guidelines/ (accessed on 10 January 2025).
  22. International Air Transport Association. Digital Think Tank White Paper; International Air Transport Association: Montreal, QC, Canada, 2022; Available online: https://www.iata.org/contentassets/a46387f9bc6b42368c0a72664f6f930f/digital_think-tank_white-paper_2022.pdf (accessed on 10 January 2025).
  23. Ayan, B.; Abacıoğlu, S.; Basilio, M.P. A Comprehensive Review of the Novel Weighting Methods for Multi-Criteria Decision-Making. Information 2023, 14, 285. [Google Scholar] [CrossRef]
  24. Ochieng, P.J.; London, A.; Krész, M. A Forward-Looking Approach to Compare Ranking Methods for Sports. Information 2022, 13, 232. [Google Scholar] [CrossRef]
  25. Saaty, T.L. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation; McGraw-Hill: New York, NY, USA, 1980. [Google Scholar]
  26. He, D.; Xu, J.; Chen, X. Information-Theoretic-Entropy Based Weight Aggregation Method in Multiple-Attribute Group Decision-Making. Entropy 2016, 18, 171. [Google Scholar] [CrossRef]
  27. Relich, M. A Data-Driven Approach for Improving Sustainable Product Development. Sustainability 2023, 15, 6736. [Google Scholar] [CrossRef]
  28. Animasaun, I.L.; Taseer, M.; Yook, S.-J. Exploration of Half-Cycle Length of Converging Circular Wavy Duct with Diverging-Outlet: Turbulent Water Dynamics. Adv. Theory Simul. 2025, 8, 2500038. [Google Scholar] [CrossRef]
  29. Li, L.; Animasaun, I.L.; Koriko, O.K.; Taseer, M.; Elnaqeeb, T. Insight into Turbulent Reynolds Number at the Regular, Converging, and Diverging Outlets: Dynamics of Air, Water, and Kerosene through Y-Shaped Cylindrical Copper Ducts. Int. Commun. Heat Mass Transf. 2024, 159, 108044. [Google Scholar] [CrossRef]
  30. Wang, F.; Animasaun, I.L.; Al Shamsi, D.M.; Taseer, M.; Ali, A. Transient Cold-Front-Water through Y-Shaped Aluminium Ducts: Nature of Turbulence, Non-Equilibrium Thermodynamics, and Velocity at the Converged and Diverged Outlets. J. Non-Equilib. Thermodyn. 2024, 49, 485–512. [Google Scholar] [CrossRef]
Figure 1. Main components of the model’s parameters.
Figure 2. Pairwise comparisons by experts.
Figure 3. Geometric means of the pairwise comparisons (step 1).
Figure 4. Aggregated pairwise matrices (step 2).
Figure 5. Normalized matrices (step 3).
Figure 6. Summary of the final weighting factors.
Figure 7. Workflow for initial token value definition in the service-token model.
Figure 8. Workflow for token value adjustment based on user feedback.
Figure 9. Weight evolution during validation.
Figure 10. Token value trend.
Figure 11. Algorithm of the baseline token value determination.
Figure 12. Evolution of the weights in the adaptive token calculation.
Figure 13. Trend in the token values over iterations.
Figure 14. Algorithm of dynamic token value optimization.
Table 1. Initial data for all services.

| Service | Cost (USD): Labor | Cost (USD): Materials | Cost (USD): Overheads | Time (h) | Quality Metrics |
|---|---|---|---|---|---|
| Component Repair | 600 | 200 | 100 | | |
| Routine Maintenance | 1250 | 150 | 200 | | |
| Inspection Services | 600 | 200 | 50 | | |
| Consulting | 1000 | - | 200 | | |
| Training | 640 | 100 | - | | |
| Document Processing | 500 | - | 50 | | |
Table 2. Normalized values of parameters.

| Service | Normalized Cost | Normalized Time | Normalized Quality |
|---|---|---|---|
| Component Repair | 0.562 | 0.6 | 1.0 |
| Routine Maintenance | 1.0 | 1.0 | 0.976 |
| Inspection Services | 0.531 | 0.4 | 0.937 |
| Consulting | 0.75 | 0.933 | 1.0 |
| Training | 0.462 | 0.4 | 0.934 |
| Document Processing | 0.312 | 0.267 | 0.934 |
Table 3. Weightings for the different service types.

| Service Type | Weight for Cost | Weight for Time | Weight for Quality | Explanation |
|---|---|---|---|---|
| Component Repair | 0.3 | 0.2 | 0.5 | Quality is critical because of the safety and reliability concerns. Cost is moderate, as repairs involve parts and labor, while time is less critical unless urgent. |
| Routine Maintenance | 0.2 | 0.3 | 0.5 | Quality remains important to ensure compliance and reliability. Cost and time are equally weighted because of regularity and resource requirements. |
| Inspection Services | 0.2 | 0.2 | 0.6 | Quality is paramount as inspections directly impact safety and regulatory compliance. Cost and time have less influence compared to accuracy and thoroughness. |
| Consulting | 0.4 | 0.2 | 0.4 | Both cost and quality are significant. Cost is critical for resource allocation, and quality ensures the consulting delivers value. Time is less critical. |
| Training | 0.3 | 0.3 | 0.4 | Quality is essential for effective knowledge transfer. Cost and time are equally important to balance affordability and efficiency in delivering the training. |
| Document Processing | 0.2 | 0.3 | 0.5 | Quality dominates because problems with the execution of documents will require checking all the work described in them. Time is important to maintain operational flow, while cost is relatively less critical. |
Table 4. Summary table with outcomes.

| Service | Tokens | Insights |
|---|---|---|
| Component Repair | 705 | Balances cost, time, and quality efficiently. |
| Routine Maintenance | 993 | Highest point value due to resource-intensive and high-quality metrics. |
| Inspection Services | 614 | Lower cost and time requirements balance its high quality. |
| Consulting | 975 | High complexity and time contribution drive the value of this expertise-rich service. |
| Training | 656 | Cost-effective and delivers strong quality performance. |
| Document Processing | 541 | Low-cost and efficient service but with lower complexity. |
Table 5. Taxonomy table for the Service Passport.

| Category | Subcategory | Description |
|---|---|---|
| Service Identification | Name | Unique identifier for the service. |
| | Description | Purpose and scope of the service. |
| | Category | Classification of service. |
| Operational Characteristics | Cost | Direct, indirect, and variable costs. |
| | Time | Preparation, execution, and support time. |
| Quality Metrics | Satisfaction | Customer feedback and customization. |
| | Reliability | Consistency and compliance. |
| | Effectiveness | Resolution success and objective achievement. |
| | Complexity | Task difficulty and resource intensity. |
| Token Assignment | Calculated Token Value | Final token value based on normalized metrics. |
| | Normalization Factors | Scaling values for comparison across services. |
| Performance Data | Historical Metrics | Past performance of the service. |
| | Feedback | Aggregated customer ratings and qualitative inputs. |
| Standardization | Compliance | Adherence to regulatory and safety standards. |
| | SLAs | Documented performance guarantees. |
| Scalability and Updates | Scalability | Ability to handle changing demand. |
| | Updates | Mechanisms for periodic revisions. |
Table 6. Summary of the validation results.

| Validation Metric | Target Value | Actual Value | Notes |
|---|---|---|---|
| MAPE from expert valuations | <15% | 12.4% | Calculated across all 12 validation services |
| Weight convergence time | <30 iterations | 27 iterations | Number of iterations to reach stability (CV < 3%) |
| Token value stability (CV) | <5% | 2.7% | Final 10 iterations |
| User satisfaction correlation | >0.7 | 0.76 | Correlation between feedback and token adjustment |
| AHP consistency ratio | <0.1 | 0.067 | Indicates logical consistency of expert judgments |
Table 7. Comparison of pricing models in aviation technical support.

| Parameter | Traditional Fixed-Price | Time-and-Materials | Cost-Plus | Service-Token Model |
|---|---|---|---|---|
| Cost Predictability | High | Low | Medium | High |
| Flexibility | Low | High | Medium | High |
| Quality Integration | Limited | Indirect | Limited | Direct |
| Transparency | Medium | Low | Low | High |
| Efficiency Incentives | Medium | Low | Low | High |
| Standardization | Low | Low | Low | High |
| Scalability | Limited | Medium | Limited | High |
| Performance Tracking | Limited | Medium | Medium | Comprehensive |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
