Article

Data-Driven Evaluation of Dynamic Capabilities in Urban Community Emergency Language Services for Fire Response

School of Foreign Studies, China University of Petroleum (East China), Qingdao 266580, China
*
Author to whom correspondence should be addressed.
Submission received: 8 November 2025 / Revised: 16 December 2025 / Accepted: 22 December 2025 / Published: 25 December 2025
(This article belongs to the Special Issue Fire Safety and Emergency Evacuation)

Abstract

The frequent occurrence of fires has prompted China to accelerate the development of community fire prevention and emergency management systems. Language, serving both communicative and affective functions by facilitating the flow of information and fostering mutual understanding, runs through the entire process of community fire emergency management. In response to the early-stage nature of this field and the lack of a systematic framework, this study constructs a dynamic capability evaluation system for urban community fire-related emergency language services (FELS) by integrating multi-source and heterogeneous data. First, by adopting a hybrid approach combining dynamic capability theory and text mining, a three-level indicator system is established. Second, based on domain knowledge, quantitative methods and scoring rules are designed for the third-level qualitative indicators to provide standardized input for the model. Third, a weighting and integration framework is developed that simultaneously considers the internal mechanism characteristics and statistical properties of indicators. Specifically, a knowledge-driven weighting approach combining FAHP and fuzzy DEMATEL is employed to characterize indicator importance and interrelationships, while the CRITIC method is used to extract data-driven weights based on data dispersion and information content. These knowledge-driven and data-driven weights are then integrated through a multi-feature fusion weighting approach. Finally, a linear weighting model is applied to combine the normalized indicator values with the integrated weights, enabling a systematic evaluation of the dynamic capabilities of community FELS. To validate the proposed framework, application tests were conducted in four representative types of urban communities, including internationally developed, aging and vulnerable, newly developed, and economically diverse communities, using fire emergency scenarios as the entry point.
The external validity and internal robustness of the proposed model were verified through these tests. The results indicate that the evaluation system provides accurate, objective, and adaptive assessments of dynamic capabilities in FELS across different community contexts, offering a governance-oriented quantitative tool to support grassroots fire prevention and to enhance community resilience.

1. Introduction

Against the backdrop of the frequent occurrence of urban fire incidents worldwide and China’s accelerated advancement of national security governance modernization, the CPC Central Committee and the State Council have attached great importance to improving fire prevention and emergency management capabilities. Statistics show that globally, millions of fire incidents occur each year, causing more than 50,000 deaths, 170,000 injuries, and economic losses amounting to hundreds of billions of U.S. dollars. Fires have thus become one of the major disaster types threatening human life and property safety [1]. In China, with rapid urbanization, the proliferation of high-rise buildings and densely populated communities, and the increasing risks associated with electricity, gas, and logistics storage, the pressure on fire emergency management continues to grow. In response, China has successively issued a series of policy documents to continuously improve its fire prevention system and social fire governance capacity. The Fire Protection Law of the People’s Republic of China explicitly requires governments at all levels to incorporate fire safety into urban and rural planning and community governance systems, implement fire prevention responsibility mechanisms, and strengthen grassroots fire protection networks. The 14th Five-Year Plan for the National Emergency Management System further proposes promoting the modernization of fire and rescue services and enhancing community-level emergency response and social mobilization capabilities. Local governments are also actively exploring new models of urban fire governance; for instance, Shanghai, Beijing, and Shenzhen have established intelligent fire monitoring and early warning platforms, promoted community-based fire safety accountability systems, and implemented public fire risk assessment mechanisms. 
As the national policy framework becomes increasingly refined and local innovations deepen, building a scientific and efficient urban community fire emergency governance system has become a key pathway to advancing the modernization of China’s public safety governance.
Language plays dual communicative and affective roles in responding to emergencies such as fires. It not only facilitates the effective transmission of information and safeguards lives but also guides public opinion and helps maintain social stability [2,3,4]. In fire emergencies, emergency language services provide rapid, accurate, and accessible linguistic support that bridges the “last mile” between command dissemination and public response, ensuring efficient information flow and cross-departmental coordination. Their scope encompasses multilingual and dialect translation for fire response, sign language and assisted communication, disaster-related language software development, fire situation reporting, linguistic resource allocation, formulation of emergency language standards, on-site language training, and psychological language intervention [5]. This system covers the entire process of fire emergency management, including prevention, response, and recovery [6], and involves multiple levels of actors such as national, local, and community authorities. At the community level in particular, the institutionalization and digitalization of emergency language services have become essential for improving the efficiency of fire risk communication, strengthening grassroots emergency response capacity, and enhancing public resilience.
However, how to scientifically and rationally evaluate the emergency language service capability of urban communities in fire scenarios and how to identify weaknesses and optimize the allocation of emergency language resources based on the evaluation results remain pressing challenges. Most existing studies construct evaluation index systems guided by expert judgment, academic research, or policy standards. Their research themes largely focus on national or macro-level aspects such as emergency language service policies and planning, talent cultivation, and medical applications, while insufficient attention has been paid to the emergency language service systems of urban communities in fire emergencies. Li et al. [7], from an experiential perspective, constructed a national emergency language service capability framework along four dimensions: emergency phases, language tasks, non-linguistic resources, and types of emergency language. Wang [8] developed a national emergency language service capability evaluation indicator system from three perspectives: response, resources and technologies, and supporting capability. Teng [9], who drew on the MEDTO competency framework, built a talent evaluation system for emergency language services and explained the components and interrelations of the emergency language education system across seven dimensions. Wang et al. [10] proposed an emergency language education framework grounded in policy, practice, and academic evidence. Brandenberger et al. [11], guided by the Standards for Reporting Qualitative Research (SRQR) and the Consolidated Criteria for Reporting Qualitative Research (COREQ), assessed the accuracy of Google Translate (GT) in pediatric emergency settings and examined the legal and policy implications of using AI-based language interpretation in healthcare. Elmhadhbi et al. [12], basing their study on the Emergency Data Exchange Language (EDXL) standards, evaluated the accuracy of semantic translation during disaster response to address communication difficulties arising from the cooperation of multiple emergency responder organizations (EROs).
However, existing studies mostly remain at the stage of constructing and describing evaluation indicator systems for emergency language service capabilities, lacking a scientifically validated framework specifically designed for assessing the FELS capabilities of urban communities. Due to the limited engagement of traditional research with frontline practice, the construction of current indicator systems is often highly subjective and fails to fully reflect the connotations of emergency language service capability. For example, under the indicator “emergency response speed,” the sub-indicators “emergency response time” and “emergency event handling success rate” [13] reflect the performance of emergency governance in general rather than accurately capturing the specific effectiveness of emergency language services. Moreover, quantitative data on emergency language services are extremely scarce in policy documents, which leads to potential bias in evaluation. Furthermore, existing indicator systems do not cover relevant capability indicators within the scope of FELS, such as risk early warning and demand identification before emergencies, information dissemination and cross-department coordination during emergencies, and evaluation and feedback after emergencies. As a result, the role of emergency language services in supporting fire risk prevention, early warning, and coordinated response at the community level remains insufficiently assessed.
In methodological choices and evaluation model construction, scholars employ both subjective and objective approaches. On the subjective side, Wang [14] applied the Delphi method to determine the weights of a language service competitiveness index, while Guan et al. [15] used the AHP method to allocate indicators and build an index framework for urban community emergency management capacity against infectious diseases. These methods capture expert cognition and indicator logic but tend to neglect empirical community data and are vulnerable to subjective bias. On the objective side, facilitated by advances in data collection and digital governance, Li et al. [7] adopted the DEMATEL method to analyze factors influencing community emergency management from a compound risk perspective. Wang and Xie [13] extracted data across policy, technology, and human resource dimensions, applied the entropy weight method to assign weights, and computed a composite index to assess emergency language service capacity. Yet, these approaches emphasize data-driven mechanisms, often overlooking the substantive meaning of indicators and lacking contextual understanding of their practical application.
In summary, research on the evaluation of emergency language service capabilities remains at an early stage and still faces significant challenges in constructing rational indicator systems and selecting scientifically sound evaluation methods. In particular, there is a lack of systematic studies focusing on the assessment of urban community FELS capabilities. More importantly, existing studies rarely position emergency language services within the broader fire risk governance chain or examine their contribution to fire risk prevention and control from a dynamic capability perspective. To address this gap, the present study develops a data-driven evaluation system for the dynamic capabilities of urban community FELS, integrating multi-source and heterogeneous data. The main contributions of this study are as follows:
(1)
Construction of a comprehensive and objective three-level indicator system. By combining dynamic capability theory with text mining, a three-level indicator system is developed, covering multiple dimensions such as language risk early warning, human resource scheduling, information dissemination, cross-department coordination, and evaluation and feedback. The system incorporates multi-source heterogeneous data from community emergency drills, management reports, volunteer training records, emergency publicity reports, and emergency plans. It also covers the full process of emergency language services, from pre-event identification to in-event response and post-event improvement.
(2)
Development of unified quantification and scoring rules. Based on domain knowledge, quantitative methods and scoring rules are formulated for the third-level qualitative indicators. All data are normalized into a dimensionless scale from 0 to 100, providing an accurate, comparable, and reusable data foundation for evaluation.
(3)
Establishment of a data-driven multi-model integrated evaluation method. By combining FAHP, fuzzy DEMATEL, CRITIC, and a multi-feature fusion weighting model, the method derives integrated knowledge-driven and data-driven weights. A linear weighting approach is then used to achieve integrated evaluation. This method has a multi-layered structure, process interpretability, and the capacity to handle complex data while providing robust assessment results.
(4)
Application testing and validation of the model. The evaluation system is applied and validated in four representative types of urban communities. The results demonstrate that the system can effectively and objectively assess the dynamic capability of community emergency language services. It also reveals the differentiated distribution of capability, which is mainly driven by key high-weight dimensions. The system shows strong governance adaptability and application potential, and it provides important support for capability diagnosis, weakness identification, and resource allocation optimization in urban community emergency language services.

2. Construction of the Evaluation Index System for Urban Community FELS Capabilities

2.1. Connotation and Framework of Dynamic Capabilities in Urban Community FELS

In fire scenarios, the emergency language service system in urban communities faces challenges characterized by high uncertainty, complexity, and heterogeneity. In the event of a fire, communities must promptly identify communication barriers among linguistically vulnerable groups [3], mobilize multi-party resources for rapid response [16], and continuously refine their language service plans and capability systems in the aftermath [17], in order to build a sustainable emergency response mechanism. Existing research on emergency language service capability has predominantly adopted a static perspective, focusing on resource optimization or quantitative expansion, which is insufficient to explain or guide continuous adaptation and capability reconstruction in dynamic environments. Therefore, it is necessary to introduce a theoretical lens that can capture adaptive cycles. Teece and Pisano [18] proposed Dynamic Capabilities Theory, which defines dynamic capability as the ability of organizations to integrate and reconfigure internal and external resources in rapidly changing environments to achieve ongoing adaptation. Its core mechanism is the cyclical process of sensing–seizing–transforming, enabling continuous renewal [19].
Accordingly, this study adopts Dynamic Capabilities Theory as the overarching analytical framework and divides community FELS capability into three sub-capabilities, sensing, seizing, and transforming, which, respectively, correspond to the three stages of pre-event preparedness, in-event response, and post-event optimization. Sensing capability refers to a community’s ability to identify, assess, and predict language service needs during fire-related emergencies, with particular attention to linguistically vulnerable groups (e.g., expatriates, ethnic minorities, the elderly, and people with hearing impairments). By establishing a multi-level information collection and analysis system [20], and combining community surveys, resident language profiles, digital platforms, and AI recognition technologies, communities can develop multilingual early-warning mechanisms that enable proactive anticipation and precise identification of potential language barriers. Seizing capability refers to the ability to respond rapidly and supply language services efficiently once needs are identified. It emphasizes the instantaneous mobilization of multi-actor resources (such as local multilingual volunteers, professional translation agencies, AI-based translation tools, and governmental channels) and cross-sectoral coordination (e.g., street offices–hospitals, communities–NGOs, governments–university volunteer groups). Service modalities include hotlines, translation apps, and multilingual LED bulletin boards. Transforming capability refers to a community’s ability to continuously adjust internal processes, optimize resource allocation, and strengthen organizational mechanisms based on feedback and environmental changes during fire-related emergency practices. 
Through systematic data archiving, resident feedback, and expert evaluations, communities support iterative improvements, advance platform and tool upgrades, and enhance organizational learning and institutional development (e.g., volunteer training, task reallocation, and improving residents’ language adaptation [21]). This promotes the normalization and institutionalization of ELS, fostering a resilient ecosystem for emergency language services.
Based on the sensing–seizing–transforming logic of dynamic capability, this study develops a conceptual framework for the dynamic capabilities of urban community FELS (Figure 1). Its operational logic is as follows: under complex and volatile internal and external environments, communities leverage existing ELS resources through the exercise of dynamic capabilities to complete an evaluation–feedback loop, continuously optimize processes and resource allocation, and reconfigure service portfolios, thereby enhancing the overall responsiveness and adaptability of the system in fire scenarios.

2.2. Construction of the Evaluation Indicator System for Urban Community FELS

2.2.1. Dataset

This study is based on original community text materials collected through field research, mainly consisting of daily records and activity summaries from community emergency management processes. The research focuses on FELS practices at the community level, characterized by strong authenticity, completeness, and timeliness. The data sources include five categories of textual materials: (1) Community Fire-related Emergency Drill Reports, covering background of drill tasks, process flows, deployment of language resources (e.g., translation volunteers, multilingual signage, interpreting equipment), and evaluation feedback; (2) Report on Community Fire Protection and Emergency Management Work, reflecting language service planning, construction of emergency response mechanisms, and services provided for key groups in daily community management; (3) Records of Language Volunteers and Fire Safety Training, including training plans, number of participants, assessment methods, and subsequent participation; (4) Community Fire-related Emergency Publicity Activity Reports, covering the distribution of multilingual publicity materials, language science lectures, and resident participation; and (5) a Community Fire-related Emergency Plan, focusing on arrangements related to language services, such as classification of service targets, resource allocation mechanisms, and the design of coordination processes.
Data collection was completed before June 2025. Field research was conducted in nine urban communities across Guangdong, Shandong, Shanxi, and Guizhou Provinces, yielding a total of 263 original text materials. These provinces were selected following the principle of typicality, based on three considerations: (1) economic gradient differences, covering both developed coastal regions and developing central and western regions; (2) geographic and climatic diversity, which enhances the representativeness of regional governance contexts; and (3) the existence of prior emergency language service practices, ensuring practical relevance and comparability. A stratified probabilistic sampling strategy was adopted, covering 12 cities, 36 districts, and 108 subdistricts. All materials were reviewed and authorized by relevant community officials for use in this academic study. Considering the differences in document structure and language expression, the research team anonymized, standardized, and semantically cleaned all texts during the data processing stage to enable effective text mining and indicator extraction. It should be noted that due to variations in management systems and information recording methods across communities, the data show certain limitations in coverage and depth. To address this, subsequent research will complement and refine the indicator system by integrating expert interviews, ensuring both scientific rigor and practical applicability.

2.2.2. Preliminary Screening of Evaluation Indicators Based on Text Mining

Based on the dynamic capability theory, this study employs text mining to preliminarily extract elements of FELS capabilities from 263 community-related documents. It should be noted that the keywords identified through text mining do not directly represent residents’ natural language expressions. Instead, they reflect institutionalized and resident-oriented emergency language practices embedded in community governance documents, such as emergency notices, public communication materials, and service records. Accordingly, text mining in this study is used as an information identification and screening tool; the specific methods are as follows:
Data preprocessing. First, the original texts are structurally cleaned by removing headers, footers, signatures, and duplicate information, and by unifying the encoding format to ensure data consistency. Then, the JIEBA segmentation tool is used to perform word segmentation and stop-word removal (e.g., common words that carry no substantive meaning). Part-of-speech tagging is further applied to identify the semantic attributes of keywords, enabling subsequent extraction of critical nouns and verbs.
Capability element extraction. The TF-IDF algorithm is applied to extract keywords that signal the presence, emphasis, or institutionalization of specific emergency language service practices from the segmentation results [22]. Combined with established rules, keywords that reflect the dynamic capability of community emergency language services are selected. To avoid interference from invalid but high-frequency words (e.g., insufficient and partially), manual review is introduced for a second round of filtering. In the end, 98 valid keywords related to FELS are obtained (Figure 2).
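The preprocessing and keyword-extraction pipeline described above can be sketched as follows. In practice, the tokens would come from JIEBA segmentation of the Chinese community documents; here, a toy pre-tokenized corpus stands in, and the sample terms and stop-word list are purely illustrative, not drawn from the study's actual data.

```python
import math
from collections import Counter

def tf_idf_keywords(docs, top_k=5, stopwords=frozenset()):
    """Rank candidate keywords across a corpus by summed TF-IDF.

    docs: list of token lists (in practice produced by a segmenter
    such as jieba for Chinese text, after stop-word removal and
    part-of-speech filtering).
    """
    n_docs = len(docs)
    # Document frequency of each term (how many documents contain it)
    df = Counter()
    for doc in docs:
        df.update(set(doc) - stopwords)
    scores = Counter()
    for doc in docs:
        tf = Counter(t for t in doc if t not in stopwords)
        total = sum(tf.values())
        for term, count in tf.items():
            # Smoothed IDF: rarer terms across documents score higher
            idf = math.log(n_docs / (1 + df[term])) + 1
            scores[term] += (count / total) * idf
    return [term for term, _ in scores.most_common(top_k)]

# Illustrative toy corpus: tokens stand in for segmented report text
docs = [
    ["drill", "translation", "volunteer", "signage", "drill"],
    ["training", "volunteer", "plan", "feedback"],
    ["warning", "translation", "hotline", "feedback", "feedback"],
]
print(tf_idf_keywords(docs, top_k=3, stopwords={"plan"}))
# -> ['feedback', 'drill', 'volunteer']
```

The ranked list would then go through the manual second-round review described above before keywords enter the candidate pool.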
On this basis, and within the “sensing-seizing-transforming” framework of Dynamic Capabilities Theory, the keywords obtained through text mining are transformed into key capability elements of the community emergency language service evaluation system. These include 16 preliminary capability elements such as language demand identification, language risk monitoring, human resource reserve, timeliness of emergency information dissemination, quality of emergency drills, optimization of emergency plans, training of language volunteers, and residents’ language adaptability (see Table 1). These capability elements serve as a candidate pool for subsequent expert-based refinement and indicator consolidation.

2.2.3. Refinement and Transformation of Capability Evaluation Indicators Based on Expert Consensus

Building on the preliminary capability elements extracted through text mining, this study further refined the indicator system through an expert consensus process grounded in Dynamic Capabilities Theory. A panel of domain experts in emergency management, community governance, and language services was invited to evaluate the 16 preliminary capability elements from the perspectives of conceptual clarity, practical relevance, measurability, and applicability at the community level.
Following a Delphi-style consultation process, capability elements with high semantic overlap or strong functional coupling (e.g., Human Resource Dispatch Response Time and Human Resource Reserve) were merged, while elements that were difficult to operationalize independently or were better represented as sub-dimensions of broader capabilities were consolidated. This process aimed to enhance the parsimony and interpretability of the indicator system without compromising theoretical completeness.
As a result, the initial 16 capability elements were systematically refined into 12 tertiary evaluation indicators, each corresponding to a distinct aspect of community FELS dynamic capability in terms of sensing, seizing, or transforming. These refined indicators are measurable, collectible, and practice-oriented, ensuring that they can effectively support subsequent quantitative evaluation and empirical analysis. The final transformation results are presented in Table 2.

2.2.4. Construction of the Evaluation Indicator System for Urban Community FELS Capability

Based on the above indicator transformation results, this study constructs an evaluation indicator system for urban community FELS capability. The system takes the dynamic capability of urban community emergency language services as the first-level indicator, under which three second-level indicators are defined: sensing capability, seizing capability, and transforming capability. These are further refined into 12 measurable tertiary indicators, as shown in Table 3.
Sensing capability reflects the community’s ability to identify, monitor, and predict potential language service needs and risks, including four tertiary indicators: Language Demand Identification Capability (C11), Language Risk Early Warning Capability (C12), Language Service Data Support Capability (C13), and Intelligent Predictive Analysis Capability (C14). Seizing capability emphasizes the efficiency of a community’s response to actual language needs during emergencies, including four tertiary indicators: Emergency Language Human Resource Scheduling Capability (C21), Intelligent Language Technology Capability (C22), Emergency Information Dissemination Capability (C23), and Cross-Department Language Coordination Capability (C24). Transforming capability focuses on a community’s ability to learn, adapt, and optimize after emergency practices, including four tertiary indicators: Emergency Language Plan Optimization Capability (C31), Language Volunteer Training Capability (C32), Residents’ Language Adaptability Capability (C33), and Evaluation and Feedback Capability (C34). Through the construction of this three-level indicator system, the study not only enables a scientific evaluation of the overall level of dynamic capability in community FELS but also allows precise identification of weak links in capability building, thereby providing a theoretical basis and decision-making reference for formulating targeted optimization strategies.
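For later computation, the three-level hierarchy can be represented as a simple nested mapping. The capability names and codes follow the text (Table 3); the data structure itself is an implementation choice, not part of the paper's method.

```python
# Three-level indicator hierarchy (codes as in Table 3); per-community
# scores on the 0-100 scale would later be attached to each tertiary code.
INDICATOR_SYSTEM = {
    "Sensing": {
        "C11": "Language Demand Identification Capability",
        "C12": "Language Risk Early Warning Capability",
        "C13": "Language Service Data Support Capability",
        "C14": "Intelligent Predictive Analysis Capability",
    },
    "Seizing": {
        "C21": "Emergency Language Human Resource Scheduling Capability",
        "C22": "Intelligent Language Technology Capability",
        "C23": "Emergency Information Dissemination Capability",
        "C24": "Cross-Department Language Coordination Capability",
    },
    "Transforming": {
        "C31": "Emergency Language Plan Optimization Capability",
        "C32": "Language Volunteer Training Capability",
        "C33": "Residents' Language Adaptability Capability",
        "C34": "Evaluation and Feedback Capability",
    },
}
```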

2.2.5. Scoring Rules for the Evaluation Indicators of Urban Community FELS Capability

To achieve standardized evaluation of the dynamic capability of urban community emergency language services, this study establishes explicit scoring rules for the constructed three-level indicator system. The core idea of the scoring rules is to convert multi-source heterogeneous indicator data into dimensionless scores on a 0–100 scale, thus providing a standardized data foundation for subsequent evaluation.
The scoring rules were formulated through a Delphi-style expert consensus process involving experts in emergency management, community governance, fire protection, and language services. Based on the operational definitions of each indicator, experts jointly determined (i) the observable evidence used for scoring (e.g., policy documents, drill records, and service logs), and (ii) the corresponding rule-based mapping logic from documented evidence to numerical scores. For each community, indicator scores are assigned by trained researchers following these predefined rules, using documented records as the sole scoring basis to ensure consistency and replicability.
Specifically, according to the characteristics of the indicators and data availability, the measurement units are divided into five categories: percentage (%), quantity (number), frequency (times/year), duration (minutes or hours), and binary judgment (yes/no). This classification ensures that multi-source heterogeneous data can be effectively processed. The score mapping rules are as follows: (1) Percentage-based indicators (e.g., coverage rate, accuracy rate) directly reflect relative completion levels and use the original percentage value as the score. (2) Quantity-based indicators (e.g., number of resources allocated, number of channels) reflect the quantity of concrete resources or channels and are mapped to scores based on defined grading intervals. (3) Frequency-based indicators (e.g., number of training sessions, frequency of evaluations) emphasize the frequency of actions or measures implemented, with higher frequency yielding higher scores. (4) Duration-based indicators (e.g., response time) focus on timeliness, with shorter response times corresponding to higher scores. (5) Binary judgment indicators (e.g., hotline availability) indicate whether a system or measure has been implemented, scored as either 0 or 100.
Through this approach, the study achieves unified quantification of multi-source heterogeneous data, ensuring that all indicators can be fairly compared and weighted within the same evaluation dimension. The measurement methods, calculation formulas, and score mapping rules for each indicator are detailed in Table 4.
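The five score-mapping categories can be expressed as small rule-based functions. The grading thresholds and time bounds below are illustrative placeholders, not the actual cut-offs defined in Table 4.

```python
def score_percentage(value_pct):
    """Percentage indicators (e.g., coverage rate): raw value is the score."""
    return max(0.0, min(100.0, value_pct))

def score_quantity(count, bands=((1, 40), (3, 70), (5, 100))):
    """Quantity/frequency indicators: map counts to grading intervals.

    bands: (minimum count, score) pairs in ascending order; these
    thresholds are illustrative, not those of the paper's Table 4.
    """
    score = 0.0
    for threshold, band_score in bands:
        if count >= threshold:
            score = band_score
    return score

def score_duration(minutes, best=5.0, worst=60.0):
    """Duration indicators (e.g., response time): shorter is better."""
    if minutes <= best:
        return 100.0
    if minutes >= worst:
        return 0.0
    return 100.0 * (worst - minutes) / (worst - best)

def score_binary(implemented):
    """Binary indicators (e.g., hotline availability): 0 or 100."""
    return 100.0 if implemented else 0.0
```

Frequency-based indicators follow the same banded logic as `score_quantity`, with the bands defined over occurrences per year.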

3. Construction of the Evaluation Model

To accurately evaluate the dynamic capability of urban community FELS, this study develops a multi-feature weighting model that integrates knowledge-driven and data-driven approaches. Knowledge-driven weighting methods can reflect experts’ theoretical understanding and the logical structure of indicators, but they are easily affected by subjective bias. Data-driven weighting methods make use of data characteristics, but often overlook the practical significance of indicators. Therefore, this study adopts a strategy that combines knowledge-driven and data-driven weighting, and applies Euclidean distance to realize a multi-feature fusion weighting approach, constructing a linear weighting evaluation model.
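The overall combination step can be illustrated as follows. This sketch assumes a convex combination as the fusion rule (with equal coefficients, it minimizes the summed squared Euclidean distance to both weight vectors); the paper's exact fusion coefficients, and the example weight and score values, are assumptions for illustration only.

```python
def fuse_weights(w_knowledge, w_data, alpha=0.5):
    """Fuse knowledge-driven (FAHP + fuzzy DEMATEL) and data-driven
    (CRITIC) weight vectors.

    A convex combination is one simple realization of multi-feature
    fusion; alpha = 0.5 treats both sources equally. The weights are
    renormalized so the fused vector sums to 1.
    """
    fused = [alpha * wk + (1 - alpha) * wd
             for wk, wd in zip(w_knowledge, w_data)]
    total = sum(fused)
    return [w / total for w in fused]

def weighted_score(scores, weights):
    """Linear weighting model: composite = sum(weight_i * score_i)."""
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical weights for three indicators of one community
w_k = [0.40, 0.35, 0.25]   # knowledge-driven weights (illustrative)
w_d = [0.30, 0.30, 0.40]   # data-driven weights (illustrative)
fused = fuse_weights(w_k, w_d)
print(round(weighted_score([80, 60, 70], fused), 2))  # -> 70.25
```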

3.1. Knowledge-Driven Weighting Model Considering Indicator Characteristics

The challenge of calculating knowledge-driven weights lies in how to transform experts’ prior knowledge into quantitative data. Traditional rating scales directly convert experts’ subjective perceptions of indicator importance into quantitative values. However, due to time and knowledge constraints, it is often difficult to provide precise quantitative evaluations of indicators, resulting in rough and inaccurate transformations [23]. To address this, the study introduces the FAHP method and the fuzzy DEMATEL method, aiming to characterize the intrinsic importance of evaluation indicators in urban community emergency language service systems and their interaction mechanisms. These methods rely on expert prior knowledge as the informational foundation, using fuzzy linguistic variables to represent the uncertainty in expert judgments, thereby making the expression of indicator importance and influence relationships more aligned with actual cognitive processes. FAHP is employed to capture the relative importance of ELS capability indicators under conditions of uncertainty and expert cognition, while fuzzy DEMATEL is used to identify causal relationships and influence pathways among indicators. By integrating importance information with interaction mechanisms, the knowledge-driven weighting process reflects not only priority ranking but also the internal structural logic of the ELS capability system.

3.1.1. Computation of Indicator Importance Based on FAHP

The Fuzzy Analytic Hierarchy Process (FAHP) combines triangular fuzzy numbers with the traditional Analytic Hierarchy Process (AHP) to calculate indicator weights when expert judgments involve uncertainty and fuzziness [24]. In this study, the FAHP method is applied to assign knowledge-driven weights to indicator importance. The basic steps are as follows:
(1)
Transformation of experts’ prior knowledge. Experts use fuzzy linguistic terms to determine the relative importance of each pair of indicators. These fuzzy linguistic terms are then converted into triangular fuzzy numbers. The conversion rules for fuzzy linguistic terms are shown in Table 5.
(2)
Concept and Basic Operations of Fuzzy Sets. Let the universe of discourse be $U$. Any mapping $\mu_{\bar{A}}: U \rightarrow [0, 1]$ from $U$ to the interval $[0, 1]$ defines a fuzzy set $\bar{A}$ on $U$. The function $\mu_{\bar{A}}$ is the membership function of the fuzzy set $\bar{A}$, and $\mu_{\bar{A}}(e)$ represents the degree to which element $e$ belongs to $\bar{A}$, referred to as the membership degree.
For any triangular fuzzy number $\bar{B} = (l, m, u)$ defined over the three points $l$, $m$, $u$, the membership function is:
$$\mu_{\bar{B}}(x) = \begin{cases} 0, & x < l \\ \dfrac{x - l}{m - l}, & l \le x \le m \\ \dfrac{u - x}{u - m}, & m \le x \le u \\ 0, & x > u \end{cases}$$
Let two triangular fuzzy numbers be $d_1 = (l_1, m_1, u_1)$ and $d_2 = (l_2, m_2, u_2)$. The basic operations on triangular fuzzy numbers are as follows:
$$d_1 \oplus d_2 = (l_1 + l_2, \; m_1 + m_2, \; u_1 + u_2)$$
$$d_1 \otimes d_2 = (l_1 l_2, \; m_1 m_2, \; u_1 u_2)$$
$$d_1^{-1} = \left( \frac{1}{u_1}, \; \frac{1}{m_1}, \; \frac{1}{l_1} \right)$$
(3)
Construction of the Triangular Fuzzy Pairwise Comparison Matrix. By using triangular fuzzy numbers to integrate the evaluations of decision-makers, a triangular fuzzy pairwise comparison matrix can be further established:
$$A^{k} = \begin{bmatrix} d_{11}^{k} & d_{12}^{k} & \cdots & d_{1n}^{k} \\ d_{21}^{k} & d_{22}^{k} & \cdots & d_{2n}^{k} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n1}^{k} & d_{n2}^{k} & \cdots & d_{nn}^{k} \end{bmatrix}$$
where $d_{ij}^{k} = (l_{ij}, m_{ij}, u_{ij})$ is the triangular fuzzy number representing the comparison made by the $k$-th expert between indicator $i$ and indicator $j$.
(4)
Group Integration. When multiple decision-makers are involved, the average value of their preferences is calculated. The expression for $d_{ij}$ is given as follows:
$$d_{ij} = \frac{1}{K} \sum_{k=1}^{K} d_{ij}^{k}$$
(5)
According to the average preferences of the multiple decision-makers, the fuzzy pairwise comparison matrix is updated:
$$A = \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{bmatrix}$$
(6)
Calculation of the fuzzy geometric mean of the comparison values for each indicator:
$$r_j = \left( \bigotimes_{i=1}^{n} d_{ij} \right)^{1/n}, \quad j = 1, 2, \ldots, n$$
(7)
Calculation of the fuzzy importance values of the indicators:
$$\widehat{IM}_j = r_j \otimes (r_1 \oplus r_2 \oplus \cdots \oplus r_n)^{-1} = (l_j, m_j, u_j)$$
(8)
Defuzzification. To obtain the weight values of each factor, the fuzzy weights calculated in the above steps must be defuzzified, yielding a crisp value for each fuzzy number as the comprehensive evaluation standard for indicator weights:
$$\widehat{IM}_j = \frac{l_j + m_j + u_j}{3}$$
(9)
Normalization of weights:
$$IM_j = \frac{\widehat{IM}_j}{\sum_{j=1}^{n} \widehat{IM}_j}, \quad j = 1, 2, \ldots, n$$
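As a concrete illustration, steps (4)–(9) can be sketched in a few lines of Python. The 3 × 3 triangular fuzzy comparison matrix below is hypothetical illustration data, not the expert judgments collected in this study:

```python
import numpy as np

# Hypothetical expert-averaged triangular fuzzy comparison matrix
# (step 5); each entry is (l, m, u).
A = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])  # shape (n, n, 3)
n = A.shape[0]

# Step (6): component-wise fuzzy geometric mean of each row.
r = np.prod(A, axis=1) ** (1.0 / n)          # shape (n, 3)

# Step (7): fuzzy importance = r_j multiplied by the inverse of the
# fuzzy sum; the inverse of (L, M, U) is (1/U, 1/M, 1/L).
s = r.sum(axis=0)
IM_fuzzy = r * np.array([1 / s[2], 1 / s[1], 1 / s[0]])

# Step (8): defuzzify by the centroid (l + m + u) / 3.
IM_crisp = IM_fuzzy.mean(axis=1)

# Step (9): normalize to obtain the importance weights.
IM = IM_crisp / IM_crisp.sum()
print(IM)  # descending for this particular matrix
```

For this matrix the first indicator dominates, as its pairwise ratings would suggest.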

3.1.2. Calculation of Indicator Correlation Based on Fuzzy DEMATEL

The calculation steps of the fuzzy DEMATEL method are as follows:
(1)
Experts use fuzzy linguistic terms to determine the degree of correlation between each pair of indicators. These fuzzy linguistic terms are then converted into triangular fuzzy numbers according to predefined rules. The conversion rules for fuzzy linguistic terms are shown in Table 6.
(2)
The Converting Fuzzy data into Crisp Scores (CFCS) method is applied to transform triangular fuzzy numbers into crisp values. Suppose a triangular fuzzy number is $A = (l, m, u)$; the defuzzification steps are as follows:
For all triangular fuzzy numbers given by the experts, calculate the average triangular fuzzy number for each indicator pair $(i, j)$, denoted $(\tilde{l}_{ij}, \tilde{m}_{ij}, \tilde{u}_{ij})$.
The triangular fuzzy numbers are then standardized as $l_{ij}^{*} = \dfrac{\tilde{l}_{ij} - l_{\min}}{\Delta}$ and $u_{ij}^{*} = \dfrac{\tilde{u}_{ij} - l_{\min}}{\Delta}$, where $\Delta = u_{\max} - l_{\min}$, and $u_{\max}$, $l_{\min}$ denote the maximum right value and minimum left value among all fuzzy numbers.
The overall standardized crisp value is calculated as:
$$a_{ij}^{\mathrm{crisp}} = \frac{l_{ij}^{*} \left( 1 - l_{ij}^{*} \right) + u_{ij}^{*} \cdot u_{ij}^{*}}{1 - l_{ij}^{*} + u_{ij}^{*}}$$
The crisp value of the triangular fuzzy number is then obtained as:
$$f_{ij} = l_{\min} + a_{ij}^{\mathrm{crisp}} \, \Delta$$
Finally, the direct influence matrix is established and normalized as:
$$B = \left[ f_{ij} \right]_{n \times n}$$
(3)
According to the fuzzy DEMATEL model, the centrality value $ce_j$ and the causality value $ca_j$ of each indicator are calculated. The indicator influence $IN_j$ is then computed using the following formula:
$$IN_j = \frac{\sqrt{ce_j^{2} + ca_j^{2}}}{\sum_{j=1}^{n} \sqrt{ce_j^{2} + ca_j^{2}}}$$
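A compact sketch of the CFCS defuzzification and the influence index follows. The 3 × 3 matrix of expert-averaged triangular fuzzy numbers is hypothetical, and the total relation matrix step $T = B(I - B)^{-1}$, from which $ce_j$ and $ca_j$ are obtained as row and column sums, is the standard DEMATEL operation assumed here, since the text moves directly to the centrality and causality values:

```python
import numpy as np

# Hypothetical expert-averaged triangular fuzzy influence matrix;
# each entry is (l, m, u) on a 0-1 influence scale.
F = np.array([
    [[0.0, 0.0, 0.25],  [0.5, 0.75, 1.0],  [0.25, 0.5, 0.75]],
    [[0.25, 0.5, 0.75], [0.0, 0.0, 0.25],  [0.5, 0.75, 1.0]],
    [[0.0, 0.25, 0.5],  [0.25, 0.5, 0.75], [0.0, 0.0, 0.25]],
])
l, u = F[..., 0], F[..., 2]
l_min, u_max = l.min(), u.max()
delta = u_max - l_min

# CFCS standardization and simplified crisp score (step 2).
l_s = (l - l_min) / delta
u_s = (u - l_min) / delta
a = (l_s * (1 - l_s) + u_s * u_s) / (1 - l_s + u_s)
B = l_min + a * delta                 # direct influence matrix
B = B / B.sum(axis=1).max()           # normalize by largest row sum

# Standard DEMATEL total relation matrix and centrality/causality.
T = B @ np.linalg.inv(np.eye(len(B)) - B)
D, R = T.sum(axis=1), T.sum(axis=0)
ce, ca = D + R, D - R                 # centrality, causality

# Indicator influence (step 3).
IN = np.sqrt(ce**2 + ca**2)
IN = IN / IN.sum()
```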

3.1.3. Calculation of Physical Feature Weights

In this study, following the multiplicative logic of the CRITIC method, which integrates data variability and data conflict, the indicator importance $IM_j$ calculated by FAHP and the indicator influence $IN_j$ calculated by fuzzy DEMATEL are combined to compute the physical feature strength $CH_j$:
$$CH_j = IM_j \times IN_j$$
The physical feature weight $W_j'$ is then calculated as:
$$W_j' = \frac{CH_j}{\sum_{j=1}^{n} CH_j}$$

3.2. Data-Driven Weighting Model Considering Indicator Characteristics

This study adopts the CRITIC method to determine the Data-driven weights of tertiary indicators for the dynamic capability of urban community FELS. This method takes into account both indicator variability and correlation, thereby avoiding the extreme weighting issues that may arise with the entropy method or independent weight coefficient method [25]. Data-driven weighting methods emphasize the degree of information contribution of evaluation indicators in sample data. The CRITIC method comprehensively measures the effective information contained in each indicator by simultaneously considering the dispersion of indicators and the inter-indicator correlations, thereby avoiding weight imbalances caused by information redundancy or insufficient variation. Specifically, the variability of indicators reflects their ability to differentiate among different community samples, while correlations are used to characterize the dependency relationships between indicators. By integrating these statistical features, the CRITIC method reduces reliance on expert judgment alone and enhances the sensitivity of weighting results to observed differences in ELS capabilities across communities:
(1)
Data Standardization. Since data standardization in this study is only used for weight calculation, preserving the characteristics of data variation is sufficient to meet the computational requirements. Therefore, this study does not apply different standardization methods according to data types; instead, the collected data are standardized using Formula (16), as shown below:
$$x_{ij} = \frac{X_{ij} - X_j^{\min}}{X_j^{\max} - X_j^{\min}}$$
(2)
Calculate indicator variability. Let $x_{ij}$ denote the $i$-th sample value of the $j$-th indicator. The mean and sample standard deviation are:
$$\bar{x}_j = \frac{1}{n} \sum_{i=1}^{n} x_{ij}, \qquad S_j = \sqrt{\frac{\sum_{i=1}^{n} \left( x_{ij} - \bar{x}_j \right)^{2}}{n - 1}}$$
(3)
Calculate the correlation coefficient between indicator $k$ and indicator $j$:
$$r_{kj} = \frac{\sum_{i=1}^{n} \left( x_{ij} - \bar{x}_j \right) \left( x_{ik} - \bar{x}_k \right)}{\sqrt{\sum_{i=1}^{n} \left( x_{ij} - \bar{x}_j \right)^{2} \sum_{i=1}^{n} \left( x_{ik} - \bar{x}_k \right)^{2}}}$$
(4)
Calculate the conflict of the indicator data:
$$R_j = \sum_{k=1}^{m} \left( 1 - r_{kj} \right)$$
where $m$ is the number of indicators.
(5)
Calculate the amount of information of the indicator:
$$C_j = S_j \sum_{k=1}^{m} \left( 1 - r_{kj} \right) = S_j \times R_j$$
(6)
Calculate the data-driven weight of the indicator:
$$W_j'' = \frac{C_j}{\sum_{j=1}^{m} C_j}$$
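Steps (1)–(6) of the CRITIC procedure reduce to straightforward array arithmetic. The 4 × 3 raw-data matrix below (four communities, three indicators) is hypothetical illustration data rather than the study's measurements:

```python
import numpy as np

# Hypothetical raw data: rows are communities, columns are indicators.
X = np.array([
    [80.0, 60.0, 90.0],
    [55.0, 75.0, 40.0],
    [70.0, 50.0, 65.0],
    [90.0, 85.0, 70.0],
])

# (1) Min-max standardization, column by column.
x = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# (2) Variability: sample standard deviation of each indicator.
S = x.std(axis=0, ddof=1)

# (3)-(4) Conflict: sum of (1 - r_kj) over the correlation matrix.
r = np.corrcoef(x, rowvar=False)
R = (1 - r).sum(axis=0)

# (5)-(6) Information content and normalized data-driven weights.
C = S * R
W = C / C.sum()
```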

3.3. Multi-Feature Fusion Weighting Model

This study introduces the concept of Euclidean distance, combining the physical feature weight $W_j'$ obtained from the knowledge-driven model and the data feature weight $W_j''$ obtained from the CRITIC model to achieve integrated weighting. This process determines adaptive combination coefficients that balance mechanism-oriented interpretability and data-driven sensitivity. The resulting integrated weights are then combined with normalized indicator values in a linear weighted evaluation model to generate comprehensive ELS capability scores.
By calculating the Euclidean distance between the two types of weight vectors, the degree of divergence is measured, thereby determining the optimal combination coefficients and realizing multi-feature fusion weighting. The Euclidean distance between the weight vectors is calculated as:
$$d(W_j', W_j'') = \left[ \frac{1}{2} \sum_{j=1}^{n} \left( W_j' - W_j'' \right)^{2} \right]^{1/2}$$
On this basis, the combined weight is defined as:
$$W_j = \alpha W_j' + \beta W_j''$$
where $\alpha$ and $\beta$ are the allocation coefficients of the physical feature weight and the data feature weight, respectively, subject to the following conditions:
$$\begin{cases} d(W_j', W_j'')^{2} = (\alpha - \beta)^{2} \\ \alpha + \beta = 1 \end{cases}$$
Solving the above system of equations (taking the root with $\alpha \ge \beta$) yields $\alpha = (1 + d)/2$ and $\beta = (1 - d)/2$ as the optimal combination coefficients for weight fusion, thereby achieving an adaptive balance between knowledge-driven and data-driven weights. This leads to a more representative and robust set of comprehensive indicator weights.
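The fusion step can be sketched as follows, using hypothetical knowledge-driven and data-driven weight vectors. Solving the two conditions with $\alpha \ge \beta$ gives the closed form $\alpha = (1 + d)/2$, $\beta = (1 - d)/2$:

```python
import numpy as np

# Hypothetical weight vectors over four indicators.
W1 = np.array([0.40, 0.30, 0.20, 0.10])   # knowledge-driven
W2 = np.array([0.25, 0.25, 0.30, 0.20])   # data-driven

# Euclidean distance with the 1/2 factor used in the formula above.
d = np.sqrt(0.5 * ((W1 - W2) ** 2).sum())

# Solve {(alpha - beta)^2 = d^2, alpha + beta = 1} with alpha >= beta.
alpha = (1 + d) / 2
beta = 1 - alpha

W = alpha * W1 + beta * W2                # combined weights
```

For these vectors the distance is d = 0.15, so the knowledge-driven weights receive the slightly larger share (α = 0.575).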

3.4. Capability Assessment Based on Linear Integration

In this study, the constructed tertiary indicator system is normalized to transform qualitative information into dimensionless data. Based on the combined weights obtained from the previous steps, the indicator data and weights are directly multiplied to calculate the evaluation value of the dynamic capability of urban community FELS. The calculation formula is as follows:
$$c = \sum_{j=1}^{n} x_j \times W_j$$
Here, c represents the capability evaluation score of a given community, x j is the normalized value of the j-th indicator, and W j is the combined weight of the j-th indicator.
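The aggregation is a single dot product. The normalized indicator values and combined weights below are hypothetical illustration data:

```python
import numpy as np

x = np.array([0.96, 0.95, 0.80, 0.72])   # normalized indicator values
W = np.array([0.40, 0.25, 0.20, 0.15])   # combined weights (sum to 1)
c = float(np.dot(x, W))                  # capability evaluation score
```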

4. Case Study

4.1. Community Profile

This study selects four representative urban communities in China as case objects, representing international mature-type, old disadvantaged-type, newly developed-type, and diverse economic-type communities. This typological selection aims to capture heterogeneity in population structure, governance capacity, and emergency language service conditions, thereby allowing the evaluation model to be tested across contrasting community contexts rather than a single homogeneous setting.
Community A is located in the central business district, where foreign enterprises and international organizations are concentrated. It has a high proportion of foreign residents, has established a bilingual (Chinese–English) public service system, and is equipped with multilingual volunteer teams and a well-developed information system. It can quickly achieve multilingual information dissemination and cross-department coordination during fires, and its emergency language service system is relatively mature. Community B is a typical old neighborhood, where elderly residents account for the majority and dialect usage is high. Housing is old, and digital governance is weak. Information dissemination mainly relies on bulletin boards and oral notifications, and the capacity of FELS is significantly insufficient. Community C is located in a newly developed industrial area, where migrant workers account for more than half of the residents. The linguistic backgrounds of residents are diverse, and their Mandarin proficiency varies greatly. Infrastructure and emergency governance systems are still under construction, emergency management institutions are not yet well developed, and multilingual information dissemination and emergency language service response capacity are weak during emergencies. Community D is situated in a modern, economically active new district. It has attracted a large number of inter-provincial migrants and a small number of foreign high-end professionals. The community enjoys a relatively high level of education and information literacy. Its information infrastructure is well developed, allowing information to be disseminated through multiple channels, but there is still room for improvement in the construction of the emergency language service system. This case analysis aims to test the accuracy, objectivity, and applicability of the evaluation model for dynamic capability of urban community FELS under different community contexts.

4.2. Data Collection

4.2.1. Knowledge-Driven Data Collection

To enhance the scientific rigor of the weighting procedure and to mitigate potential subjectivity bias, this study invited ten experts to conduct the Delphi consultation and the FAHP/fuzzy DEMATEL scoring. The criteria for expert selection include: (1) more than 10 years of relevant professional experience; (2) demonstrable expertise in at least one of the following domains closely related to the indicator system and the fire-response context: emergency language services, community emergency governance, fire protection engineering/fire rescue, risk communication and information dissemination, and computational science/engineering (e.g., data analytics, modeling, decision-support systems); and (3) familiarity with community-level emergency operations or evaluation practice.
The final panel deliberately balanced academic vs. professional backgrounds and ensured cross-domain coverage, particularly by adding experts with fire protection engineering and computational science/engineering backgrounds as recommended by the reviewers. The basic profiles of the experts are summarized in Table 7.
A Delphi-style procedure was applied with anonymized scoring and structured feedback. Experts first conducted pairwise comparisons of all tertiary indicators in terms of importance using the 1–9 fuzzy semantic scale required by FAHP. They then assessed the influence relationships among indicators using the five-level fuzzy semantic scale required by fuzzy DEMATEL. Group judgments were aggregated using standard group-integration rules to obtain the final matrices for subsequent calculations.
To address potential risk of bias, although expert-based weighting inevitably involves subjective judgments, this study reduced bias by (i) enlarging the panel size (n = 10) to dilute individual dominance; (ii) structuring the panel with complementary domains (fire engineering, computational science, governance, language services); (iii) using fuzzy linguistic terms to capture uncertainty and avoid forced precision; and (iv) applying group aggregation rather than single-expert weight. In addition, the robustness checks reported in Section 5.1.2 provide empirical evidence that the conclusions are stable under reasonable weight perturbations.

4.2.2. Data-Driven Data Collection

To ensure the objectivity and adaptability of the evaluation model for dynamic capability of urban community FELS, this study conducted field research in four types of representative communities and established a structured, hierarchical, and standardized system for Data-driven data collection. The design of data collection followed the principles of indicator measurability and spatial heterogeneity, with a time window of the past three years.
(1)
Criteria for sample selection. This study selected four types of communities—newly developed type, international mature type, old disadvantaged type, and diverse economic type—as typical samples, covering the main scenarios and capability types of urban community FELS. To enhance comparability and representativeness, the samples are required to meet the following conditions: a relatively well-developed governance system; at least one fire event related to emergency language services in the past three years; and a relatively standardized data recording mechanism. By adopting a unified indicator system structure and standardized data collection protocols, the study controlled for biases caused by institutional differences and heterogeneous data platforms, thereby ensuring the reliability and representativeness of model input data.
(2)
Data collection scheme. Based on the 12 tertiary evaluation indicators, the study covered the entire process of daily language service management, preparedness drills, and actual emergency response in the sample communities. The data sources fall into the following categories: platform-based sources, including community emergency management platforms and language volunteer systems/hotlines/intelligent customer service systems; and archival sources, including community fire emergency drill reports, community fire protection and emergency management work summaries, language volunteer training records, community emergency publicity activity reports, and community emergency plan documents. In total, 172 sets of structured data were collected. The collection approach combined system extraction, field verification, and platform access. All indicators were standardized onto a 0–100 dimensionless scale according to predefined formulas and mapping rules before being incorporated into the model. Indicators that were missing or temporarily unavailable were either substituted with compliant proxies or excluded with annotations, in order to maintain the objectivity and rigor of the evaluation system.

4.3. Evaluation Results and Analysis

4.3.1. Weight Calculation Results and Analysis

First, based on the evaluation matrices provided by the ten experts, the knowledge-driven weights of the tertiary indicators are obtained through FAHP and fuzzy DEMATEL, as shown in Table 8. Among them, Emergency Language Human Resource Scheduling Capability (C21, 0.3190) has the highest weight, followed by Emergency Information Dissemination Capability (C23, 0.1363) and Cross-Department Language Coordination Capability (C24, 0.1120). This reflects the experts’ emphasis on placing the key constraints that directly affect timeliness and coordination efficiency during the response stage at the core of capability building. Meanwhile, Language Demand Identification Capability (C11, 0.0116), Language Risk Early Warning Capability (C12, 0.0977), and Evaluation and Feedback Capability (C34, 0.1080) highlight the chain logic of “early perception–rapid response–strong feedback loop.” By contrast, relatively lower weights are assigned to Intelligent Language Technology Capability (C22, 0.0099) and Residents’ Language Adaptability Capability (C33, 0.0023). Their lower knowledge-driven weights do not indicate lesser value; rather, these elements often function as enablers or mediators, exerting influence indirectly through demand identification, risk early warning, and information dissemination. In addition, their relatively similar baseline levels across the sampled communities limit their power to distinguish among communities. Under contexts of higher linguistic complexity or scarce technological resources, however, their relative importance may increase.
From the perspective of data-driven weights (Table 9), the CRITIC method adopts the logic of “variability × conflict → information content,” which reflects the degree of differentiation of each indicator across sample communities rather than its importance. The results show that Language Demand Identification Capability (C11, 0.2252) and Language Risk Early Warning Capability (C12, 0.1989) have relatively high weights, indicating that the sensing dimension displays significant differences and best distinguishes community rankings. Residents’ Language Adaptability Capability (C33, 0.0940) and Emergency Information Dissemination Capability (C23, 0.0895) also carry relatively high weights, reflecting clear variation in residents’ participation in emergency language services and the coverage of service outreach. By contrast, relatively low Data-driven weights are found in indicators such as Emergency Language Human Resource Scheduling Capability (C21, 0.0202) and Emergency Language Plan Optimization Capability (C31, 0.0265). The main reason lies in the convergence of baseline conditions within the sample or consistent recording standards, which limit cross-community discriminability, rather than implying a lack of importance. In other words, a low data-driven weight indicates insufficient differentiation of the indicator in the given sample and observation period. Under different contexts (e.g., increased human resource constraints, frequent plan iterations) or over longer observation windows, their Data-driven weights may still increase.
The combined weights are obtained by integrating knowledge-driven and data-driven weights, with normalization applied during calculation (Figure 3). Overall, the structure presents a pattern of “seizing ability as the core, sensing ability as the foundation, and transforming ability as support.” Based on the combined weights, seizing accounts for approximately 50%, sensing about 29%, and transforming about 21% (minor discrepancies due to rounding). At the indicator level, the highest weights are assigned to Emergency Language Human Resource Scheduling Capability (C21, 0.2554), Emergency Information Dissemination Capability (C23, 0.1492), Language Risk Early Warning Capability (C12, 0.1368), and Cross-Department Language Coordination Capability (C24, 0.0715). Specifically, human resource scheduling determines the availability and timeliness of multilingual forces, serving as the key constraint during emergency response. Risk early warning provides highly sensitive prior signals, directly affecting the timing of activation and the accuracy of resource allocation. Information dissemination ensures that instructions and alerts reach residents across multiple channels with broad coverage, making the difference between “ability to act” and “action taken.” Cross-department coordination reduces friction and transfer costs, ensuring rapid connectivity among medical services, public security, neighborhood offices, and social organizations. The integrated weighting approach fully considers the dialectical relationship between experiential knowledge and data representation, avoiding the biases of relying solely on “expert experience” or “data differentiation.” It thereby provides multidimensional weighting references for evaluating the dynamic capability of urban community emergency language services.

4.3.2. Case Evaluation Results and Analysis

All indicator scores were standardized to a 0–100 scale. To improve transparency and illustrate how the quantitative scores were derived from the predefined scoring rules, C23 (Emergency Information Dissemination Capability) is taken as an illustrative example. According to Table 4, the measurement dimensions and score-mapping rules of this indicator include five components:
(1)
Number of emergency information dissemination channels (≥6 channels = 100 points; 4–5 channels = 75 points; 2–3 channels = 50 points; <2 channels = 25 points);
(2)
Information transmission error rate (≤1% = 100 points; 1.01–3% = 75 points; 3.01–5% = 50 points; >5% = 25 points);
(3)
Availability of a language emergency hotline (Yes = 100 points; No = 0 points);
(4)
Timeliness of information release (≤60 min = 100 points; 61–180 min = 75 points; 181–360 min = 50 points; >360 min = 25 points); and
(5)
Overall coverage rate of emergency language services, which is scored directly using the percentage value.
These five components are equally weighted, and their arithmetic mean is used to obtain the standardized score of C23 on a 0–100 scale.
Under this scoring scheme, the survey statistics of Community C indicate that six or more emergency information dissemination channels were actually used (e.g., community notice boards, WeChat groups, and property management notifications), yielding a score of 100 points for channel quantity. The measured information transmission error rate was 3.8%, corresponding to 50 points. A language emergency hotline was in operation, resulting in a score of 100 points for this item. The release time of the first emergency language message was within 180 min, scored as 75 points. The annual overall coverage rate of emergency language services was calculated as 50%, yielding 50 points. Accordingly, the C23 score for Community C was calculated as (100 + 50 + 100 + 75 + 50)/5 = 75, which is reported in Table 10.
Using the same scoring rules, the scoring process for Community B under indicator C23 reveals pronounced deficiencies. Emergency information dissemination in this community mainly relied on three traditional channels (public notice boards, manual postings, and door-to-door notification), resulting in a channel quantity score of 50 points. The information transmission error rate exceeded 5%, corresponding to 25 points. No language emergency hotline was in operation, which was recorded as “No” and scored as 0 points. The release time of the first emergency information exceeded 360 min, yielding a timeliness score of 25 points. The overall coverage rate of emergency language services was calculated as 56.5%, corresponding to 56.5 points. Accordingly, the C23 score for Community B was computed as (50 + 25 + 0 + 25 + 56.5)/5 = 31.3, which is also reported in Table 10.
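The five C23 scoring rules can be expressed as a single function, which reproduces the reported scores of 75 for Community C and 31.3 for Community B. The exact inputs for values reported only as bands (an error rate “exceeding 5%”, a release time “exceeding 360 min”) are representative values chosen inside those bands:

```python
def score_c23(channels, error_pct, hotline, release_min, coverage_pct):
    """Standardized C23 score (0-100) from the five Table 4 components."""
    s1 = 100 if channels >= 6 else 75 if channels >= 4 else 50 if channels >= 2 else 25
    s2 = 100 if error_pct <= 1 else 75 if error_pct <= 3 else 50 if error_pct <= 5 else 25
    s3 = 100 if hotline else 0
    s4 = 100 if release_min <= 60 else 75 if release_min <= 180 else 50 if release_min <= 360 else 25
    s5 = coverage_pct  # coverage rate is scored directly as a percentage
    return (s1 + s2 + s3 + s4 + s5) / 5  # equally weighted mean

# Community C: >=6 channels, 3.8% errors, hotline, release within 180 min, 50% coverage.
print(score_c23(6, 3.8, True, 180, 50.0))    # 75.0
# Community B: 3 channels, >5% errors, no hotline, >360 min, 56.5% coverage.
print(score_c23(3, 5.5, False, 400, 56.5))   # 31.3
```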
The scoring procedures for the remaining communities and all other indicators follow the same methodology. All scores are derived strictly in accordance with the value-mapping rules specified in Table 4, whereby community performance in actual fire incidents and emergency drills is systematically transformed into quantifiable scores. This ensures that the observed differences in FELS capabilities across communities are supported by transparent and verifiable empirical evidence.
From the comprehensive evaluation results (see Table 10), the dynamic capability scores of community emergency language services across the four cases are ranked as follows: A (86.76) > D (79.02) > C (62.43) > B (47.05), presenting a stratified pattern of “leading–stable–intermediate–weak”.
From the perspective of the contribution effects of the combined weights, the key indicators driving score differences are, in descending order of weight: C21 (0.2554), C23 (0.1492), C12 (0.1368), C11 (0.0849), C34 (0.0740), and C24 (0.0715). Community A generally excels in these high-weight indicators, particularly in C12 (95), C23 (93.8), C24 (87.5), and C11 (96), where its outstanding performance, amplified by weight effects, secures its leading position. Community D demonstrates overall balance, yet lags behind A in C12 (76) and C23 (81.3), which account for the main sources of their gap. Community C shows a basic foundation in C31 (70) and C32 (68.8), but its weaker performance in C11 (36.7) and C12 (43.3) suppresses its overall score. Community B performs poorly in C23 (31.3), C24 (37.5), and across the perception dimension in general, which constitutes the core factor behind its low overall ranking. Overall, score differences in high-weight indicators determine the final ranking.
From a dimensional structure perspective, (1) Perception capability: Community A ranks high across all four indicators (C11 = 96, C12 = 95, C13 = 80, C14 = 96.7); D follows (93.3, 76, 82.3, 73.3); C shows a basic foundation in C13 (61.7) and C14 (53.3) but is significantly weaker in C11 and C12; and B performs lowest across all four, with C14 (30) and C13 (44.3) as particular weaknesses. (2) Seizing capability: A demonstrates clear advantages in C23 (93.8), C24 (87.5) and cross-departmental coordination, followed closely by D (81.3, 81.3), with C in the middle (75, 68.8) and B significantly weaker (31.3, 37.5). Regarding C21, A and D (both 71.9) are close to C (68.8), while B is slightly lower (59.4). Considering its highest weight, this gap still makes a substantial contribution to overall scores. (3) Transforming capability: A scores high in C32 (100) and C31 (87.5); D remains stable (81.3, 80); C is moderate (68.8, 70); and B is relatively weak (37.5, 55). Notably, D achieves the highest score in C33 (100).

5. Discussion

5.1. Validation of the Evaluation Model

5.1.1. External Criterion Validity Verification

To verify whether the evaluation results truly reflect the actual performance of community FELS, this section adopts an external criterion validity strategy. The research sample consists of monthly data from the past year (July 2024–June 2025) covering four communities × 12 periods (n = 48). All operational indicators are standardized into “the higher, the better” relative measures: the sensing dimension corresponds to Completeness Rate of Demand Information (OI1) and Coverage Rate of Early Warning (OI2); the seizing dimension corresponds to Rate of Hotline Answered within 30 Seconds (OI3) and Rate of Personnel Dispatched within 1 h (OI4); and the transforming dimension corresponds to Rate of Feedback Loops Closed within 48 h (OI5) and Excellent Rate in Emergency Knowledge Contests (OI6).
First, Spearman’s rank correlation test is applied to examine the positive association between model scores and real operational indicators. The results show that the composite score of capability is significantly and positively correlated with all six indicators (Figure 4). Further dimension-to-dimension tests also demonstrate that (Figure 5): the sensing dimension is moderately to strongly correlated with OI1, OI2; the seizing dimension shows the highest correlation with OI3, OI4; and the transforming dimension is significantly correlated with OI5 and OI6. These results indicate that the model scores and real-world performance exhibit a stable consistency, and the dimensional constructs demonstrate good convergent validity.
Second, in the absence of unified external thresholds, this study adopts a “threshold-free clustering” approach as a supplementary external criterion. Six operational indicators are standardized using Z-scores, and Ward’s hierarchical clustering (K = 4) is applied, yielding four community types: mature, stable, developing, and weak. The heatmap results show that these four types of communities display systematic differences in external indicators, forming a clear gradient (Figure 6). This further confirms that external indicators can effectively distinguish the real performance levels of communities.
In summary, the results of Spearman correlation and threshold-free clustering mutually support each other: model capability scores are not only significantly and positively correlated with key operational indicators, but also consistent with external stratification structures. Therefore, it can be concluded that the model constructed in this study demonstrates good external criterion validity.

5.1.2. Model Robustness Verification

To test whether the model conclusions would be affected by minor technical adjustments, this section implements two robustness checks: weight perturbation stability and indicator ablation experiments. The baseline is defined as the combined weighted scores of the “four communities × 12 tertiary indicators,” from which the baseline ranking of each community is determined.
(1)
Weight Perturbation Stability. Under two levels of perturbation (±10% and ±20%), multiplicative disturbances are applied to the weights of each tertiary indicator, followed by normalization:
$$w_j' = \frac{w_j (1 + \delta_j)}{\sum_k w_k (1 + \delta_k)}, \quad \delta_j \sim U(-p, p), \quad p \in \{10\%, 20\%\}$$
For each level, 1000 Monte Carlo re-samplings are conducted, and the comprehensive scores and rankings of the four communities are recalculated. Robustness is measured by three types of indicators: the Top-1/Top-2 hit rate (the proportion of simulations consistent with the baseline top-1/top-2 ranking sets), the maximum rank displacement ($\Delta r$), and the distribution of Spearman’s rank correlation coefficient ($\rho$) between perturbed scores and the baseline (represented by the median and interquartile range, IQR).
The results show that under both ±10% and ±20% weight perturbations, the Top-1 and Top-2 hit rates remain at 100%, and the maximum rank displacement (Δr) is 0, indicating that the relative ranking of communities is fully preserved. Meanwhile, the median Spearman’s ρ is 1.000 under both perturbation levels, with an interquartile range (IQR) approaching zero, reflecting an almost perfect rank-order consistency. These results demonstrate that the evaluation outcomes are highly stable with respect to weight perturbations, confirming the strong robustness of the proposed model and its reliability for cross-community comparison and capability type identification.
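The perturbation check can be sketched as follows. The weight vector and the 4 × 4 score matrix are hypothetical stand-ins for the study's combined weights and normalized indicator scores; with the wide score gaps used here, the baseline ranking survives all 1000 ±20% perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)

W = np.array([0.40, 0.25, 0.20, 0.15])       # hypothetical weights
X = np.array([                               # rows: communities A-D
    [0.95, 0.90, 0.85, 0.92],
    [0.40, 0.35, 0.30, 0.45],
    [0.60, 0.55, 0.65, 0.50],
    [0.85, 0.80, 0.75, 0.78],
])
base_rank = np.argsort(-(X @ W))             # baseline ranking

p = 0.20                                     # +/-20% perturbation level
hits = 0
for _ in range(1000):
    delta = rng.uniform(-p, p, size=W.size)  # multiplicative disturbance
    w = W * (1 + delta)
    w /= w.sum()                             # renormalize
    hits += np.array_equal(np.argsort(-(X @ w)), base_rank)
top_preserved = hits / 1000                  # share preserving the ranking
```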
(2)
Indicator Ablation Experiment. To evaluate the sensitivity of the conclusions to any single indicator, the average relative change rate (RI) is used to measure the strength of influence: $RI_j = \operatorname{mean}_i\left(\frac{|S_i^{(j)} - S_i^{(base)}|}{S_i^{(base)}}\right)$, where $S_i^{(base)}$ is the baseline composite score of community $i$, and $S_i^{(j)}$ is the composite score after adjusting the weight of indicator $j$ (with the remaining weights renormalized proportionally). Three scenarios that modify only technical settings are designed, and RI is calculated separately for each: Scenario A (full ablation): each indicator weight is set to 0 in turn, with the other weights otherwise unchanged; Scenario B (half ablation): each indicator weight is set to 50% of its original value in turn; Scenario C (dimension-constrained ablation): the sensing, seizing, and transforming dimensions are each fixed at one-third, and individual indicators are then set to 0 in turn.
The results of the three scenarios are shown in Figure 7. Across all three scenarios, the relative rankings of the four communities remain unchanged (Δr = 0). The RI ranking is highly consistent across settings: Emergency Language Human Resource Scheduling Capability and Language Risk Early Warning Capability consistently rank at the top, while the third position alternates between Language Volunteer Training Capability and Emergency Information Dissemination Capability. This indicates that sensitivity is stably concentrated in two to three indicators across scenarios.
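The RI computation can be sketched directly from the formula, using hypothetical weights and scores (the study's actual values are not reproduced here); `ablation_ri` covers both the full-ablation and half-ablation scenarios via a scale factor, with proportional renormalization of the remaining weights.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.full(12, 1 / 12)                  # hypothetical baseline weights
X = rng.uniform(40, 95, size=(4, 12))    # hypothetical normalized scores (A-D)
S_base = X @ w

def ablation_ri(scale):
    """Scale each indicator's weight in turn (0.0 = full ablation,
    0.5 = half ablation), renormalize the rest, and return RI_j."""
    ri = np.empty(w.size)
    for j in range(w.size):
        w_j = w.copy()
        w_j[j] *= scale
        w_j /= w_j.sum()                 # proportional renormalization
        S_j = X @ w_j
        ri[j] = np.mean(np.abs(S_j - S_base) / S_base)
    return ri

ri_full = ablation_ri(0.0)   # Scenario A
ri_half = ablation_ri(0.5)   # Scenario B
print(np.argsort(-ri_full)[:3])  # indices of the three most influential indicators
```

Sorting RI in descending order reproduces the kind of "top 2–3 indicators" ranking discussed above; full ablation always yields an RI at least as large as half ablation for the same indicator.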
In summary, even when applying reasonable levels of perturbation and ablation to weight settings and indicator selection, the model consistently satisfied the preset criteria (Top-1/Top-2 = 100%, Δr = 0). This demonstrates that the conclusions are robust to technical specifications and can be reliably used for cross-community comparisons and type identification.

5.2. Discussion on the Evaluation Status of Case Communities

According to the evaluation results in Section 4, the comprehensive scores of FELS dynamic capability across the four case communities are ranked as A > D > C > B (86.76, 79.01, 62.43, 47.05), which aligns with the relative stratification of mature, stable, developing, and weak types. From the perspective of the three-stage capability of “sensing-seizing-transforming,” Community A demonstrates overall balance and leads the ranking; Community D shows stronger performance in the seizing and transforming stages; Community C’s main weakness lies in the sensing stage; Community B remains weak across all three stages, with insufficient linkage between them. This stage-based differentiation further indicates that disparities in FELS capability are closely associated with communities’ different positions along the fire risk governance chain, particularly in early-stage risk recognition and coordination capacity.
The performance of the four communities on tertiary indicators of FELS capability is consistent with their respective economic, social, and linguistic-ecological conditions. Specifically, the results can be analyzed from five dimensions: People/Organization, Process/Institution, Data, Technology, and Collaboration (PPDTC), which together constitute the operational foundation of language-based fire risk communication and coordinated execution at the community level.
The main weaknesses of Community A lie in language service data support capability and intelligent predictive analysis capability. This community has a high proportion of migrant population, frequent international scenarios, and complex language demands, while its digital governance foundation is relatively strong. In terms of people/organization, both dedicated staff and volunteer teams are relatively complete and stable. Processes and institutions cover the full cycle from monitoring, early warning, dispatching, dissemination, and feedback to evaluation. Data collection channels are diversified, with consistent statistical standards. Technological tools are broadly available and highly usable. Collaborative mechanisms are institutionalized through social organizations and inter-departmental linkages. As a result, Community A demonstrates stable advantages across the sensing-seizing-transforming stages. Its remaining constraints mainly concern the sustainability of data-driven risk analysis and the continuous upgrading of language-based early warning mechanisms, rather than basic response capacity.
The weaker aspects of Community D include intelligent language technology capability, intelligent predictive analysis capability for fire scenarios, emergency language plan optimization capability, and the language volunteer training mechanism. This community generally benefits from strong grassroots organization and high public participation, with efficient human resource mobilization, effective feedback loops, and solid foundations for cross-departmental cooperation. However, data collection and integration are not sufficiently systematic, technological application remains limited, and early warning partly relies on experiential judgment. As a result, language-based risk sensing and early warning lag behind response and recovery functions, constraining further improvement in overall fire risk prevention effectiveness.
The weaknesses of Community C are concentrated in language service data support capability, intelligent predictive analysis capability, intelligent language technology capability, and emergency language plan optimization capability for fire scenarios. Its demographic structure combines aging residents and a large inflow of migrant workers, with complex housing patterns. Data channels are fragmented, standards inconsistent, and timeliness insufficient, while technological application is still at an early stage. Supported by grid-based governance and local social networks, the community can maintain a certain level of human resource mobilization during emergencies. However, front-end activation of language-based risk warnings and post-event learning mechanisms remain unstable, and coordination relies heavily on case-specific relationships. Overall, its performance is characterized as “weak in the front stage—maintainable in the mid-stage—needing consolidation in the later stage”.
Community B exhibits weaknesses across multiple dimensions, especially in emergency information dissemination capability, cross-departmental language coordination capability, residents’ language adaptation capability, and evaluation and feedback capability, while also remaining at low levels in data support, intelligent analysis, and intelligent language technologies. The community faces simultaneous population outflow and aging, with limited fiscal and technological input. Human and organizational reserves are inadequate, with weak training and retention. Processes exist but are updated slowly and lack enforcement. Data systems are fragile and statistical standards inconsistent. Technology coverage and adoption are limited, while collaborative networks are weak and cross-departmental mobilization is slow. These deficiencies directly undermine the effectiveness of language-based risk communication and coordinated execution, resulting in low performance across the sensing–seizing–transforming stages, with poor linkage between them.
In summary, Community A demonstrates balanced performance across all stages; Community D excels in the seizing and transforming stages but lags in early recognition and warning; Community C is constrained by limited data and technological foundations in the front stage; and Community B suffers from systemic weaknesses across the board. These findings align with the model outputs and stratification results, providing a clear basis for proposing differentiated improvement pathways in the next section.

5.3. Improvement Strategies for Case Communities

Based on the preceding evaluation results, this study proposes differentiated optimization pathways for the four types of communities under the sensing–seizing–transforming framework. The overarching objective is to enhance the effectiveness of language-based fire risk prevention, early warning, response coordination, and post-event learning at the community level. The overarching principle is to first address common weaknesses, and then proceed with stepwise, context-specific optimization of community emergency language service dynamic capabilities.
Firstly, Strengthening Foundational Capacities: Advancing Common Initiatives. To ensure the stable functioning of the three capability stages and provide comparable data for future re-evaluation using the proposed model, five foundational initiatives are recommended for simultaneous implementation across all communities: (1) Data Standards and Collection. Establish community-level micro datasets with unified fields and coding [26]. The dataset should include, but not be limited to: fire-related emergency event ID, target languages and groups (elderly, foreign residents, persons with disabilities, etc.), information channels, processing outcomes, versions of information releases, and feedback records. Responsibilities for data sources, reporting, and updating frequency should be clearly defined. Validation rules should cover missing data, logic checks, and temporal consistency. This work can be embedded into existing ledgers and grid-based reporting systems, ensuring low cost and quick results. (2) Early-Warning Triggers and Response Checklists. For frequent emergency scenarios such as foreign language assistance and rumor clarification, define explicit early-warning triggers (thresholds or key fields), response deadlines, and role assignments. Supplement these with multilingual information templates, which should be executed as annexes to contingency plans. (3) Training and Drills. Organize scenario-based exercises on a quarterly basis [27], integrating language volunteers, hotline operators, grid staff, and relevant departments into unified workflows. Drill scripts should specify context, time limits, and languages involved. Observation metrics include connection delay, arrival time, translation accuracy, information consistency, and closure rate of feedback loops. Each exercise should generate a problem list, which will be incorporated into the annual revision process. (4) Technology Application and Resident Participation Evaluation [28]. Introduce lightweight tools such as AI translation, automatic transcription, and intelligent customer service into key processes (e.g., hotline reception, on-site record-keeping, generation of multilingual texts). Collect community residents’ feedback and integrate it with experts’ adjustment suggestions to continuously update the consensus. (5) Coordination Mechanisms. Regularly update contact directories and duty rosters, and institutionalize a “first contact–confirmation–receipt” closed-loop communication process. Effectiveness checks should be conducted monthly, and joint exercises should be organized quarterly to test and refine inter-organizational coordination. These five initiatives correspond to common weaknesses observed across community emergency language service dynamic capabilities. Importantly, the proposed optimization strategies are independent of community size and resource disparities, and can be implemented under existing organizational and digital governance conditions.
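The validation rules named in initiative (1) (missing data, logic checks, temporal consistency) can be illustrated with a minimal record checker. The field names, group codes, and timestamp format below are hypothetical stand-ins for whatever the community data standard would actually fix.

```python
from datetime import datetime

# Hypothetical field set and code list illustrating initiative (1); a real
# deployment would take these from the agreed community data standard.
REQUIRED = {"event_id", "target_language", "target_group", "channel",
            "reported_at", "closed_at", "outcome"}
GROUPS = {"elderly", "foreign_resident", "disabled", "general"}

def validate(record):
    errors = []
    # missing-data check: every required field must be present
    for f in sorted(REQUIRED - record.keys()):
        errors.append(f"missing field: {f}")
    # logic check: group code must come from the agreed code list
    if record.get("target_group") not in GROUPS:
        errors.append("unknown target_group code")
    # temporal-consistency check: closure cannot precede reporting
    try:
        t0 = datetime.fromisoformat(record["reported_at"])
        t1 = datetime.fromisoformat(record["closed_at"])
        if t1 < t0:
            errors.append("closed_at earlier than reported_at")
    except (KeyError, ValueError):
        errors.append("unparseable timestamp")
    return errors

rec = {"event_id": "FE-2025-001", "target_language": "en",
       "target_group": "foreign_resident", "channel": "hotline",
       "reported_at": "2025-03-01T10:00", "closed_at": "2025-03-01T09:00",
       "outcome": "resolved"}
print(validate(rec))  # flags the temporal inconsistency
```

Embedding such checks into existing ledgers and grid-based reporting systems is what keeps this initiative low-cost: records are rejected or flagged at entry rather than cleaned after the fact.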
Secondly, Differentiated Optimization Strategies. Following the implementation of common capacity-building measures, differentiated improvement pathways are proposed based on evaluation results and community profiles: (1) Community A (Internationalized and Mature Type): Already equipped with bilingual (Chinese–English) public services, multilingual volunteer teams, and well-developed information systems. It is recommended to further refine data dictionaries and establish quality auditing mechanisms, including anomaly thresholds and temporal anomaly detection with automated alerts. A multilingual knowledge base (frequently asked questions, scenario templates, common terminology) should be integrated into volunteer training and assessment, while model evaluation results should be incorporated into annual contingency plan revisions. This pathway emphasizes process and rule optimization, with controllable marginal costs. (2) Community D (Diverse Economy Type): Strong in mid- and post-crisis capabilities, but relatively weak in sensing. It is recommended to build multi-source data collection systems with standardized fields, incorporating hotlines, grid units, community platforms, schools, and medical institutions into a fixed reporting list. Early-warning thresholds and multilingual information templates should be explicitly defined and linked to command platforms. On-site and hotline services should be equipped with data capture and transcription tools, while language volunteer training should be integrated with quarterly drills. Given its strong grassroots organizational capacity and high resident participation, this community can advance improvements in phases, with technical and financial support potentially secured through district-level coordination. (3) Community C (New Development Zone Type): Characterized by weak front-end systems, insufficient institutional foundations, and highly diverse linguistic backgrounds. It is recommended to establish a one-to-one mechanism linking community informants with data collection points to ensure accountability. User-friendly mobile tools (e.g., mini-programs, speech-to-text applications) should be deployed to reduce reporting costs. Contingency plans should be compressed into key procedural steps and aligned with grid responsibilities, with fixed quarterly reviews and problem-closure processes. This pathway prioritizes institutional and procedural construction, with moderate investment and high feasibility. (4) Community B (Old and Vulnerable Type): Constrained by systemic weaknesses and requires comprehensive remedial measures. It is recommended to establish shared translation and command-support centers at the district/county level; develop simplified operating guidelines for communities, accompanied by centralized training and on-the-job practice; provide basic hotlines and entry-level AI tools; and build cross-departmental contact directories with quarterly joint training. For information dissemination, offline channels (bulletin boards, community broadcasts) and dialect/bilingual prompts should be maintained to cover elderly populations. Costs can be shared at the regional level through fiscal transfers, partnerships with social organizations, and collaborations with universities [29], enabling rapid formation of a stable operational mechanism.
Emergency language services span the entire process of fire governance. As a coupling mechanism between policy and action, they accelerate information transmission, promote cross-departmental coordination, and enhance public trust [30,31], thereby strengthening the overall capacity of urban community FELS. Moreover, language barriers during fire and other sudden disasters exacerbate information inequality [32], leaving linguistically vulnerable groups in “information blind zones”. Therefore, emergency language services play a crucial role in improving the accessibility of emergency communication for disadvantaged populations in urban communities.
This evaluation framework provides a practical decision-support tool for community-level fire governance. By decomposing emergency language service capability into sensing, seizing, and transforming dimensions, community managers can identify specific capability bottlenecks rather than relying on aggregated scores. For example, low scores in information dissemination or cross-departmental coordination indicate priority areas for intervention during the response stage, while weaknesses in demand identification or feedback mechanisms signal structural gaps in preparedness and post-event learning. Moreover, the weighted results enable resource prioritization under constrained conditions, allowing limited financial, technological, and human resources to be allocated to high-impact capability dimensions. In this way, the proposed model supports evidence-based capability diagnosis, targeted capacity-building strategies, and adaptive optimization of FELS in communities.

5.4. International Perspective and Technological Evolution of FELS Evaluation

5.4.1. International Practices and Research Contribution

From an international perspective, multilingual emergency communication has become an essential component of emergency governance systems in communities with diverse populations. Existing research and policy practices indicate that multiple countries and regions have systematically promoted language accessibility in emergency contexts at the institutional level. For example, during the COVID-19 pandemic, South Korea developed and promoted a sign language interpretation application for people with hearing impairments, providing round-the-clock emergency language support [33]. Ireland’s Major Emergency Preparedness Guidelines explicitly require that disaster information be disseminated in accessible formats such as Braille and large-print versions. New Zealand’s National Civil Defense Emergency Management Plan proposes the use of multiple languages in disaster information dissemination and the provision of translation services for linguistically vulnerable groups. Japan, through long-term disaster governance practice, has developed institutionalized language strategies such as “disaster-reduction Japanese” and “Plain Japanese” to lower the barriers to information comprehension for foreign residents in emergency situations [34]. In addition, emergency communication guidelines jointly issued by the World Federation of the Deaf and the International Association of Sign Language Interpreters provide professional standards for information access and communication pathways for people with hearing impairments during sudden emergencies [35].
Overall, international research and practice in this field mainly focus on the policy design of emergency language services, modes of service provision, and case-based analyses of specific emergency events. These studies emphasize multilingual information dissemination, accessibility for vulnerable linguistic groups, the use of sign language and plain language, and the effectiveness of translation technologies in emergency scenarios. However, most of this work takes individual tools, specific institutional arrangements, or particular case contexts as its analytical focus, and pays relatively limited attention to the systematic evaluation and structural diagnosis of emergency language service capabilities at the community level.
In contrast to these research orientations, the contribution of this study lies in constructing a community-level evaluation framework for emergency language service capabilities from the perspective of dynamic governance capabilities. This framework conceptualizes emergency language services as an integrated capability system composed of three stages—sensing, seizing, and transforming—rather than as isolated institutional arrangements or technical tools. Through indicator-based design and multi-source weighting methods, the framework enables quantitative identification of capability bottlenecks, prioritization of governance actions, and guidance for improvement pathways. In this sense, the present study does not replicate existing international experience-oriented research, but instead complements international multilingual emergency practice by providing an evaluation- and governance-diagnostic perspective, offering a transferable analytical framework for the comparative assessment and optimization of community emergency language capabilities across different institutional contexts.

5.4.2. Technological Evolution and Adaptive Evaluation Under Rapid AI Development

Building on the above international comparison and the evaluation-oriented contribution of this study, it should be further noted that the role of technology in emergency language service capability assessment requires careful interpretation. In the context of the rapid evolution of artificial intelligence and language technologies, the weights assigned to technology-related indicators exhibit clear stage-dependent and context-sensitive characteristics. The technology-related indicator weights derived in this study, based on the current sample and temporal window, do not imply limited long-term significance of technological capacity in governance practice. Rather, they reflect the fact that, at the present stage of urban community emergency governance, such technological capacities have not yet achieved a high level of differentiation, stability, or institutional embedding across communities.
As technologies such as machine translation, speech recognition, automated text generation, and intelligent predictive analytics gradually shift from auxiliary tools to foundational components of grassroots governance, the functional role of technological capacity within emergency language service systems is expected to continue evolving. Under conditions of higher technological penetration or more complex risk scenarios, these technology-related indicators may move from a supporting role to a more decisive or driving position, accompanied by corresponding adjustments in their relative weight within the overall evaluation framework.
To address the potential evaluation lag induced by the rapid iteration of artificial intelligence technologies, the evaluation framework developed in this study incorporates a certain degree of methodological adaptability. First, the data-driven weighting approach is capable of dynamically capturing the actual discriminative effect of technological capacities across communities through changes in indicator dispersion and information content. Second, the multi-feature fusion weighting mechanism avoids rigid dependence on any single technological pathway, thereby reducing the risk of distorted evaluation results caused by the rapid updating of specific technologies. Third, the proposed evaluation model is not intended as a one-off analytical tool; rather, it allows for the synchronized adjustment of both weight structures and evaluation outcomes through rolling data updates and periodic application.
Accordingly, the proposed evaluation system is better understood as an iterative and updatable governance diagnostic instrument, rather than as a static judgment on technological development trends. This characteristic enables the framework to continuously support the dynamic assessment and governance optimization of emergency language service capabilities in urban communities amid the ongoing evolution of artificial intelligence.

6. Conclusions

This study develops an evaluation system for the dynamic capability of FELS in urban communities, covering key stages including indicator system construction, multi-source heterogeneous data integration, combined knowledge-driven and data-driven weighting, and linear integrated evaluation. The system has been applied and validated in four representative types of communities. The evaluation framework consists of two components, a customizable indicator system and a fixed evaluation model, which can be adapted and reused across different community contexts. By design, the proposed framework serves as a governance-oriented assessment tool that supports language-based fire risk prevention, early warning, coordinated response, and post-event recovery at the community level.
At the indicator level, by combining dynamic capability theory with text mining methods, a three-tier indicator system for community FELS is constructed. It covers the entire process of FELS, from pre-event identification to post-event improvement, thereby enhancing the comprehensiveness and objectivity of the indicator system. This process-oriented design enables the systematic identification of language-related risks and communication bottlenecks along the fire risk governance chain.
At the data processing level, qualitative indicators are converted into quantitative measures based on domain knowledge, and corresponding scoring rules are developed. This unifies diverse and heterogeneous observations into a 0–100 dimensionless range, providing a standardized data foundation for subsequent weight calculation and comprehensive scoring. Such standardization allows language-based governance capacities to be compared across communities and integrated into broader fire risk management diagnostics.
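The section does not spell out the normalization formula; a standard min-max scaling onto the stated 0–100 dimensionless range, with an inversion for cost-type indicators (an assumption here, since the paper only describes the target range), would look like the following sketch.

```python
import numpy as np

def to_score(x, benefit=True):
    """Min-max normalize raw indicator values onto a 0-100 scale.
    benefit=False inverts cost-type indicators so that higher is better."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    if hi == lo:                       # constant column: no discrimination
        return np.full_like(x, 50.0)
    s = (x - lo) / (hi - lo)
    return 100 * (s if benefit else 1 - s)

print(to_score([2, 4, 6, 10]))                 # benefit-type indicator
print(to_score([2, 4, 6, 10], benefit=False))  # cost-type indicator
```

Whatever the exact rule, the point of this step is that every qualitative or quantitative observation enters the weighting and scoring stages on the same dimensionless footing.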
In terms of weight integration, the mechanism-related characteristics of indicators are quantified using FAHP and fuzzy DEMATEL to generate knowledge-driven weights, while the CRITIC method quantifies indicator dispersion and conflict to generate data-driven weights. The Euclidean distance approach is then employed to determine the combination coefficients of the knowledge-driven and data-driven weights, yielding the multi-feature fusion weighting model. This effectively consolidates expert experience on the knowledge-driven side with data characteristics on the data-driven side, ensuring that the weighting results are both interpretable in terms of governance mechanisms and sensitive to observed differences in community-level fire risk communication and coordination performance.
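The text names the two weight sources and the Euclidean distance approach but not the combination formula. The sketch below uses one common distance-function formulation (an assumption here): the gap between the two combination coefficients is set equal to the normalized Euclidean distance between the weight vectors, i.e. α − β = d with α + β = 1, after which the fused weights feed the linear composite score. All weight vectors and indicator scores are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
w_k = rng.dirichlet(np.ones(12))   # hypothetical knowledge-driven weights (FAHP + fuzzy DEMATEL)
w_d = rng.dirichlet(np.ones(12))   # hypothetical data-driven weights (CRITIC)

# Distance-function combination (assumed form): the coefficient gap matches
# the normalized Euclidean distance d between the two weight vectors.
d = np.sqrt(0.5 * np.sum((w_k - w_d) ** 2))   # d lies in [0, 1]
alpha = (1 + d) / 2                            # knowledge-driven share
beta = 1 - alpha                               # data-driven share
w = alpha * w_k + beta * w_d                   # fused weights, sum to 1

# Linear weighted composite scores for four communities on a 0-100 scale.
X = rng.uniform(40, 95, size=(4, 12))          # hypothetical normalized scores
scores = X @ w
print(round(alpha, 3), np.round(scores, 2))
```

Under this formulation, the more the two weight vectors disagree, the larger α becomes, tilting the fusion toward the knowledge-driven side; when they coincide (d = 0) the two sources are weighted equally.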
In the case evaluation, the overall dynamic capability levels of FELS across the four communities exhibit a stratified pattern of mature, stable, developing, and weak types. The main sources of differentiation lie in high-weight indicators (such as human resource scheduling, risk early warning, information dissemination, and cross-departmental coordination). These results indicate that weaknesses in high-weight indicators correspond to critical leverage points where deficiencies in language-based risk communication and coordinated execution may amplify fire risks.
In the model validation, external validity tests show that the composite score is significantly correlated with six operational indicators. Dimension-level results are consistent with their corresponding operational stages. Threshold-free clustering and inter-group tests effectively distinguish the four community types of mature, stable, developing, and weak. Robustness checks demonstrate that under weight perturbations (±10%/±20%) and indicator ablation, the relative rankings remain unchanged. Sensitivity is concentrated on a few key indicators, such as human resource scheduling and risk early warning, confirming that the model’s conclusions are insensitive to technical settings and remain stable.
In summary, the proposed evaluation system can be applied to diagnose urban community FELS capabilities, identify weaknesses, and prioritize governance actions. It also provides a quantitative basis for resource allocation and process optimization. By adopting FELS dynamic capability as a diagnostic lens, the framework indirectly but critically supports community fire risk prevention, control, and resilience enhancement through improved language-based communication and execution mechanisms. Despite the contributions of this study, a limitation remains. The empirical data were collected from urban communities in four provinces in China, and although the sample encompasses economic, geographic, and governance diversity, caution is required when extending the findings to other national or institutional contexts. Future research may further enhance the generalizability of the results by incorporating cross-regional or international comparative data and expanding expert participation.

Author Contributions

Conceptualization, H.L.; methodology, H.L.; formal analysis, H.L.; investigation, H.M.; resources, Z.G. and Q.S.; writing—original draft, H.L.; writing—review and editing, Q.S.; visualization, H.L. and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shi, L.; Wang, J.; Li, G.; Chew, M.Y.L.; Zhang, H.; Zhang, G.; Dlugogorski, B.Z. Increasing fire risks in cities worldwide under warming climate. Nat. Cities 2025, 2, 254–264. [Google Scholar] [CrossRef]
  2. Haupt, B. The use of crisis communication strategies in emergency management. J. Homel. Secur. Emerg. Manag. 2021, 18, 125–150. [Google Scholar] [CrossRef]
  3. Xiang, T.; Gerber, B.J.; Zhang, F. Language access in emergency and disaster preparedness: An assessment of local government “whole community” efforts in the United States. Int. J. Disaster Risk Reduct. 2021, 55, 102072. [Google Scholar] [CrossRef]
  4. Wu, J.; Yang, G.; Liu, Z.; Liu, Y.; Guo, J.; Yan, G.; Ding, G.; Fu, C.; Yang, Z.; Yang, X.; et al. Language processing in emergencies recruits both language and default mode networks. Neuropsychologia 2025, 213, 109152. [Google Scholar] [CrossRef]
  5. Wang, L.F.; Ren, J.; Sun, J.W.; Meng, Y.Y. Concept, research status, and institutional construction of emergency language services. J. Beijing Int. Stud. Univ. 2020, 42, 21–30. [Google Scholar]
  6. Li, Y.M.; Rao, G.Q.; Zhang, J.; Li, J. Conceptualizing national emergency language competence. Multilingua 2020, 39, 617–623. [Google Scholar] [CrossRef]
  7. Li, Z.; She, J.; Guo, Z.; Du, J.; Zhou, Y. An evaluation of factors influencing the community emergency management under compounding risks perspective. Int. J. Disaster Risk Reduct. 2024, 100, 104179. [Google Scholar] [CrossRef]
  8. Wang, H.; Liu, X.; Wang, J.; Han, H. Research on the formulation of national emergency language service plans. Lang. Strategy Res. 2025, 10, 83–96. [Google Scholar]
  9. Teng, Y.J. Competence of emergency language service providers and evaluation of emergency language talents. J. Tianjin Foreign Stud. Univ. 2021, 28, 20–31, 157–158. [Google Scholar]
  10. Wang, L.F.; Li, Z. Construction and interpretation of the framework for emergency language education system. Shandong Foreign Lang. Teach. 2023, 44, 8–18. [Google Scholar]
  11. Brandenberger, J.; Stedman, I.; Stancati, N.; Sappleton, K.; Kanathasan, S.; Fayyaz, J.; Singh, D. Using artificial intelligence-based language interpretation in non-urgent paediatric emergency consultations: A clinical performance test and legal evaluation. BMC Health Serv. Res. 2025, 25, 138. [Google Scholar] [CrossRef] [PubMed]
  12. Elmhadhbi, L.; Karray, M.H.; Archimède, B.; Otte, J.N.; Smith, B. PROMES: An ontology-based messaging service for semantically interoperable information exchange during disaster response. J. Contingencies Crisis Manag. 2020, 28, 324–338. [Google Scholar] [CrossRef]
  13. Wang, L.F.; Xie, Y.X. Construction and validation of an emergency language service capability evaluation system. China Emerg. Manag. Sci. 2025, 06, 112–123. [Google Scholar]
  14. Wang, L.F.; Jin, Y.J.; Li, J.X. Construction and validation of an evaluation index system for language service competitiveness. Chin. Transl. J. 2022, 43, 116–125. [Google Scholar]
  15. Guan, X.; Li, W.; Cui, N.; Yu, J.; An, L. Construction of an evaluation indicator system for the emergency management capability of major infectious diseases in urban communities. BMC Health Serv. Res. 2025, 25, 857. [Google Scholar] [CrossRef]
  16. Embrett, M.; Carson, A.; Sim, M.; Conway, A.; Moore, E.; Hancock, K.; Bielska, I. Building resilient and responsive health research systems: Responses and the lessons learned from the COVID-19 pandemic. Health Res. Policy Syst. 2025, 23, 38. [Google Scholar] [CrossRef]
  17. Mahl, D.; Schäfer, M.S.; Voinea, S.A.; Adib, K.; Duncan, B.; Salvi, C.; Novillo-Ortiz, D. Responsible artificial intelligence in public health: A Delphi study on risk communication, community engagement and infodemic management. BMJ Glob. Health 2025, 10, 5. [Google Scholar] [CrossRef]
  18. Teece, D.J.; Pisano, G.; Shuen, A. Dynamic capabilities and strategic management. Strateg. Manag. J. 1997, 18, 509–533. [Google Scholar] [CrossRef]
  19. Teece, D.J.; Peteraf, M.; Leih, S. Dynamic Capabilities and Organizational Agility: Risk, Uncertainty, and Strategy in the Innovation Economy. Calif. Manag. Rev. 2016, 58, 13–35. [Google Scholar] [CrossRef]
  20. Jiang, X.M. A survey on the demand and supply of medical language services for Yi patients in Xichang hospitals. Lang. Strategy Res. 2025, 10, 34–43. [Google Scholar]
  21. Sun, Q.; Zhang, Y.; Wang, F. Relations of residents’ knowledge, satisfaction, perceived importance and participation in emergency management: A case study in modern and old urban communities of Ningbo, China. Int. J. Disaster Risk Reduct. 2022, 76, 102997. [Google Scholar] [CrossRef]
  22. Lu, J.K. Language Services for Fighting Against COVID-19 in South Korea. In Language Situation in Foreign Countries; The Commercial Press: Beijing, China, 2021; pp. 139–143. [Google Scholar]
  23. Lu, C.; Li, S.; Xu, K.; Zhang, Y. Research on data-driven coal mine environmental safety risk assessment system. Saf. Sci. 2025, 183, 106727. [Google Scholar] [CrossRef]
  24. Lu, C.; Li, S.; Xu, N.; Qin, Y. A knowledge-data-structure hybrid-driven multi-characteristic fusion weighting model—Application of coal and gas outburst risk assessment. Expert Syst. Appl. 2026, 299, 130201. [Google Scholar] [CrossRef]
  25. Yu, L.P.; Zheng, K. Comparison and Consideration of Different Objective Weighting Methods in Journal Evaluation. J. Mod. Inf. 2021, 41, 121–130. [Google Scholar]
  26. Lowndes, A.M.; Connelly, D.M. User experiences of older adults navigating an online database of community-based physical activity programs. Digit. Health 2023, 9, 20552076231167004. [Google Scholar] [CrossRef]
  27. Wolf-Fordham, S. Integrating government silos: Local emergency management and public health department collaboration for emergency planning and response. Am. Rev. Public Adm. 2020, 50, 560–567. [Google Scholar] [CrossRef]
  28. Cheng, X.; Gan, W.; Zhang, K.; Xu, Z.; Gou, X. A dynamically updated consensus model for large-scale group decision-making driven by the evolution of public sentiment in emergencies. Inf. Fusion 2026, 126, 103484. [Google Scholar] [CrossRef]
  29. Satizábal, P.; Cornes, I.; Zurita, M.D.L.M.; Cook, B.R. The power of connection: Navigating the constraints of community engagement for disaster risk reduction. Int. J. Disaster Risk Reduct. 2022, 68, 102699. [Google Scholar] [CrossRef]
  30. Shaghaghi, N.; Patel, S.; Pabari, B.; Francis, M. Implementing Communications and Information Dissemination Technologies for First Responders. In Proceedings of the 2019 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 17–20 October 2019; pp. 1–4. [Google Scholar]
  31. Jiang, X.M. Textual analysis of emergency language policies, laws, and regulations in the context of Chinese-style modernization. J. Cap. Norm. Univ. Soc. Sci. Ed. 2025, 2, 141–150. [Google Scholar]
  32. Wei, Y.; Qian, Y. Modernization of China’s emergency language service system and capacity in the new era. J. South-Cent. Univ. Natl. (Humanit. Soc. Sci.) 2023, 43, 134–142, 187. [Google Scholar]
  33. Lu, C.; Li, S.; Xu, N.; Zhang, Y.; Qin, Y. Hybrid-driven risk assessment methodology for coal and gas outburst: Integration of complex network, disaster mechanism, and multi-level fusion modeling. Process Saf. Environ. Prot. 2025, 199, 107226. [Google Scholar] [CrossRef]
  34. Yao, Y.L. A Study of “Plain Language” Policy in Japan and Features of Emergency Language. J. Jpn. Lang. Study Res. 2021, 05, 21–28. [Google Scholar] [CrossRef]
  35. Zhang, T.W. An Effective Way to Build up U.S. On-Call National Language Capacity: A Case Study of the U.S. National Language Service Corps. Chin. J. Lang. Policy Plan. 2016, 1, 88–96. [Google Scholar]
Figure 1. Conceptual Framework of Urban Community FELS Capabilities. (Note: The different colors in this figure are used only for visual distinction and do not indicate specific categories.)
Figure 2. Keyword Cloud of Urban Community FELS.
Figure 3. Combined Weights of Tertiary Indicators for the Dynamic Capability of FELS in Communities A, B, C, and D.
Figure 4. Spearman Correlation between Model Composite Scores and External Operational Indicators.
Figure 5. Spearman Correlation between Model Dimension Scores and Corresponding External Indicators.
Figure 6. Cluster Profile Heatmap of External Operational Indicators (Z-scores).
Figure 7. Results of Indicator Ablation Experiment under Three Scenarios.
Table 1. Transformation of Capability Elements for Urban Community FELS.

| Keywords | Capability Elements |
|---|---|
| Elderly Needs, Foreign Nationals, Ethnic Minorities, Information Needs, Actual Needs, Emergency Knowledge Needs, Unmet Needs | Language Demand Identification |
| Monitoring, Monitoring Network, Population Structure, Risk Assessment | Language Risk Monitoring |
| Predictive Warning, Prevention, Early Warning Response | Rapid Early Warning Capability |
| WeChat Survey, Volunteer Home Visits, Baseline Survey, Real-Time Website Registration, Statistical Registration, Survey, Basic Information Records | Data Collection Channels |
| Level of Networking, Big Data, Decision-Making, Prevention Measures, Assessment Report | Intelligent Predictive Analysis |
| Foreign Languages, Foreign Residents, Dialects, Doctor-Patient Communication Volunteers, Psychological Counseling, Dispute Mediation | Human Resource Reserve |
| Emergency Language Communication, Volunteer Arrival Time, Dispatch Efficiency | Human Resource Dispatch Response Time |
| Language Resource Reserve and Allocation, Low-Income Households, Social Organizations | Language Service Coverage |
| AI Translation, Speech Recognition, Big Data, Equipment Maintenance | Intelligent Language Technologies and Equipment |
| Community Broadcasting, Outdoor Billboards, Banners, Publicity Slogans, Publicity Boards, Leaflets, Social Media, Hotline, Door-to-Door Notification, Gong Beating, WeChat, Accurate and Clear Information, Complete Communication Links | Emergency Information Dissemination Channels |
| Information Delay, Late Submission of Emergency Information, Timely Reporting | Timeliness of Information Dissemination |
| Liaison Mechanism, Joint Response, Poor Communication and Coordination, Collaborative Linkage, Police, Firefighting, Medical Services, Cross-Department Coordination Meetings, Joint Actions | Cross-Department Language Coordination |
| Emergency Drills, Field Drills | Quality of Emergency Drills |
| Delayed Update of Emergency Plans, Emergency Plan Formulation, Lag in Plan Updates, Plan Revision | Emergency Plan Optimization |
| Volunteer Service Team, Emergency Knowledge Training, Psychological Adjustment Skills, Mobilization, Team Building, Inadequate Training, Online Training, Theoretical Knowledge | Language Volunteer Training |
| Emergency Publicity and Education, Emergency Knowledge, Publicity Coverage, Popular Science, Knowledge Popularization Rate, Emergency Awareness, First Aid Knowledge, Emergency Knowledge Competition, Emergency Knowledge Q&A, Public Emergency Competence, Public Participation, Emergency Knowledge Lectures, Safety Awareness | Residents' Language Adaptability |
| Risk Assessment Mechanism, Summary, Annual Review | Evaluation Mechanism |
| Interviewed Residents, Improvement | Resident Feedback |
Table 2. Transformation of Evaluation Indicators for Urban Community FELS Capability.

| Capability Elements | Corresponding Tertiary Indicator |
|---|---|
| Language Demand Identification | Language Demand Identification Capability |
| Language Risk Monitoring; Rapid Early Warning Capability | Language Risk Early Warning Capability |
| Data Collection Channels | Language Service Data Support Capability |
| Intelligent Predictive Analysis | Intelligent Predictive Analysis Capability |
| Human Resource Reserve; Human Resource Dispatch Response Time; Language Service Coverage | Emergency Language Human Resource Scheduling Capability |
| Intelligent Language Technologies and Equipment | Intelligent Language Technology Capability |
| Emergency Information Dissemination Channels; Timeliness of Information Dissemination | Emergency Information Dissemination Capability |
| Cross-Department Language Coordination | Cross-Department Language Coordination Capability |
| Quality of Emergency Drills; Emergency Plan Optimization | Emergency Language Plan Optimization Capability |
| Language Volunteer Training | Language Volunteer Training Capability |
| Residents' Language Adaptability | Residents' Language Adaptability Capability |
| Evaluation Mechanism; Resident Feedback | Evaluation and Feedback Capability |
Table 3. Evaluation Indicator System for Urban Community FELS Capability.

| First-Level Indicator | Second-Level Indicator | Code | Tertiary Indicator | Code | Indicator Definition |
|---|---|---|---|---|---|
| Dynamic Capability of Community FELS | Sensing Capability | C1 | Language Demand Identification Capability | C11 | Evaluates the community's ability to accurately identify and predict the language needs of different groups (e.g., foreign nationals, ethnic minorities, the elderly) and to anticipate possible FELS needs in advance. |
| | | | Language Risk Early Warning Capability | C12 | Evaluates the extent to which the community has established a risk monitoring and early warning system for language services, including its capability to monitor linguistic risks and issue rapid early warnings prior to fire incidents. |
| | | | Language Service Data Support Capability | C13 | Evaluates the completeness and timeliness of data collection channels for language services, reflecting the depth and quality of data support. |
| | | | Intelligent Predictive Analysis Capability | C14 | Evaluates the community's ability to use AI, big data, and machine learning to analyze language needs and support decision-making. |
| | Seizing Capability | C2 | Emergency Language Human Resource Scheduling Capability | C21 | Evaluates the community's ability to efficiently and accurately mobilize FELS personnel, including professional interpreters, volunteers, psychological counselors, and speakers of dialects and minority languages, to ensure effective support for emergency communication, psychological comfort, and information dissemination. |
| | | | Intelligent Language Technology Capability | C22 | Evaluates whether the community effectively applies intelligent language technologies (e.g., AI translation, automatic speech recognition) to improve the efficiency of language services in fires. |
| | | | Emergency Information Dissemination Capability | C23 | Evaluates the coverage and precision of the community's information dissemination channels in fires. |
| | | | Cross-Department Language Coordination Capability | C24 | Evaluates how the community coordinates multiple actors (e.g., government agencies, NGOs, technology enterprises) during fires to ensure rapid sharing and mobilization of language resources. |
| | Transforming Capability | C3 | Emergency Language Plan Optimization Capability | C31 | Evaluates the professionalism and executability of the community's FELS plans. |
| | | | Language Volunteer Training Capability | C32 | Evaluates the scale, quality, and sustained impact of regular community language volunteer training, ensuring sufficient coverage, achievement of assessment standards, and effective participation in drills and emergencies. |
| | | | Residents' Language Adaptability Capability | C33 | Evaluates the multilingual adaptability of community residents, including participation in everyday language training and educational outreach activities. |
| | | | Evaluation and Feedback Capability | C34 | Evaluates the normalization and enforcement of post-event evaluation mechanisms, the coverage and participation of feedback channels, and the effectiveness of applying evaluation results to improvement measures. |
Table 4. Evaluation Indicator System for the Dynamic Capability of Urban Community FELS. For each tertiary indicator, the measurement methods and corresponding calculation formulas are listed below.
C11. Measurement methods: fire-related accuracy of predicted number of language needs (%); frequency of language demand surveys (times/year); validity of sampling surveys (whether sampling methods and sample size are clearly defined) (yes/no).
Calculation formulas:
Fire-related Language Demand Prediction Accuracy (%) = Predicted total number of people with language needs ÷ Actual total number of people with language needs × 100%.
Frequency of Language Demand Surveys (times/year) = Number of language demand-related surveys conducted by the community in one year.
Validity of Sampling Surveys (yes/no) = "Yes" if the community clearly specifies the sampling method, sample size, and frequency for language demand surveys; otherwise "No."
C12. Measurement methods: number of fire-related risk monitoring indicators (count); accuracy of the fire-related language risk early warning system (%); average early warning response time (minutes).
Calculation formulas:
Number of Fire Risk Monitoring Indicators (count) = Total number of indicators in the community's language service risk monitoring database, e.g., size of the population with language barriers, number of multilingual demands, number of foreign residents, failure rate of translation technologies, volunteer attrition rate.
Accuracy of Fire-related Language Risk Early Warning System (%) = Valid warnings (accurate predictions of real events) ÷ Total warnings issued × 100%.
Average Early Warning Response Time (minutes) = Total time from reaching the early warning threshold to release of the warning information ÷ Number of valid warnings triggered.
C13. Measurement methods: number of FELS data collection channels (count); population coverage rate of data collection (%); proportion of real-time data collection (%).
Calculation formulas:
Number of FELS Data Collection Channels (count) = Total number of channels used by the community to collect language service data (e.g., hotline, website platform, WeChat surveys, volunteer household registration), with each channel counted as one.
Population Coverage Rate of Data Collection (%) = Number of people covered by data collection ÷ Actual total number of people with language needs in the community × 100%.
Proportion of Real-Time Data Collection (%) = Number of channels capable of real-time data collection (e.g., smartphone apps, online registration systems, 24 h hotline) ÷ Total number of data collection channels × 100%.
C14. Measurement methods: frequency of data analysis report generation (times/year); frequency of model iteration and updates (times/year); adoption rate of decision-making recommendations (%).
Calculation formulas:
Frequency of Data Analysis Report Generation (times/year) = Total number of language demand data analysis reports produced in one year.
Frequency of Model Iteration and Updates (times/year) = Number of version updates or optimizations of the community's predictive models or algorithms in one year.
Adoption Rate of Decision-Making Recommendations (%) = Number of recommendations implemented ÷ Total number of recommendations × 100%.
C21. Measurement methods: status of emergency language service talent pool construction (yes/no); volunteer coverage rate (%); talent mobilization rate (%); reserve rate of professional translators (%); reserve rate of multilingual volunteers (%); reserve rate of dialect volunteers (%); reserve rate of psychological counseling personnel (%); response time for human resource scheduling (minutes).
Calculation formulas:
Status of Emergency Language Service Talent Pool Construction (yes/no).
Volunteer Coverage Rate (%) = Number of language service volunteers ÷ Community population × 100%.
Talent Mobilization Rate (%) = Number of volunteers actually deployed ÷ Total number in the talent pool × 100%.
Reserve Rate of Professional Translators (%) = Total number of professional translators ÷ Community population × 100%.
Reserve Rate of Multilingual Volunteers (%) = Number of volunteers proficient in ≥2 languages ÷ Community population × 100%.
Reserve Rate of Dialect Volunteers (%) = Number of dialect-speaking volunteers ÷ Community population × 100%.
Reserve Rate of Psychological Counseling Personnel (%) = Number of personnel providing psychological language services ÷ Community population × 100%.
Human Resource Scheduling Response Time (minutes) = Total time from dispatch order to actual arrival ÷ Number of personnel deployed in one event.
C22. Measurement methods: availability rate of AI tools (%); accuracy of AI translation tools (%); coverage rate of speech recognition/intelligent customer service systems (%); number of calls to intelligent language tools during emergencies (times/event).
Calculation formulas:
Availability Rate of AI Tools (%) = Number of AI language service tools available for immediate use ÷ Total number of AI tools allocated in the community × 100%.
Accuracy of AI Translation Tools (%) = Number of sentences judged accurate (by human review or automated evaluation) ÷ Total number of sampled sentences × 100%.
Coverage Rate of Speech Recognition/Intelligent Customer Service Systems (%) = Number of voice requests correctly recognized and answered ÷ Total number of voice requests × 100%.
Number of Calls to Intelligent Language Tools During Emergencies (times/event) = Total number of times intelligent language tools are invoked during emergencies ÷ Total number of emergency events.
C23. Measurement methods: number of emergency information dissemination channels (count); error rate of information transmission (%); availability of emergency language hotline (yes/no); timeliness of emergency information dissemination (minutes); overall coverage of emergency language services (%).
Calculation formulas:
Number of Emergency Information Dissemination Channels (count) = Number of emergency language information dissemination channels established and actually used by the community during events.
Error Rate of Information Transmission (%) = Number of miscommunications, mistranslations, or distortions in messages ÷ Total number of language messages released during events × 100%.
Availability of Emergency Language Hotline (yes/no) = "Yes" if the community opened a dedicated language emergency hotline during events; otherwise "No."
Timeliness of Emergency Information Dissemination (minutes) = Total time from event occurrence to release of the first language message ÷ Number of emergency events.
Overall Coverage of Emergency Language Services (%) = Number of service requests completed in the year ÷ Total number of service requests recorded in the year × 100%.
C24. Measurement methods: frequency of cross-department coordination meetings during fires (times/year); number of joint emergency response actions for language services (times/year); number of emergency cooperation agreements (count); response time for cross-department joint actions (hours).
Calculation formulas:
Frequency of Cross-Department Coordination Meetings During Emergencies (times/year) = Number of cross-department emergency language coordination meetings organized in response to emergencies within one year.
Number of Joint Emergency Response Actions for Language Services (times/year) = Total number of joint language emergency actions conducted by departments in one year.
Number of Emergency Cooperation Agreements (count) = Total number of formal agreements or memoranda of understanding on language emergency services signed in one year.
Response Time for Cross-Department Joint Actions (hours) = Total time from event occurrence to initiation of cross-department joint action ÷ Number of emergency events in the year.
C31. Measurement methods: quality score of emergency language service plans (points); effectiveness rate of drills (%).
Calculation formulas:
Quality Score of Emergency Language Service Plans (points) = Sum of expert scores from each update ÷ Number of updates (experts score 0–100 against the "Emergency Plan Standard"; scores are averaged annually).
Effectiveness Rate of Drills (%) = Number of effective drills ÷ Total number of drills × 100% (an "effective drill" meets both conditions: participants cover the target group specified in the plan, and the key process steps [dispatch → release → feedback] pass evaluation).
C32. Measurement methods: frequency of volunteer training sessions (times/year); training coverage rate (%); training pass rate (%); training retention rate (%).
Calculation formulas:
Frequency of Volunteer Training Sessions (times/year) = Total number of language volunteer training activities conducted in a year.
Training Coverage Rate (%) = Number of volunteers trained in a year ÷ Total number of volunteers in the language volunteer pool × 100%.
Training Pass Rate (%) = Number of volunteers who passed tests (written + practical) ÷ Total number of trained volunteers × 100%.
Training Retention Rate (%) = Number of trained volunteers who participated in drills/emergencies within six months ÷ Total number of trained volunteers × 100%.
C33. Measurement methods: proportion of community residents participating in language training (%); frequency of emergency language outreach/science popularization activities (times/year).
Calculation formulas:
Proportion of Community Residents Participating in Language Training (%) = Total number of residents who participated in community language training within one year ÷ Community population × 100%.
Frequency of Emergency Language Outreach/Science Popularization Activities (times/year) = Number of community science popularization activities related to emergency language services conducted in a year.
C34. Measurement methods: frequency of evaluations (times/year); coverage rate of resident feedback (%); implementation rate of improvement measures (%).
Calculation formulas:
Frequency of Evaluations (times/year) = Total number of post-event evaluation activities completed in a year.
Coverage Rate of Resident Feedback (%) = Number of residents who provided feedback ÷ Number of residents invited to provide feedback × 100%.
Implementation Rate of Improvement Measures (%) = Number of language service improvement measures implemented ÷ Total number of planned measures × 100%.
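Most Table 4 formulas are simple ratios of event counts. A minimal sketch of three of them is given below; the helper function names and all input counts are invented for illustration and are not part of the paper's tooling.

```python
# Illustrative computation of three Table 4 metrics.
# Function names and the sample counts are hypothetical.

def warning_accuracy(valid_warnings: int, total_warnings: int) -> float:
    """Accuracy of Fire-related Language Risk Early Warning System (%):
    valid warnings / total warnings issued x 100%."""
    return valid_warnings / total_warnings * 100

def training_retention_rate(active_after_training: int, trained: int) -> float:
    """Training Retention Rate (%): trained volunteers active in drills or
    emergencies within six months / total trained volunteers x 100%."""
    return active_after_training / trained * 100

def overall_coverage(completed_requests: int, total_requests: int) -> float:
    """Overall Coverage of Emergency Language Services (%):
    completed service requests / recorded service requests x 100%."""
    return completed_requests / total_requests * 100

print(warning_accuracy(18, 24))         # 18 valid of 24 issued warnings
print(training_retention_rate(34, 40))  # 34 of 40 trained volunteers active
print(overall_coverage(92, 100))        # 92 of 100 requests completed
```

Each metric is bounded and unit-consistent, which is what allows the later min–max normalization step to place all tertiary indicators on a common scale.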
Table 5. Conversion Rules of Fuzzy Linguistic Terms for Indicator Importance.

| Fuzzy Linguistic Term | Integer Scale | Triangular Fuzzy Number |
|---|---|---|
| Equally Important | 1 | (1, 1, 1) |
| | 2 | (1, 2, 3) |
| Slightly Important | 3 | (2, 3, 4) |
| | 4 | (3, 4, 5) |
| Important | 5 | (4, 5, 6) |
| | 6 | (5, 6, 7) |
| Very Important | 7 | (6, 7, 8) |
| | 8 | (7, 8, 9) |
| Extremely Important | 9 | (8, 9, 9) |
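The Table 5 scale can be applied mechanically: map each integer judgment to its triangular fuzzy number (TFN), aggregate across experts, then defuzzify. The sketch below uses component-wise averaging and the graded-mean formula (l + 4m + u)/6, a common FAHP choice; the paper's exact aggregation and defuzzification operators may differ, and the three expert judgments are invented.

```python
# Table 5 mapped as {integer scale: triangular fuzzy number (l, m, u)}.
TFN_SCALE = {
    1: (1, 1, 1), 2: (1, 2, 3), 3: (2, 3, 4), 4: (3, 4, 5), 5: (4, 5, 6),
    6: (5, 6, 7), 7: (6, 7, 8), 8: (7, 8, 9), 9: (8, 9, 9),
}

def graded_mean(tfn):
    """Graded-mean defuzzification (l + 4m + u) / 6 — one common choice,
    assumed here for illustration."""
    l, m, u = tfn
    return (l + 4 * m + u) / 6

# Three hypothetical expert judgments on one pairwise comparison
# (scales 5, 6, 7), averaged component-wise and then defuzzified.
judgments = [TFN_SCALE[5], TFN_SCALE[6], TFN_SCALE[7]]
avg = tuple(sum(c) / len(judgments) for c in zip(*judgments))
print(avg)               # (5.0, 6.0, 7.0)
print(graded_mean(avg))  # 6.0
```

The crisp values obtained this way feed the FAHP pairwise-comparison matrix from which the importance column of Table 8 is derived.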
Table 6. Conversion Rules of Fuzzy Linguistic Terms for Indicator Influence.

| Scale | Fuzzy Linguistic Term | Triangular Fuzzy Number |
|---|---|---|
| 1 | Very Weak Influence | [0, 0, 0.25] |
| 2 | Weak Influence | [0, 0.25, 0.5] |
| 3 | Moderate Influence | [0.25, 0.5, 0.75] |
| 4 | Strong Influence | [0.5, 0.75, 1] |
| 5 | Very Strong Influence | [0.75, 1, 1] |
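After the Table 6 linguistic judgments are defuzzified into a crisp direct-influence matrix, the standard DEMATEL stage computes the total-relation matrix T = N(I − N)⁻¹ and the prominence/relation measures. The sketch below illustrates that stage only; the 3×3 direct-influence matrix is invented, and the paper's fuzzy DEMATEL variant may carry the fuzzy arithmetic further before defuzzifying.

```python
import numpy as np

# Hypothetical crisp direct-influence matrix among three indicators
# (entries are defuzzified Table 6 judgments; diagonal is zero).
A = np.array([[0.0,  0.75, 0.5],
              [0.25, 0.0,  0.75],
              [0.5,  0.25, 0.0]])

N = A / A.sum(axis=1).max()            # normalize by the largest row sum
T = N @ np.linalg.inv(np.eye(3) - N)   # total-relation matrix T = N(I - N)^-1
D = T.sum(axis=1)                      # influence given by each indicator
R = T.sum(axis=0)                      # influence received by each indicator
print(np.round(D + R, 3))              # prominence (centrality)
print(np.round(D - R, 3))              # relation (> 0: net cause)
```

The D + R values characterize how strongly each indicator interacts with the system, which is the influence information combined with FAHP importance in the knowledge-driven weights of Table 8.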
Table 7. Profile of the Expert Panel.

| Expert | Age | Gender | Education | Field of Work/Research | Type | Years of Experience |
|---|---|---|---|---|---|---|
| Expert 1 | 53 | Male | Doctor's Degree | Research on Emergency Language Services | Academic | 20 |
| Expert 2 | 50 | Female | Bachelor's Degree | Grassroots Community Governance | Professional | 25 |
| Expert 3 | 46 | Female | Doctor's Degree | Research on Information Dissemination | Academic | 16 |
| Expert 4 | 42 | Male | Master's Degree | Fire Protection Practice | Professional | 13 |
| Expert 5 | 49 | Male | Doctor's Degree | Fire Protection Engineering | Academic | 18 |
| Expert 6 | 45 | Female | Master's Degree | Fire Rescue Operations/Incident Command | Professional | 15 |
| Expert 7 | 47 | Male | Doctor's Degree | Computational Science/Engineering | Academic | 17 |
| Expert 8 | 41 | Female | Doctor's Degree | Data Analytics/Machine Learning Engineering | Professional | 10 |
| Expert 9 | 44 | Male | Doctor's Degree | Emergency Management and Community Resilience Evaluation | Academic | 14 |
| Expert 10 | 43 | Female | Master's Degree | Public Communication/Multilingual Service Management | Professional | 13 |
Table 8. Knowledge-Driven Weights of Tertiary Indicators for the Dynamic Capability of FELS in Communities A, B, C, and D.

| Tertiary Indicator | Indicator Importance | Indicator Influence | Knowledge-Driven Weight |
|---|---|---|---|
| C11 | 0.0159 | 0.0765 | 0.0116 |
| C12 | 0.1052 | 0.1021 | 0.0977 |
| C13 | 0.0228 | 0.0968 | 0.0212 |
| C14 | 0.0228 | 0.0783 | 0.0170 |
| C21 | 0.2859 | 0.1349 | 0.3190 |
| C22 | 0.0246 | 0.0451 | 0.0099 |
| C23 | 0.1455 | 0.1248 | 0.1363 |
| C24 | 0.1142 | 0.0707 | 0.1120 |
| C31 | 0.0636 | 0.084 | 0.0495 |
| C32 | 0.0661 | 0.0856 | 0.0471 |
| C33 | 0.0161 | 0.0206 | 0.0023 |
| C34 | 0.1175 | 0.0806 | 0.1080 |
Table 9. Data-Driven Weights of Tertiary Indicators for the Dynamic Capability of Emergency Language Services in Communities A, B, C, and D.

| Tertiary Indicator | Indicator Variability | Indicator Conflict | Information Content | Data-Driven Weight |
|---|---|---|---|---|
| C11 | 30.499 | 2.754 | 83.997 | 0.2252 |
| C12 | 22.878 | 3.243 | 74.185 | 0.1989 |
| C13 | 17.762 | 0.918 | 16.314 | 0.0437 |
| C14 | 28.428 | 0.916 | 26.052 | 0.0699 |
| C21 | 5.917 | 1.275 | 7.546 | 0.0202 |
| C22 | 17.134 | 1.008 | 17.279 | 0.0463 |
| C23 | 27.18 | 1.228 | 33.39 | 0.0895 |
| C24 | 22.253 | 0.89 | 19.809 | 0.0531 |
| C31 | 14.05 | 0.703 | 9.873 | 0.0265 |
| C32 | 26.274 | 0.861 | 22.609 | 0.0606 |
| C33 | 21.348 | 1.643 | 35.069 | 0.094 |
| C34 | 19.209 | 0.686 | 13.168 | 0.0353 |
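The Table 9 columns follow the standard CRITIC quantities: variability is the dispersion of an indicator's column, conflict measures its correlation with the other indicators, information content is their product, and the weight is each indicator's share of the total information content. A minimal sketch under those standard definitions is given below; the 4×3 decision matrix (four communities by three indicators) is invented, and the paper's normalization and dispersion conventions may differ in detail.

```python
import numpy as np

# Hypothetical decision matrix: 4 communities (rows) x 3 indicators (cols).
X = np.array([[96.0, 55.0, 80.0],
              [48.3, 95.0, 44.3],
              [36.7, 43.3, 61.7],
              [93.3, 76.0, 82.3]])

# Min-max normalize each indicator column (benefit-type indicators assumed).
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

sigma = Xn.std(axis=0, ddof=1)        # variability (standard deviation)
r = np.corrcoef(Xn, rowvar=False)     # Pearson correlation between columns
conflict = (1 - r).sum(axis=0)        # conflict with the other indicators
C = sigma * conflict                  # information content
w = C / C.sum()                       # CRITIC data-driven weights
print(np.round(w, 4))
```

Indicators whose values spread widely across communities and correlate little with the rest carry the most information and therefore the largest data-driven weight, which is why C11 and C12 dominate Table 9.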
Table 10. Evaluation Values of the Dynamic Capability of FELS in Communities A, B, C, and D.

| Tertiary Indicator | Combined Weight | A | B | C | D |
|---|---|---|---|---|---|
| C11 | 0.0849 | 96 | 48.3 | 36.7 | 93.3 |
| C12 | 0.1368 | 95 | 55 | 43.3 | 76 |
| C13 | 0.0301 | 80 | 44.3 | 61.7 | 82.3 |
| C14 | 0.0363 | 96.7 | 30 | 53.3 | 73.3 |
| C21 | 0.2554 | 71.9 | 59.4 | 68.8 | 71.9 |
| C22 | 0.0237 | 88.8 | 47.5 | 66.5 | 73.5 |
| C23 | 0.1492 | 93.8 | 31.3 | 75 | 81.3 |
| C24 | 0.0715 | 87.5 | 37.5 | 68.8 | 81.3 |
| C31 | 0.0441 | 87.5 | 55 | 70 | 80 |
| C32 | 0.0584 | 100 | 37.5 | 68.8 | 81.3 |
| C33 | 0.0357 | 87.5 | 50 | 75 | 100 |
| C34 | 0.0740 | 83.3 | 40 | 61.7 | 76.7 |
| Evaluation Value of Community FELS Dynamic Capability | — | 86.76 | 47.05 | 62.43 | 79.02 |
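The bottom row of Table 10 can be reproduced directly from the table itself: each community's composite score is the linear weighted sum of its tertiary-indicator scores under the combined weights. The sketch below uses the published Table 10 values; the results agree with the reported evaluation values to within the rounding of the four-decimal weights.

```python
import numpy as np

# Combined weights and indicator scores exactly as published in Table 10.
weights = np.array([0.0849, 0.1368, 0.0301, 0.0363, 0.2554, 0.0237,
                    0.1492, 0.0715, 0.0441, 0.0584, 0.0357, 0.0740])
scores = np.array([  # rows C11..C34; columns: communities A, B, C, D
    [96.0, 48.3, 36.7, 93.3], [95.0, 55.0, 43.3, 76.0],
    [80.0, 44.3, 61.7, 82.3], [96.7, 30.0, 53.3, 73.3],
    [71.9, 59.4, 68.8, 71.9], [88.8, 47.5, 66.5, 73.5],
    [93.8, 31.3, 75.0, 81.3], [87.5, 37.5, 68.8, 81.3],
    [87.5, 55.0, 70.0, 80.0], [100.0, 37.5, 68.8, 81.3],
    [87.5, 50.0, 75.0, 100.0], [83.3, 40.0, 61.7, 76.7],
])

composite = weights @ scores  # linear weighting model
print(np.round(composite, 2))  # ~ [86.77 47.06 62.44 79.03]
```

These values match the published 86.76, 47.05, 62.43, and 79.02 up to the last digit, the small discrepancy coming from the weights being reported to four decimal places.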
Li, H.; Mao, H.; Guo, Z.; Shao, Q. Data-Driven Evaluation of Dynamic Capabilities in Urban Community Emergency Language Services for Fire Response. Fire 2026, 9, 15. https://doi.org/10.3390/fire9010015
