Data-Driven Evaluation of Dynamic Capabilities in Urban Community Emergency Language Services for Fire Response
Abstract
1. Introduction
- (1) Construction of a comprehensive and objective three-level indicator system. By combining dynamic capability theory with text mining, a three-level indicator system is developed, covering dimensions such as language risk early warning, human resource scheduling, information dissemination, cross-department coordination, and evaluation and feedback. The system incorporates multi-source heterogeneous data from community emergency drills, management reports, volunteer training records, emergency publicity reports, and emergency plans, and it covers the full process of emergency language services, from pre-event identification to in-event response and post-event improvement.
- (2) Development of unified quantification and scoring rules. Based on domain knowledge, quantitative methods and scoring rules are formulated for the third-level qualitative indicators. All data are normalized onto a dimensionless 0–100 scale, providing an accurate, comparable, and reusable data foundation for evaluation.
- (3) Establishment of a data-driven, multi-model integrated evaluation method. By combining FAHP, fuzzy DEMATEL, and CRITIC within a multi-feature fusion weighting model, the method derives integrated knowledge-driven and data-driven weights; a linear weighting approach is then applied to produce the integrated evaluation. The method offers a multi-layered structure, process interpretability, and the capacity to handle complex data while yielding robust assessment results.
- (4) Application testing and validation of the model. The evaluation system is applied and validated in four representative types of urban communities. The results demonstrate that the system can effectively and objectively assess the dynamic capability of community emergency language services, and they reveal a differentiated distribution of capability driven mainly by key high-weight dimensions. The system shows strong governance adaptability and application potential, providing important support for capability diagnosis, weakness identification, and resource allocation optimization in urban community emergency language services.
2. Construction of the Evaluation Index System for Urban Community FELS Capabilities
2.1. Connotation and Framework of Dynamic Capabilities in Urban Community FELS
2.2. Construction of the Evaluation Indicator System for Urban Community FELS
2.2.1. Dataset
2.2.2. Preliminary Screening of Evaluation Indicators Based on Text Mining
2.2.3. Refinement and Transformation of Capability Evaluation Indicators Based on Expert Consensus
2.2.4. Construction of the Evaluation Indicator System for Urban Community FELS Capability
2.2.5. Scoring Rules for the Evaluation Indicators of Urban Community FELS Capability
3. Construction of the Evaluation Model
3.1. Knowledge-Driven Weighting Model Considering Indicator Characteristics
3.1.1. Computation of Indicator Importance Based on FAHP
- (1) Transformation of experts’ prior knowledge. Experts use fuzzy linguistic terms to determine the relative importance of each pair of indicators. These fuzzy linguistic terms are then converted into triangular fuzzy numbers. The conversion rules for fuzzy linguistic terms are shown in Table 5.
- (2) Concept and basic operations of fuzzy sets. Let the domain be U. Any mapping μA from U to the interval [0, 1] is called a fuzzy set A on U. The function μA is the membership function of the fuzzy set A, and μA(x) represents the degree to which element x belongs to A, referred to as the membership degree.
- (3) Construction of the triangular fuzzy pairwise comparison matrix. By using triangular fuzzy numbers ãij = (lij, mij, uij) to integrate the evaluations of decision-makers, a triangular fuzzy pairwise comparison matrix Ã = (ãij)n×n can be established.
- (4) Group integration. When K decision-makers are involved, the average value of their preferences is calculated: ãij = (1/K)(ãij(1) ⊕ ãij(2) ⊕ … ⊕ ãij(K)).
- (5) According to the average preferences of the decision-makers, the fuzzy pairwise comparison matrix is updated.
- (6) Calculation of the geometric mean of the fuzzy comparison values for each indicator: r̃i = (ãi1 ⊗ ãi2 ⊗ … ⊗ ãin)^(1/n).
- (7) Calculation of the fuzzy importance values of the indicators: w̃i = r̃i ⊗ (r̃1 ⊕ r̃2 ⊕ … ⊕ r̃n)^(−1).
- (8) Defuzzification. To obtain the weight value of each factor, the fuzzy weights calculated in the above steps must be defuzzified; the centroid method yields the crisp value wi = (li + mi + ui)/3 of each fuzzy number as the comprehensive evaluation standard for indicator weights.
- (9) Normalization of weights: wi′ = wi / Σj wj.
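The FAHP steps above can be sketched in Python, assuming Buckley-style geometric-mean operations on triangular fuzzy numbers (l, m, u); the function names and the two-indicator comparison matrix are illustrative, not the paper's implementation.

```python
from math import prod

def tfn_geo_mean(row):
    """Geometric mean of a row of triangular fuzzy numbers (l, m, u)."""
    n = len(row)
    return tuple(prod(t[k] for t in row) ** (1.0 / n) for k in range(3))

def fahp_weights(matrix):
    """matrix[i][j]: fuzzy comparison of indicator i against indicator j."""
    r = [tfn_geo_mean(row) for row in matrix]
    # Fuzzy sum of the geometric means; its fuzzy reciprocal swaps the bounds.
    s = tuple(sum(t[k] for t in r) for k in range(3))
    fuzzy_w = [(t[0] / s[2], t[1] / s[1], t[2] / s[0]) for t in r]
    # Centroid defuzzification, then normalization to crisp weights.
    crisp = [(l + m + u) / 3.0 for (l, m, u) in fuzzy_w]
    total = sum(crisp)
    return [c / total for c in crisp]

# Two indicators: the first judged moderately more important than the second.
pairwise = [[(1, 1, 1), (2, 3, 4)],
            [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)]]
weights = fahp_weights(pairwise)
```

As expected for a consistent comparison, the first indicator receives the larger normalized weight.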
3.1.2. Calculation of Indicator Correlation Based on Fuzzy DEMATEL
- (1) Experts use fuzzy linguistic terms to determine the degree of correlation between each pair of indicators. These fuzzy linguistic terms are then converted into triangular fuzzy numbers according to predefined rules. The conversion rules for fuzzy linguistic terms are shown in Table 6.
- (2) The Converting Fuzzy data into Crisp Scores (CFCS) method is applied to transform triangular fuzzy numbers into crisp values. Suppose a triangular fuzzy number is (l, m, u); defuzzification then proceeds by normalizing the fuzzy number, computing left and right normalized scores, and aggregating them into a total crisp score.
- (3) According to the fuzzy DEMATEL model, the centrality value (Di + Ri) and the causality value (Di − Ri) of each indicator are calculated from the row sums Di and column sums Ri of the total relation matrix. The indicator influence is then computed from these two values.
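The centrality/causality step can be sketched as follows, assuming the crisp direct-influence matrix has already been obtained via CFCS defuzzification; the series approximation of the total relation matrix T = X(I − X)^(−1) and the 3×3 example matrix are illustrative only.

```python
def dematel(direct):
    """Compute DEMATEL centrality (D+R) and causality (D-R) values."""
    n = len(direct)
    # Normalize the direct-influence matrix by its maximum row sum.
    s = max(sum(row) for row in direct)
    x = [[v / s for v in row] for row in direct]
    # Total relation matrix T = X + X^2 + ... approximates X(I - X)^(-1);
    # the series converges because the normalized matrix has spectral radius < 1.
    t = [row[:] for row in x]
    power = [row[:] for row in x]
    for _ in range(200):
        power = [[sum(power[i][k] * x[k][j] for k in range(n)) for j in range(n)]
                 for i in range(n)]
        t = [[t[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    d = [sum(t[i][j] for j in range(n)) for i in range(n)]  # influence given
    r = [sum(t[i][j] for i in range(n)) for j in range(n)]  # influence received
    centrality = [d[i] + r[i] for i in range(n)]
    causality = [d[i] - r[i] for i in range(n)]
    return centrality, causality

# Toy 3-indicator direct-influence matrix (0 = none, 4 = very high).
cen, cau = dematel([[0, 3, 2], [1, 0, 2], [1, 2, 0]])
```

By construction the causality values sum to zero (total influence given equals total influence received), which is a quick sanity check on the computation.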
3.1.3. Calculation of Physical Feature Weights
3.2. Data-Driven Weighting Model Considering Indicator Characteristics
- (1) Data standardization. Since data standardization in this study is used only for weight calculation, preserving the variation characteristics of the data is sufficient for the computational requirements. Therefore, this study does not apply different standardization methods according to data type; instead, the collected data are standardized using Formula (16).
- (2) Calculate indicator variability. Let xij denote the i-th value of the j-th indicator. Variability is measured by the standard deviation: Sj = (Σi (xij − x̄j)² / (n − 1))^(1/2), where x̄j is the mean of indicator j.
- (3) Calculate the correlation coefficient rjk between indicator j and indicator k.
- (4) Calculate the conflict of the indicator data: Rj = Σk (1 − rjk).
- (5) Calculate the amount of information of the indicator: Cj = Sj × Σk (1 − rjk).
- (6) Calculate the data-driven weight of the indicator: wj = Cj / Σk Ck.
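Steps (2)–(6) above can be sketched as a minimal CRITIC weighting routine; the population standard deviation and Pearson correlation are used here for brevity, and the toy score matrix is hypothetical (the paper's exact standardization in Formula (16) is not reproduced).

```python
from statistics import mean, pstdev

def critic_weights(data):
    """data[i][j]: standardized value of indicator j for sample (community) i."""
    m = len(data[0])
    cols = [[row[j] for row in data] for j in range(m)]
    variability = [pstdev(c) for c in cols]  # step (2): indicator variability

    def corr(a, b):  # step (3): Pearson correlation coefficient
        ma, mb = mean(a), mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0

    # Step (4): conflict of indicator j with all other indicators.
    conflict = [sum(1 - corr(cols[j], cols[k]) for k in range(m)) for j in range(m)]
    info = [v * c for v, c in zip(variability, conflict)]  # step (5): information
    total = sum(info)
    return [x / total for x in info]  # step (6): normalized data-driven weights

# Toy matrix: 4 communities x 3 standardized indicator scores.
scores = [[20.0, 80.0, 30.0],
          [40.0, 60.0, 90.0],
          [60.0, 40.0, 50.0],
          [80.0, 20.0, 70.0]]
w = critic_weights(scores)
```

Indicators with higher variability and lower correlation with the others receive larger weights, which is the intended behavior of the CRITIC family of methods.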
3.3. Multi-Feature Fusion Weighting Model
3.4. Capability Assessment Based on Linear Integration
4. Case Study
4.1. Community Profile
4.2. Data Collection
4.2.1. Knowledge-Driven Data Collection
4.2.2. Data-Driven Data Collection
- (1) Criteria for sample selection. This study selected four types of communities (newly developed, international mature, old disadvantaged, and diverse economic) as typical samples, covering the main scenarios and capability types of urban community FELS. To enhance comparability and representativeness, the samples were required to meet the following conditions: a relatively well-developed governance system; at least one fire event related to emergency language services in the past three years; and a relatively standardized data recording mechanism. By adopting a unified indicator system structure and standardized data collection protocols, the study controlled for biases caused by institutional differences and heterogeneous data platforms, thereby ensuring the reliability and representativeness of the model input data.
- (2) Data collection scheme. Based on the 12 tertiary evaluation indicators, the study covered the entire process of daily language service management, drill preparedness, and actual emergency response in the sample communities. The data sources were integrated into three categories: platform-based sources (community emergency management platforms; language volunteer systems, hotlines, and intelligent customer service systems); and archival sources (community fire emergency drill reports, community fire protection and emergency management work summaries, language volunteer training records, community emergency publicity activity reports, and community emergency plan documents). In total, 172 sets of structured data were collected. The collection approach combined “system extraction, field verification and platform access.” All indicators were standardized onto a 0–100 dimensionless scale according to predefined formulas and mapping rules before being incorporated into the model. Indicators that were missing or temporarily unavailable were either substituted with compliant proxies or excluded with annotations, in order to maintain the objectivity and rigor of the evaluation system.
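The 0–100 standardization step described above can be illustrated with a hypothetical mapping routine; the paper's actual per-indicator formulas and mapping rules are not reproduced, so a generic clamped min–max mapping with missing-value annotation is assumed here.

```python
def to_score(value, lo, hi, higher_is_better=True):
    """Map a raw indicator value onto the dimensionless 0-100 scale.

    lo/hi define an assumed admissible range for the indicator; values outside
    it are clamped. A missing value is returned as None so it can be handled
    by a compliant proxy or excluded with an annotation, as the text describes.
    """
    if value is None:
        return None
    value = min(max(value, lo), hi)      # clamp to the admissible range
    frac = (value - lo) / (hi - lo)
    return round(100 * (frac if higher_is_better else 1 - frac), 1)

# Example: an early-warning response time of 45 min on an assumed 0-120 min
# range, where shorter is better.
s = to_score(45, 0, 120, higher_is_better=False)
```

The `higher_is_better` flag covers both benefit-type indicators (e.g., coverage rates) and cost-type indicators (e.g., response times) with one mapping.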
4.3. Evaluation Results and Analysis
4.3.1. Weight Calculation Results and Analysis
4.3.2. Case Evaluation Results and Analysis
- (1) Number of emergency information dissemination channels (≥6 channels = 100 points; 4–5 channels = 75 points; 2–3 channels = 50 points; <2 channels = 25 points);
- (2) Information transmission error rate (≤1% = 100 points; 1.01–3% = 75 points; 3.01–5% = 50 points; >5% = 25 points);
- (3) Availability of a language emergency hotline (Yes = 100 points; No = 0 points);
- (4) Timeliness of information release (≤60 min = 100 points; 61–180 min = 75 points; 181–360 min = 50 points; >360 min = 25 points); and
- (5) Overall coverage rate of emergency language services, which is scored directly using the percentage value.
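The banded scoring rules above translate directly into a small scoring sketch; the thresholds follow the text, the function names are illustrative, and values falling between published bands (e.g., an error rate of 1.005%) are treated as belonging to the adjacent band.

```python
def score_channels(n):
    """(1) Number of emergency information dissemination channels."""
    if n >= 6: return 100
    if n >= 4: return 75
    if n >= 2: return 50
    return 25

def score_error_rate(pct):
    """(2) Information transmission error rate, in percent."""
    if pct <= 1: return 100
    if pct <= 3: return 75
    if pct <= 5: return 50
    return 25

def score_hotline(available):
    """(3) Availability of a language emergency hotline."""
    return 100 if available else 0

def score_timeliness(minutes):
    """(4) Timeliness of information release, in minutes."""
    if minutes <= 60: return 100
    if minutes <= 180: return 75
    if minutes <= 360: return 50
    return 25

def score_coverage(pct):
    """(5) Overall coverage rate: the percentage value is the score."""
    return pct
```

Each sub-score is thus already on the common 0–100 scale before weighting.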
5. Discussion
5.1. Validation of the Evaluation Model
5.1.1. External Criterion Validity Verification
5.1.2. Model Robustness Verification
- (1) Weight Perturbation Stability. Under two levels of perturbation (±10% and ±20%), multiplicative disturbances are applied to the weights of each tertiary indicator, w̃j = wj(1 + εj) with εj ~ U(−δ, δ), followed by normalization, wj′ = w̃j / Σk w̃k. For each level, 1000 Monte Carlo re-samplings are conducted, and the comprehensive scores and rankings of the four communities are recalculated. Robustness is measured by three types of indicators: the Top-1/Top-2 hit rate (the proportion of simulations consistent with the baseline top-1/top-2 ranking sets), the maximum rank displacement, and the distribution of Spearman’s rank correlation coefficient (ρ) between the perturbed scores and the baseline (summarized by the median and interquartile range, IQR).
- (2) Indicator Ablation Experiment. To evaluate the sensitivity of the conclusions to a single indicator, the average relative change rate (RI) is used to measure the strength of influence: RIk = (1/N) Σc |Sc(k) − Sc(0)| / Sc(0), where Sc(0) is the baseline composite score of community c, and Sc(k) is the composite score after adjusting the weight of indicator k (with the remaining weights normalized proportionally). Three scenarios that modify only technical settings are designed, and RI is calculated separately for each: Scenario A (full ablation): keeping the original weights, each indicator weight is sequentially set to 0; Scenario B (half ablation): each indicator weight is sequentially set to 50% of its original value; Scenario C (dimension-constrained ablation): the sensing, seizing, and transforming dimensions are each fixed at one-third, and individual indicators are then sequentially set to 0.
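The weight-perturbation procedure can be sketched as follows; the toy weights and community score matrix are hypothetical, and only the Top-1 hit rate and the median Spearman ρ are computed (the Top-2 hit rate, maximum rank displacement, and IQR follow the same pattern).

```python
import random
from statistics import median

def rank_desc(values):
    """Rank positions (1 = highest score)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation (no ties assumed among composite scores)."""
    ra, rb = rank_desc(a), rank_desc(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

def robustness(weights, scores, level=0.10, runs=1000, seed=7):
    """scores[c][j]: standardized score of indicator j in community c."""
    base = [sum(w * s for w, s in zip(weights, row)) for row in scores]
    base_top1 = rank_desc(base).index(1)
    rng = random.Random(seed)
    hits, rhos = 0, []
    for _ in range(runs):
        # Multiplicative disturbance, then re-normalization of the weights.
        tilde = [w * (1 + rng.uniform(-level, level)) for w in weights]
        total = sum(tilde)
        pw = [t / total for t in tilde]
        comp = [sum(w * s for w, s in zip(pw, row)) for row in scores]
        hits += rank_desc(comp).index(1) == base_top1
        rhos.append(spearman(base, comp))
    return hits / runs, median(rhos)

# Toy example: 4 communities x 3 tertiary indicators, +/-10% perturbation.
weights = [0.5, 0.3, 0.2]
scores = [[90, 80, 70], [60, 65, 50], [40, 55, 80], [30, 20, 25]]
hit_rate, rho_median = robustness(weights, scores, level=0.10)
```

In this toy case the score gaps are large enough that ±10% perturbations never reorder the communities, so the Top-1 hit rate and median ρ both equal 1.0; with closer scores the two statistics degrade and reveal ranking instability.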
5.2. Discussion on the Evaluation Status of Case Communities
5.3. Improvement Strategies for Case Communities
5.4. International Perspective and Technological Evolution of FELS Evaluation
5.4.1. International Practices and Research Contribution
5.4.2. Technological Evolution and Adaptive Evaluation Under Rapid AI Development
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Shi, L.; Wang, J.; Li, G.; Chew, M.Y.L.; Zhang, H.; Zhang, G.; Dlugogorski, B.Z. Increasing fire risks in cities worldwide under warming climate. Nat. Cities 2025, 2, 254–264. [Google Scholar] [CrossRef]
- Haupt, B. The use of crisis communication strategies in emergency management. J. Homel. Secur. Emerg. Manag. 2021, 18, 125–150. [Google Scholar] [CrossRef]
- Xiang, T.; Gerber, B.J.; Zhang, F. Language access in emergency and disaster preparedness: An assessment of local government “whole community” efforts in the United States. Int. J. Disaster Risk Reduct. 2021, 55, 102072. [Google Scholar] [CrossRef]
- Wu, J.; Yang, G.; Liu, Z.; Liu, Y.; Guo, J.; Yan, G.; Ding, G.; Fu, C.; Yang, Z.; Yang, X.; et al. Language processing in emergencies recruits both language and default mode networks. Neuropsychologia 2025, 213, 109152. [Google Scholar] [CrossRef]
- Wang, L.F.; Ren, J.; Sun, J.W.; Meng, Y.Y. Concept, research status, and institutional construction of emergency language services. J. Beijing Int. Stud. Univ. 2020, 42, 21–30. [Google Scholar]
- Li, Y.M.; Rao, G.Q.; Zhang, J.; Li, J. Conceptualizing national emergency language competence. Multilingua 2020, 39, 617–623. [Google Scholar] [CrossRef]
- Li, Z.; She, J.; Guo, Z.; Du, J.; Zhou, Y. An evaluation of factors influencing the community emergency management under compounding risks perspective. Int. J. Disaster Risk Reduct. 2024, 100, 104179. [Google Scholar] [CrossRef]
- Wang, H.; Liu, X.; Wang, J.; Han, H. Research on the formulation of national emergency language service plans. Lang. Strategy Res. 2025, 10, 83–96. [Google Scholar]
- Teng, Y.J. Competence of emergency language service providers and evaluation of emergency language talents. J. Tianjin Foreign Stud. Univ. 2021, 28, 20–31, 157–158. [Google Scholar]
- Wang, L.F.; Li, Z. Construction and interpretation of the framework for emergency language education system. Shandong Foreign Lang. Teach. 2023, 44, 8–18. [Google Scholar]
- Brandenberger, J.; Stedman, I.; Stancati, N.; Sappleton, K.; Kanathasan, S.; Fayyaz, J.; Singh, D. Using artificial intelligence-based language interpretation in non-urgent paediatric emergency consultations: A clinical performance test and legal evaluation. BMC Health Serv. Res. 2025, 25, 138. [Google Scholar] [CrossRef] [PubMed]
- Elmhadhbi, L.; Karray, M.H.; Archimède, B.; Otte, J.N.; Smith, B. PROMES: An ontology-based messaging service for semantically interoperable information exchange during disaster response. J. Contingencies Crisis Manag. 2020, 28, 324–338. [Google Scholar] [CrossRef]
- Wang, L.F.; Xie, Y.X. Construction and validation of an emergency language service capability evaluation system. China Emerg. Manag. Sci. 2025, 6, 112–123. [Google Scholar]
- Wang, L.F.; Jin, Y.J.; Li, J.X. Construction and validation of an evaluation index system for language service competitiveness. Chin. Transl. J. 2022, 43, 116–125. [Google Scholar]
- Guan, X.; Li, W.; Cui, N.; Yu, J.; An, L. Construction of an evaluation indicator system for the emergency management capability of major infectious diseases in urban communities. BMC Health Serv. Res. 2025, 25, 857. [Google Scholar] [CrossRef]
- Embrett, M.; Carson, A.; Sim, M.; Conway, A.; Moore, E.; Hancock, K.; Bielska, I. Building resilient and responsive health research systems: Responses and the lessons learned from the COVID-19 pandemic. Health Res. Policy Syst. 2025, 23, 38. [Google Scholar] [CrossRef]
- Mahl, D.; Schäfer, M.S.; Voinea, S.A.; Adib, K.; Duncan, B.; Salvi, C.; Novillo-Ortiz, D. Responsible artificial intelligence in public health: A Delphi study on risk communication, community engagement and infodemic management. BMJ Glob. Health 2025, 10, 5. [Google Scholar] [CrossRef]
- Teece, D.J.; Pisano, G.; Shuen, A. Dynamic capabilities and strategic management. Strateg. Manag. J. 1997, 18, 509–533. [Google Scholar] [CrossRef]
- Teece, D.J.; Peteraf, M.; Leih, S. Dynamic Capabilities and Organizational Agility: Risk, Uncertainty, and Strategy in the Innovation Economy. Calif. Manag. Rev. 2016, 58, 13–35. [Google Scholar] [CrossRef]
- Jiang, X.M. A survey on the demand and supply of medical language services for Yi patients in Xichang hospitals. Lang. Strategy Res. 2025, 10, 34–43. [Google Scholar]
- Sun, Q.; Zhang, Y.; Wang, F. Relations of residents’ knowledge, satisfaction, perceived importance and participation in emergency management: A case study in modern and old urban communities of Ningbo, China. Int. J. Disaster Risk Reduct. 2022, 76, 102997. [Google Scholar] [CrossRef]
- Lu, J.K. Language Services for Fighting Against COVID-19 in South Korea. In Language Situation in Foreign Countries; The Commercial Press: Beijing, China, 2021; pp. 139–143. [Google Scholar]
- Lu, C.; Li, S.; Xu, K.; Zhang, Y. Research on data-driven coal mine environmental safety risk assessment system. Saf. Sci. 2025, 183, 106727. [Google Scholar] [CrossRef]
- Lu, C.; Li, S.; Xu, N.; Qin, Y. A knowledge-data-structure hybrid-driven multi-characteristic fusion weighting model—Application of coal and gas outburst risk assessment. Expert Syst. Appl. 2026, 299, 130201. [Google Scholar] [CrossRef]
- Yu, L.P.; Zheng, K. Comparison and Consideration of Different Objective Weighting Methods in Journal Evaluation. J. Mod. Inf. 2021, 41, 121–130. [Google Scholar]
- Lowndes, A.M.; Connelly, D.M. User experiences of older adults navigating an online database of community-based physical activity programs. Digit. Health 2023, 9, 20552076231167004. [Google Scholar] [CrossRef]
- Wolf-Fordham, S. Integrating government silos: Local emergency management and public health department collaboration for emergency planning and response. Am. Rev. Public Adm. 2020, 50, 560–567. [Google Scholar] [CrossRef]
- Cheng, X.; Gan, W.; Zhang, K.; Xu, Z.; Gou, X. A dynamically updated consensus model for large-scale group decision-making driven by the evolution of public sentiment in emergencies. Inf. Fusion 2026, 126, 103484. [Google Scholar] [CrossRef]
- Satizábal, P.; Cornes, I.; Zurita, M.D.L.M.; Cook, B.R. The power of connection: Navigating the constraints of community engagement for disaster risk reduction. Int. J. Disaster Risk Reduct. 2022, 68, 102699. [Google Scholar] [CrossRef]
- Shaghaghi, N.; Patel, S.; Pabari, B.; Francis, M. Implementing Communications and Information Dissemination Technologies for First Responders. In Proceedings of the 2019 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 17–20 October 2019; pp. 1–4. [Google Scholar]
- Jiang, X.M. Textual analysis of emergency language policies, laws, and regulations in the context of Chinese-style modernization. J. Cap. Norm. Univ. Soc. Sci. Ed. 2025, 2, 141–150. [Google Scholar]
- Wei, Y.; Qian, Y. Modernization of China’s emergency language service system and capacity in the new era. J. South-Cent. Univ. Natl. (Humanit. Soc. Sci.) 2023, 43, 134–142, 187. [Google Scholar]
- Lu, C.; Li, S.; Xu, N.; Zhang, Y.; Qin, Y. Hybrid-driven risk assessment methodology for coal and gas outburst: Integration of complex network, disaster mechanism, and multi-level fusion modeling. Process Saf. Environ. Prot. 2025, 199, 107226. [Google Scholar] [CrossRef]
- Yao, Y.L. A Study of “Plain Language” Policy in Japan and Features of Emergency Language. J. Jpn. Lang. Study Res. 2021, 5, 21–28. [Google Scholar] [CrossRef]
- Zhang, T.W. An Effective Way to Build up U.S. On-Call National Language Capacity: A Case Study of the U.S. National Language Service Corps. Chin. J. Lang. Policy Plan. 2016, 1, 88–96. [Google Scholar]
| Keywords | Capability Elements |
|---|---|
| Elderly Needs, Foreign Nationals, Ethnic Minorities, Information Needs, Actual Needs, Emergency Knowledge Needs, Unmet Needs | Language Demand Identification |
| Monitoring, Monitoring Network, Population Structure, Risk Assessment | Language Risk Monitoring |
| Predictive Warning, Prevention, Early Warning Response | Rapid Early Warning Capability |
| WeChat Survey, Volunteer Home Visits, Baseline Survey, Real-Time Website Registration, Statistical Registration, Survey, Basic Information Records | Data Collection Channels |
| Level of Networking, Big Data, Decision-Making, Prevention Measures, Assessment Report | Intelligent Predictive Analysis |
| Foreign Languages, Foreign Residents, Dialects, Doctor-Patient Communication Volunteers, Psychological Counseling, Dispute Mediation | Human Resource Reserve |
| Emergency Language Communication, Volunteer Arrival Time, Dispatch Efficiency | Human Resource Dispatch Response Time |
| Language Resource Reserve and Allocation, Low-Income Households, Social Organizations | Language Service Coverage |
| AI Translation, Speech Recognition, Big Data, Equipment Maintenance | Intelligent Language Technologies and Equipment |
| Community Broadcasting, Outdoor Billboards, Banners, Publicity Slogans, Publicity Boards, Leaflets, Social Media, Hotline, Door-to-Door Notification, Gong Beating, WeChat, Accurate and Clear Information, Complete Communication Links | Emergency Information Dissemination Channels |
| Information Delay, Late Submission of Emergency Information, Timely Reporting | Timeliness of Information Dissemination |
| Liaison Mechanism, Joint Response, Poor Communication and Coordination, Collaborative Linkage, Police, Firefighting, Medical Services, Cross-Department Coordination Meetings, Joint Actions | Cross-Department Language Coordination |
| Emergency Drills, Field Drills | Quality of Emergency Drills |
| Delayed Update of Emergency Plans, Emergency Plan Formulation, Lag in Plan Updates, Plan Revision | Emergency Plan Optimization |
| Volunteer Service Team, Emergency Knowledge Training, Psychological Adjustment Skills, Mobilization, Team Building, Inadequate Training, Online Training, Theoretical Knowledge | Language Volunteer Training |
| Emergency Publicity and Education, Emergency Knowledge, Publicity Coverage, Popular Science, Knowledge Popularization Rate, Emergency Awareness, First Aid Knowledge, Emergency Knowledge Competition, Emergency Knowledge Q&A, Public Emergency Competence, Public Participation, Emergency Knowledge Lectures, Safety Awareness | Residents’ Language Adaptability |
| Risk Assessment Mechanism, Summary, Annual Review | Evaluation Mechanism |
| Interviewed Residents, Improvement | Resident Feedback |
| Capability Elements | Corresponding Tertiary Indicators |
|---|---|
| Language Demand Identification | Language Demand Identification Capability |
| Language Risk Monitoring | Language Risk Early Warning Capability |
| Rapid Early Warning Capability | |
| Data Collection Channels | Language Service Data Support Capability |
| Intelligent Predictive Analysis | Intelligent Predictive Analysis Capability |
| Human Resource Reserve | Emergency Language Human Resource Scheduling Capability |
| Human Resource Dispatch Response Time | |
| Language Service Coverage | |
| Intelligent Language Technologies and Equipment | Intelligent Language Technology Capability |
| Emergency Information Dissemination Channels | Emergency Information Dissemination Capability |
| Timeliness of Information Dissemination | |
| Cross-Department Language Coordination | Cross-Department Language Coordination Capability |
| Quality of Emergency Drills | Emergency Language Plan Optimization Capability |
| Emergency Plan Optimization | |
| Language Volunteer Training | Language Volunteer Training Capability |
| Residents’ Language Adaptability | Residents’ Language Adaptability Capability |
| Evaluation Mechanism | Evaluation and Feedback Capability |
| Resident Feedback |
| First-Level Indicator | Second-Level Indicator | Code | Tertiary Indicator | Code | Indicator Definition |
|---|---|---|---|---|---|
| Dynamic Capability of Community FELS | Sensing Capability | C1 | Language Demand Identification Capability | C11 | Evaluates the community’s ability to accurately identify and predict language needs of different groups (e.g., foreign nationals, ethnic minorities, elderly), and to anticipate possible FELS needs in advance. |
| Language Risk Early Warning Capability | C12 | Evaluates the extent to which the community has established a risk monitoring and early warning system for language services, including its capability to monitor linguistic risks and issue rapid early warnings prior to fire incidents. | |||
| Language Service Data Support Capability | C13 | Evaluates the completeness and timeliness of data collection channels for language services, reflecting the depth and quality of data support. | |||
| Intelligent Predictive Analysis Capability | C14 | Evaluates the community’s ability to use AI, big data, and machine learning to analyze language needs and support decision-making. | |||
| Seizing Capability | C2 | Emergency Language Human Resource Scheduling Capability | C21 | Evaluates the community’s ability to efficiently and accurately mobilize FELS personnel, including professional interpreters, volunteers, psychological counselors, and speakers of dialects and minority languages, to ensure effective support for emergency communication, psychological comfort, and information dissemination. | |
| Intelligent Language Technology Capability | C22 | Evaluates whether the community effectively applies intelligent language technologies (e.g., AI translation, automatic speech recognition) to improve the efficiency of language services in fires. | |||
| Emergency Information Dissemination Capability | C23 | Evaluates the coverage and precision of the community’s information dissemination channels in fires. | |||
| Cross-Department Language Coordination Capability | C24 | Evaluates how the community coordinates multiple actors (e.g., government agencies, NGOs, technology enterprises) during fires to ensure rapid sharing and mobilization of language resources. | |||
| Transforming Capability | C3 | Emergency Language Plan Optimization Capability | C31 | Evaluates the professionalism and executability of the community’s FELS plans. | |
| Language Volunteer Training Capability | C32 | Evaluates the scale, quality, and sustained impact of regular community language volunteer training, ensuring sufficient coverage, achievement of assessment standards, and effective participation in drills and emergencies. | |||
| Residents’ Language Adaptability Capability | C33 | Evaluates the multilingual adaptability of community residents, including participation in everyday language training and educational outreach activities. | |||
| Evaluation and Feedback Capability | C34 | Evaluates the normalization and enforcement of post-event evaluation mechanisms, the coverage and participation of feedback channels, and the effectiveness of applying evaluation results to improvement measures. |
| Tertiary Indicator | Measurement Methods | Calculation Formula |
|---|---|---|
| C11 | Accuracy of fire-related language demand prediction (%); Frequency of language demand surveys (times/year); Validity of sampling surveys (whether sampling methods and sample size are clearly defined) (yes/no) | Fire-related Language Demand Prediction Accuracy (%) = Predicted total number of people with language needs ÷ Actual total number of people with language needs × 100%. Frequency of Language Demand Surveys (times/year) = Number of language demand-related surveys conducted by the community in one year. Validity of Sampling Surveys (yes/no) = “Yes” if the community clearly specifies the sampling method, sample size, and frequency for language demand surveys; otherwise “No.” |
| C12 | Fire-related Number of risk monitoring indicators (count); Fire-related Accuracy of the language risk early warning system (%); Average response time of early warning (minutes) | Number of Fire Risk Monitoring Indicators (count) = Total number of indicators in the community’s language service risk monitoring database, e.g., size of population with language barriers, number of multilingual demands, number of foreign residents, failure rate of translation technologies, volunteer attrition rate. Accuracy of Fire-related Language Risk Early Warning System (%) = Valid warnings (accurate predictions of real events) ÷ Total warnings issued × 100%. Average Early Warning Response Time (minutes) = (time from reaching early warning threshold to release of warning information) ÷ Number of valid warnings triggered. |
| C13 | Number of FELS data collection channels (count); Population coverage rate of data collection (%); Proportion of real-time data collection (%) | Number of FELS Data Collection Channels (count) = Total number of channels used by the community to collect language service data (e.g., hotline, website platform, WeChat surveys, volunteer household registration), with each channel counted as one. Population Coverage Rate of Data Collection (%) = Number of people covered by data collection ÷ Actual total number of people with language needs in the community × 100%. Proportion of Real-Time Data Collection (%) = Number of channels capable of real-time data collection ÷ Total number of data collection channels × 100% (e.g., smartphone apps, online registration systems, 24 h hotline). |
| C14 | Frequency of data analysis report generation (times/year); Frequency of model iteration and updates (times/year); Adoption rate of decision-making recommendations (%) | Frequency of Data Analysis Report Generation (times/year) = Total number of language demand data analysis reports produced in one year. Frequency of Model Iteration and Updates (times/year) = Number of version updates or optimizations of the community’s predictive models or algorithms in one year. Adoption Rate of Intelligent Analysis Recommendations (%) = Number of recommendations implemented ÷ Total number of recommendations × 100%. |
| C21 | Status of emergency language service talent pool construction; Volunteer coverage rate (%); Talent mobilization rate (%); Reserve rate of professional translators (%); Reserve rate of multilingual volunteers (%); Reserve rate of dialect volunteers (%); Reserve rate of psychological counseling personnel (%); Response time for human resource scheduling (minutes) | Status of Emergency Language Service Talent Pool Construction (yes/no). Volunteer Coverage Rate (%) = Number of language service volunteers ÷ Community population × 100%. Talent Mobilization Rate (%) = Number of volunteers actually deployed ÷ Total number in the talent pool × 100%. Reserve Rate of Professional Translators (%) = Total number of professional translators ÷ Community population × 100%. Reserve Rate of Multilingual Volunteers (%) = Number of volunteers proficient in ≥2 languages ÷ Community population × 100%. Reserve Rate of Dialect Volunteers (%) = Number of dialect-speaking volunteers ÷ Community population × 100%. Reserve Rate of Psychological Counseling Personnel (%) = Number of personnel providing psychological language services ÷ Community population × 100%. Human Resource Scheduling Response Time (minutes) = (time from dispatch order to actual arrival) ÷ Number of personnel deployed in one event. |
| C22 | Availability rate of AI tools (%); Accuracy of AI translation tools (%); Coverage rate of speech recognition/intelligent customer service systems (%); Number of calls to intelligent language tools during emergencies (times/event) | Availability Rate of AI Tools (%) = Number of AI language service tools available for immediate use ÷ Total number of AI tools allocated in the community × 100%. Accuracy of AI Translation Tools (%) = Number of sentences judged accurate (by human review or automated evaluation) ÷ Total number of sampled sentences × 100%. Coverage Rate of Speech Recognition/Intelligent Customer Service Systems (%) = Number of voice requests correctly recognized and answered ÷ Total number of voice requests × 100%. Number of Calls to Intelligent Language Tools During Emergencies (times/event) = Total number of times intelligent language tools are invoked during emergencies ÷ Total number of emergency events. |
| C23 | Number of emergency information dissemination channels (count); Error rate of information transmission (%); Availability of emergency language hotline (yes/no); Timeliness of emergency information dissemination (minutes); Overall coverage of emergency language services (%) | Number of Emergency Information Dissemination Channels (count) = Number of emergency language information dissemination channels established and actually used by the community during events. Error Rate of Information Transmission (%) = Number of miscommunications, mistranslations, or distortions in messages ÷ Total number of language messages released during events × 100%. Availability of Emergency Language Hotline (yes/no) = “Yes” if the community opened a dedicated language emergency hotline during events; otherwise “No.” Timeliness of Emergency Information Dissemination (minutes) = (time from event occurrence to release of first language message) ÷ Number of emergency events. Overall Coverage of Emergency Language Services (%) = Number of service requests completed in the year ÷ Total number of service requests recorded in the year × 100%. |
| C24 | Frequency of cross-department coordination meetings during fires (times/year); Number of joint emergency response actions for language services (times/year); Number of emergency cooperation agreements (count); Response time for cross-department joint actions (hours) | Frequency of Cross-Department Coordination Meetings During Fires (times/year) = Number of cross-department emergency language coordination meetings organized in response to fire emergencies within one year. Number of Joint Emergency Response Actions for Language Services (times/year) = Total number of joint language emergency actions conducted by departments in one year. Number of Emergency Cooperation Agreements (count) = Total number of formal agreements or memoranda of understanding on language emergency services signed in one year. Response Time for Cross-Department Joint Actions (hours) = Sum over all emergency events of the time from event occurrence to initiation of cross-department joint action ÷ Number of emergency events in the year (i.e., the average response time). |
| C31 | Quality score of emergency language service plans (points); Effectiveness evaluation of emergency plans (%) | Quality Score of Emergency Language Service Plans (points) = Sum of expert scores across plan updates ÷ Number of updates (experts score 0–100 against the “Emergency Plan Standard”; scores are averaged annually). Effectiveness Evaluation of Emergency Plans (%) = Number of effective drills ÷ Total number of drills × 100% (a drill is “effective” when it meets both conditions: participants match the target group specified in the plan, and the key process steps [dispatch → release → feedback] pass evaluation). |
| C32 | Frequency of volunteer training sessions (times/year); Training coverage rate (%); Training pass rate (%); Training retention rate (%) | Frequency of Volunteer Training Sessions (times/year) = Total number of language volunteer training activities conducted in a year. Training Coverage Rate (%) = Number of volunteers trained in a year ÷ Total number of volunteers in the language volunteer pool × 100%. Training Pass Rate (%) = Number of volunteers who passed tests (written + practical) ÷ Total number of trained volunteers × 100%. Training Retention Rate (%) = Number of trained volunteers who participated in drills/emergencies within six months ÷ Total number of trained volunteers × 100%. |
| C33 | Proportion of community residents participating in language training (%); Frequency of emergency language outreach/science popularization activities (times/year) | Proportion of Community Residents Participating in Language Training (%) = Total number of residents who participated in community language training within one year ÷ Community population × 100%. Frequency of Emergency Language Outreach/Science Activities (times/year) = Number of community science popularization activities related to emergency language services conducted in a year. |
| C34 | Frequency of evaluations (times/year); Coverage rate of resident feedback (%); Implementation rate of improvement measures (%) | Frequency of Evaluations (times/year) = Total number of post-event evaluation activities completed in a year. Coverage Rate of Resident Feedback (%) = Number of residents who provided feedback ÷ Number of residents invited to provide feedback × 100%. Implementation Rate of Improvement Measures (%) = Number of language service improvement measures implemented ÷ Total number of planned measures × 100%. |
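Because the raw indicators mix units (percentages, counts, minutes, yes/no), the scoring rules above place every third-level indicator on a common dimensionless 0–100 scale. A minimal min-max normalization sketch (function names are illustrative, not from the paper; the authors' exact transformation may differ):

```python
def normalize_benefit(values, lo=None, hi=None):
    """Min-max normalize a benefit-type indicator (higher raw value is
    better, e.g., training pass rate) onto a 0-100 scale."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    if hi == lo:  # constant column carries no information; map to midpoint
        return [50.0 for _ in values]
    return [(v - lo) / (hi - lo) * 100 for v in values]

def normalize_cost(values, lo=None, hi=None):
    """Cost-type indicators (lower raw value is better, e.g., response
    time in hours) are reversed so 100 is always best."""
    return [100 - s for s in normalize_benefit(values, lo, hi)]
```

For example, `normalize_cost([5, 20, 12])` maps the fastest community to 100 and the slowest to 0, so all indicators point in the same direction before weighting.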
| Fuzzy Linguistic Terms for Indicator Importance | Integer Scale | Triangular Fuzzy Number |
|---|---|---|
| Equally Important | 1 | (1,1,1) |
| — | 2 | (1,2,3) |
| Slightly Important | 3 | (2,3,4) |
| — | 4 | (3,4,5) |
| Important | 5 | (4,5,6) |
| — | 6 | (5,6,7) |
| Very Important | 7 | (6,7,8) |
| — | 8 | (7,8,9) |
| Extremely Important | 9 | (8,9,9) |
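The importance scale above converts expert pairwise judgments into triangular fuzzy numbers (TFNs) for the FAHP step. A minimal sketch of one standard FAHP procedure, Buckley's fuzzy geometric mean with centroid defuzzification — the paper's exact aggregation may differ, so treat this as an illustration of the technique rather than the authors' implementation:

```python
import math

def fahp_weights(fuzzy_matrix):
    """Buckley's FAHP: fuzzy geometric mean per row, fuzzy normalization,
    centroid defuzzification. fuzzy_matrix[i][j] is a TFN (l, m, u)
    comparing indicator i against indicator j."""
    n = len(fuzzy_matrix)
    # Component-wise fuzzy geometric mean of each row
    r = []
    for row in fuzzy_matrix:
        l = math.prod(t[0] for t in row) ** (1 / n)
        m = math.prod(t[1] for t in row) ** (1 / n)
        u = math.prod(t[2] for t in row) ** (1 / n)
        r.append((l, m, u))
    # Fuzzy weight of row i: r_i divided by the fuzzy sum (reversed bounds)
    sl, sm, su = (sum(t[k] for t in r) for k in range(3))
    fuzzy_w = [(l / su, m / sm, u / sl) for (l, m, u) in r]
    # Centroid defuzzification, then renormalize so weights sum to 1
    crisp = [(l + m + u) / 3 for (l, m, u) in fuzzy_w]
    total = sum(crisp)
    return [w / total for w in crisp]
```

For a two-indicator example where indicator 1 is rated “Slightly Important” (2, 3, 4) over indicator 2, `fahp_weights([[(1,1,1),(2,3,4)], [(1/4,1/3,1/2),(1,1,1)]])` yields a larger weight for the first indicator, and the weights sum to 1.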
| Scale | Fuzzy Linguistic Terms for Indicator Correlation | Triangular Fuzzy Number |
|---|---|---|
| 1 | Very Weak Influence | (0, 0, 0.25) |
| 2 | Weak Influence | (0, 0.25, 0.5) |
| 3 | Moderate Influence | (0.25, 0.5, 0.75) |
| 4 | Strong Influence | (0.5, 0.75, 1) |
| 5 | Very Strong Influence | (0.75, 1, 1) |
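The correlation scale above feeds the fuzzy DEMATEL step, which yields the “indicator influence” values reported later. A minimal sketch assuming centroid defuzzification of the expert TFN ratings and the standard total-relation computation T = N(I − N)⁻¹; the paper's defuzzification and normalization choices may differ:

```python
import numpy as np

def dematel_influence(direct_tfn):
    """Fuzzy DEMATEL sketch. direct_tfn[i][j] is a TFN (l, m, u) for the
    influence of factor i on factor j. Returns the normalized prominence
    (D + R) of each factor, one common basis for an influence weight."""
    # Centroid defuzzification of each TFN into a crisp direct matrix
    A = np.array([[(l + m + u) / 3 for (l, m, u) in row] for row in direct_tfn])
    n = A.shape[0]
    # Normalize by the largest row sum
    N = A / A.sum(axis=1).max()
    # Total-relation matrix: direct plus all indirect influence paths
    T = N @ np.linalg.inv(np.eye(n) - N)
    D = T.sum(axis=1)  # influence each factor dispatches
    R = T.sum(axis=0)  # influence each factor receives
    prominence = D + R
    return prominence / prominence.sum()
```

Factors with high prominence both shape and absorb influence in the indicator network, which is why they attract larger knowledge-driven weights.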
| Expert | Age | Gender | Education | Field of Work/Research | Type | Years of Experience |
|---|---|---|---|---|---|---|
| Expert 1 | 53 | Male | Doctor’s Degree | Research on Emergency Language Services | Academic | 20 |
| Expert 2 | 50 | Female | Bachelor’s Degree | Grassroots Community Governance | Professional | 25 |
| Expert 3 | 46 | Female | Doctor’s Degree | Research on Information Dissemination | Academic | 16 |
| Expert 4 | 42 | Male | Master’s Degree | Fire Protection Practice | Professional | 13 |
| Expert 5 | 49 | Male | Doctor’s Degree | Fire Protection Engineering | Academic | 18 |
| Expert 6 | 45 | Female | Master’s Degree | Fire Rescue Operations/Incident Command | Professional | 15 |
| Expert 7 | 47 | Male | Doctor’s Degree | Computational Science/Engineering | Academic | 17 |
| Expert 8 | 41 | Female | Doctor’s Degree | Data Analytics/Machine Learning Engineering | Professional | 10 |
| Expert 9 | 44 | Male | Doctor’s Degree | Emergency Management and Community Resilience Evaluation | Academic | 14 |
| Expert 10 | 43 | Female | Master’s Degree | Public Communication/Multilingual Service Management | Professional | 13 |
| Tertiary Indicator | Indicator Importance | Indicator Influence | Knowledge-Driven Weight |
|---|---|---|---|
| C11 | 0.0159 | 0.0765 | 0.0116 |
| C12 | 0.1052 | 0.1021 | 0.0977 |
| C13 | 0.0228 | 0.0968 | 0.0212 |
| C14 | 0.0228 | 0.0783 | 0.0170 |
| C21 | 0.2859 | 0.1349 | 0.3190 |
| C22 | 0.0246 | 0.0451 | 0.0099 |
| C23 | 0.1455 | 0.1248 | 0.1363 |
| C24 | 0.1142 | 0.0707 | 0.1120 |
| C31 | 0.0636 | 0.0840 | 0.0495 |
| C32 | 0.0661 | 0.0856 | 0.0471 |
| C33 | 0.0161 | 0.0206 | 0.0023 |
| C34 | 0.1175 | 0.0806 | 0.1080 |
| Tertiary Indicator | Indicator Variability | Indicator Conflict | Information Content | Data-Driven Weight |
|---|---|---|---|---|
| C11 | 30.499 | 2.754 | 83.997 | 0.2252 |
| C12 | 22.878 | 3.243 | 74.185 | 0.1989 |
| C13 | 17.762 | 0.918 | 16.314 | 0.0437 |
| C14 | 28.428 | 0.916 | 26.052 | 0.0699 |
| C21 | 5.917 | 1.275 | 7.546 | 0.0202 |
| C22 | 17.134 | 1.008 | 17.279 | 0.0463 |
| C23 | 27.180 | 1.228 | 33.390 | 0.0895 |
| C24 | 22.253 | 0.890 | 19.809 | 0.0531 |
| C31 | 14.050 | 0.703 | 9.873 | 0.0265 |
| C32 | 26.274 | 0.861 | 22.609 | 0.0606 |
| C33 | 21.348 | 1.643 | 35.069 | 0.0940 |
| C34 | 19.209 | 0.686 | 13.168 | 0.0353 |
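In the table above, the information content column equals variability × conflict (e.g., for C11, 30.499 × 2.754 ≈ 83.997), and the data-driven weight normalizes that product across indicators. A minimal CRITIC sketch, assuming indicator columns are already normalized to the common 0–100 scale and that variability is the sample standard deviation:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weighting. X: (communities x indicators) matrix of
    normalized scores. Variability = sample std dev of each indicator;
    conflict = sum of (1 - Pearson r) against the other indicators;
    information content = variability * conflict; weights normalize it."""
    X = np.asarray(X, dtype=float)
    sigma = X.std(axis=0, ddof=1)        # indicator variability
    r = np.corrcoef(X, rowvar=False)     # pairwise Pearson correlations
    conflict = (1 - r).sum(axis=0)       # conflict with other indicators
    info = sigma * conflict              # information content C_j
    return info / info.sum()
```

Indicators that vary strongly across communities and correlate weakly with the rest carry the most discriminating information, so CRITIC assigns them the largest data-driven weights.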
| Tertiary Indicator | Combined Weight | A | B | C | D |
|---|---|---|---|---|---|
| C11 | 0.0849 | 96 | 48.3 | 36.7 | 93.3 |
| C12 | 0.1368 | 95 | 55 | 43.3 | 76 |
| C13 | 0.0301 | 80 | 44.3 | 61.7 | 82.3 |
| C14 | 0.0363 | 96.7 | 30 | 53.3 | 73.3 |
| C21 | 0.2554 | 71.9 | 59.4 | 68.8 | 71.9 |
| C22 | 0.0237 | 88.8 | 47.5 | 66.5 | 73.5 |
| C23 | 0.1492 | 93.8 | 31.3 | 75 | 81.3 |
| C24 | 0.0715 | 87.5 | 37.5 | 68.8 | 81.3 |
| C31 | 0.0441 | 87.5 | 55 | 70 | 80 |
| C32 | 0.0584 | 100 | 37.5 | 68.8 | 81.3 |
| C33 | 0.0357 | 87.5 | 50 | 75 | 100 |
| C34 | 0.0740 | 83.3 | 40 | 61.7 | 76.7 |
| Evaluation Values of Community FELS Dynamic Capability | — | 86.76 | 47.05 | 62.43 | 79.02 |
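The bottom-row evaluation values follow from the linear weighting approach: each community's score is the sum of its indicator scores multiplied by the combined weights. A minimal sketch reproducing Community A's value from the table (with the rounded weights shown, the result is 86.77 versus the reported 86.76; the small gap is rounding in the published weights):

```python
# Combined weights and Community A's indicator scores, copied from the table
weights = [0.0849, 0.1368, 0.0301, 0.0363, 0.2554, 0.0237,
           0.1492, 0.0715, 0.0441, 0.0584, 0.0357, 0.0740]
scores_A = [96, 95, 80, 96.7, 71.9, 88.8, 93.8, 87.5, 87.5, 100, 87.5, 83.3]

def weighted_score(weights, scores):
    """Linear weighted aggregation: S = sum_j (w_j * x_j)."""
    return sum(w * x for w, x in zip(weights, scores))

print(round(weighted_score(weights, scores_A), 2))
```

Because C21 carries the largest combined weight (0.2554), differences on that dimension dominate the spread between the four communities, consistent with the capability gaps the paper attributes to key high-weight dimensions.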
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Li, H.; Mao, H.; Guo, Z.; Shao, Q. Data-Driven Evaluation of Dynamic Capabilities in Urban Community Emergency Language Services for Fire Response. Fire 2026, 9, 15. https://doi.org/10.3390/fire9010015

