**Kwoting Fang \* and Shuoche Lin**

National Yunlin University of Science and Technology, Douliu City, Yunlin County 640, Taiwan; diablo79802@gmail.com

**\*** Correspondence: fangkt@yuntech.edu.tw

Received: 15 April 2019; Accepted: 18 June 2019; Published: 25 June 2019

**Abstract:** This paper presents the TTIPP methodology, an integration of task analysis, task ontology, integration definition function modeling (IDEF0), Petri net, and Petri net markup language (PNML), to organize and model the task knowledge in the form of natural language expressions acquired during the knowledge-acquisition process. The goal of the methodology is to make the tasks more useful, accessible, and sharable through the web for a variety of stakeholders interested in solving a problem which is expressed mostly in linguistic form, and to shed light on the nature of problem-solving knowledge. This study provides a core epistemology for the knowledge engineer while developing the task ontology for a generic task. The proposed model overcomes the drawbacks of IDEF0, namely its static nature, and of Petri nets, which lack a concept of hierarchy. A good number of countries lie on the typhoon and earthquake belts, which makes them vulnerable to natural calamities. However, a practical incident command system (ICS) that provides a common framework to allow emergency responders of different backgrounds to work together effectively for standardized, on-the-scene, incident management has yet to be developed. There is a strong need to explicitly share, copy, and reuse the existing problem-solving knowledge in a complex ICS. As an example, the TTIPP model is applied to the task of emergency response for debris-flow during a typhoon as a part of an ICS.

**Keywords:** problem-solving; incident command system; task ontology; knowledge management

#### **1. Introduction**

In the past few decades, several large-scale earthquakes have occurred in various parts of the world, such as the 1994 Northridge earthquake in the U.S., the 1995 Hanshin-Awaji earthquake in Japan, the 1999 Chi-chi earthquake in Taiwan, and the 2001 Izmit earthquake in Istanbul, Turkey. In 2003, a series of earthquakes shook parts of the world, including Altay, Russia; Boumerdes, Algeria; Hokkaido, Japan; and Bam, Iran. These were followed by the December 2004 earthquake in Indonesia, with its devastating large-scale tsunami. Recently, in 2019, earthquakes struck Sulawesi, Indonesia; the area off the east coast of Honshu, Japan; the South Sandwich Islands region; and the Banda Sea [1]. These natural calamities caused tragic loss of life, severe property damage, and a decline in regional prosperity.

Although a lot of effort has been spent on protecting people from natural calamities around the world, some countries that lie on the typhoon and earthquake belts are still vulnerable to natural calamities. However, there is no practical incident command system (ICS) that provides a common framework allowing people to work together effectively for standardized, on-the-scene, incident management. There is a strong need to explicitly share, copy, and reuse prior or existing problem-solving knowledge in a complex ICS.

In general, there are four ICS stages: mitigation, preparedness, response, and post-disaster reconstruction, and each stage needs a complex and dynamic problem-solving method. ICS is a complex problem which requires collaboration and participation among many different stakeholders with conflicting interests, and it also covers the complex aspects of environmental, economic, and social problems. With the vigorous development and rapid spread of the World Wide Web, a significant quantity of information is available to people. However, reusable and sharable problem-solving knowledge in an ICS has emerged much more slowly. A stream of existing knowledge representation techniques can be used, including semantic networks, frames, uncertain reasoning, ontology, and rules [2–6]. For a complex problem, such as an incident command system, simpler mechanisms using declarative statements that are true or false, such as semantic networks or frames, are not appropriate. In a similar sense, uncertainty reasoning only provides solutions for situations when true or false conclusions cannot be reached, which may lead to conflicts with problem-solving knowledge in managerial practice [7].

To bridge the gap between real-world problem-solving methods and information found on the Internet and to enable people to communicate with computers by using accurate syntax and semantics, this research proposes a novel model not only for effectively capturing and representing real-world problem-solving knowledge, but also for overcoming the drawbacks of the existing methodologies for integration of heterogeneous information. The concept of task ontology is first adopted to build a three-level mediation representation for a task analysis. Second, an integrated methodology, integrating task analysis, task ontology, integration definition function modeling (IDEF0), Petri net, and Petri net markup language (PNML) (TTIPP), is proposed to systematically analyze the tasks and subtasks in terms of the inputs, outputs, mechanisms, and controls, using integration definition function modeling (IDEF0) and Petri net. Finally, Petri net markup language (PNML) is used as a standard interchange format to make the tasks of searching, displaying, integrating, and maintaining more accessible through the web. The practicality of the proposed model is demonstrated through an emergency response for debris-flow cases. It is hoped that this model lays the groundwork for understanding how to build reusable and sharable real-world problem-solving knowledge.

#### **2. Related Work**

#### *2.1. Knowledge Management*

With the dramatic growth of globalization and the necessity to boost value creation, knowledge has continuously played the important role of being the major source of sustainable advantage [8]. Alonderienne et al. [9] defined knowledge as being the result of a process which combines ideas, rules, procedures, and information. Specifically, based on reasoning and understanding, made by the mind, through the posterior and the frontal hierarchy of cognitive networks, people can capture perceptual information and executive information through experience, learning, or introspection [10]. From an evolutionary economic standpoint, Erkut [11] pointed out that the generation of new knowledge, in the shaping of markets, is associated with the perception of objectively available information in a system (called the nano dimension). Perception means that an individual classifies a certain experience according to his or her own categorization by means of pattern recognition, through the interactions of hierarchical networks in the cerebral cortex, based on similar past experiences existing in an ever-evolving cerebral cortex [12]. Polanyi [13] divided knowledge into two categories: tacit (weak) and explicit (hard) knowledge. Explicit knowledge can easily be codified and transmitted through formal and systematic processes as published knowledge [14]. In contrast, tacit knowledge, with personal contextual expertise, has a cognitive component that intervenes in perception and learning [15].

In the past decade, perhaps the most dramatic evolution in business has been the dawn of the so-called "new economy" based on the Internet and information technologies (IT), such as intelligent systems technologies. The new economy is established when an organization successfully shifts its economic value toward intellectual assets: assets of information, product distribution, and affiliation.

In general, an analysis of the literature can identify a set of different approaches for data analysis, whether numerical or text. Essentially, they concern two dimensions of data analysis: linguistic techniques and statistical approaches. Roughly, they can be categorized into several groups based on the differences and similarities in their features, i.e., similarity and nearest-neighbor methods, document similarity, decision rules, decision trees, and probabilistic linear scoring methods [16].

Based on Twitter data from a large multinational telecommunication company and 200 managers, with three years of communication data, Fronzetti Colladon and Gloor [17] combined social network analysis (SNA) and text mining to investigate the effect of spammers' activity on the network structure. More recently, given process fragmentation and information exchange among port actors, Aloini et al. [18], by exploiting data from the Port Community Systems, integrated process mining (PM), SNA, and text mining to reconstruct, analyze, and evaluate the information exchange network of the freight export process.

Knowledge creation and knowledge management have been the new goals for organizations that want to increase their competitiveness. Unfortunately, due to a lack of absorptive capacity, many knowledge management projects are, in reality, information systems projects [19–21]. Gold et al. [22] mentioned that knowledge management becomes questionable when the projects only provide some consolidated data but lack innovation that is extended from prior knowledge or innovation which is unprecedented. Overall, to reach knowledge management from information management, in terms of exchange and combination, is a rather complex process that involves developing a knowledge structure that enables organizations to effectively generate knowledge [23–26].

Over the past few decades, many methodologies have emerged for building new knowledge bases, particularly ontologies, from scratch and from existing bases in a variety of settings. Combining METHONTOLOGY and WebODE, in terms of management and support activities, Corcho et al. [27] built a legal entity ontology in the context of the Spanish legal domain. From the viewpoint of knowledge workers on the day-to-day ontology life cycle, Kotis and Vouros [28] presented the human-centered ontology engineering methodology (HCOME) in living ontologies that can accentuate the role of knowledge workers in shaping their information by actively being involved in ontology engineering tasks. In 2011, Villazon-Terrazas et al. [29] developed a network of ontology networks, including local ontology networks and a reference ontology network, using the NeOn methodology to enable an exchange of curricula vitae and job offers in different languages in a semantic interoperability platform. Given existing ontology problems, Sofia Pinto, Tempich, and Staab [30] proposed the DILIGENT methodology that draws domain experts, users, knowledge engineers, and ontology engineers together to collaboratively build an ontology to solve the drawbacks of decentralization, partial autonomy, iteration, and non-expert builders.

There is burgeoning interest in the study of knowledge management which pertains to the interdisciplinary nature of research and practice in decision-making, with particular emphasis on ontology means and methods. From the standpoint of expert knowledge and complying with railway standards, Saa et al. [31] developed an ontology-driven decision support system for designing complex railway problems. Focused on integrating and restructuring methods in the repository, Ziemba et al. [32] adopted the algorithm to build a repository of knowledge about the methods for assessing the quality of a website. To satisfy particular accessibility needs in e-learning contexts, Elias, Lohmann, and Auer [33] presented rule-based queries, in terms of ontology, to retrieve relevant educational resources for learners with disabilities. Traditionally, water pollution accidents have been digitalized through the combination of monitoring sensors, management servers, and application software by adopting mechanistic water-quality models with achieved data. Meng et al. [34] provided the architecture of the ontology-underpinned emergency response system for water pollution accidents to make the water pollution information semantic and the referred applications intelligent. Due to a lack of knowledge systematization in the sustainability assessment domain, Konys [35] contributed knowledge-based mechanisms, with formal, practical, and technological guidance, to make the collected knowledge publicly available, reusable, and interoperable.

The growth of the Internet offers enormous potential for professionals and creating a significant body of virtual communities of practice can provide alternative channels for professionals to collaborate with their peers, manage information, and develop and spread knowledge. Research on social interaction falls into three broad categories [36]: (1) connectivity, (2) interactivity, and (3) language use. Through a seven-year longitudinal study with 14,000 members of 16 different healthcare virtual communities of practice, Antonacci et al. [37] pointed out that centralized structure, dynamic leaders, and complex language have driven the growth of the community. Moreover, by enriching the theoretical foundation or framework of knowledge creation and sharing, particularly in an online discussion forum, Barker [38] discovered that, from a continuous basis standpoint, an expert should play a proactive role to ensure new knowledge is created and shared by individuals.

Noy and Musen [39] point out that one of the major shortcomings of the current technology for knowledge-base building is the lack of both reusable and sharable knowledge. Because one must build knowledge bases based on "what one believes" and cannot take into consideration "justified true beliefs" derived from the actual and potential resources, the difficulty of building knowledge bases increases. Clearly, facilitating usable and useful knowledge should thus contribute to making it easier to build knowledge bases and fit them into the context within which they must be used. In order to achieve this, Therani [40] and Mizoguchi et al. [41] suggest expertise can be decomposed into a task-dependent but domain-independent portion, in which applications can use common data for all domains but not for all tasks, and a task-independent but domain-dependent portion, in which applications can use common data for all tasks but not for all domains. The former is called "task knowledge", formalized knowledge for a domain-independent problem-solving process.

#### *2.2. Task Ontology*

Ontologies have been a field of study of growing importance in academia from the late twentieth to the early twenty-first century. This phenomenon stems from both their conceptual use in organizing information and their practical use in communicating system characteristics [22,35,42].

In general, an ontology can be viewed as an information model that explicitly describes the various entities and abstractions that exist in a universe of discourse, along with their properties [43,44]. Furthermore, an ontology specifies a conceptual phrase partly to articulate knowledge-level theories of a certain field. From a system standpoint, ontologies provide an overarching framework and vocabulary with which to describe system components and relationships for communicating among architecture and domain areas [45]. Therefore, the more the essence of things is captured, the more possible it is for an ontology to be shared [46–49].

A number of different categorizations of ontologies have been proposed. Van Heijst et al. [50] classify ontologies according to the amount and type of structure of the conceptualization and the subject of the conceptualization, while Guarino [51] distinguishes the type of ontologies by their level of dependence on a particular task or point of view. Subsequently, Lassila and McGuinness [52] group ontologies from the perspective of the information the ontology needs to express and the richness of its internal structure.

Ontologies and problem-solving methods (PSMs), higher-order cognitive processes that require the modulation and control of more routine or fundamental skills, have been created to share and reuse knowledge and reasoning behavior across domains and tasks [20,46]. In general, ontologies are concerned with static domain knowledge of a given specific domain, while PSMs deal with modeling reasoning processes and describing the vocabulary related to a generic task or activity. Benjamins and Gomez-Perez [53] define PSMs as a way of achieving the goal of a task. PSMs have inputs and outputs, and may decompose a task into subtasks, and subtasks into methods. In addition, a PSM specifies the data flow between its subtasks. Guarino [51] defines task ontology as an ontology which formally specifies the terminology associated with a problem type, a high-level generic task which has characteristic generic classes of knowledge-based application. Chandrasekaran and Benjamins [54] also define task ontology as "a base of generic vocabulary that organizes the task knowledge for a generic task." From a problem-solving viewpoint, Newell [55] illustrates that task ontology can be used to model the problem-solving behavior of a task, either at the knowledge level or the symbolic level. Thus, the advantage of task ontology is that it specifies not only a skeleton of the problem-solving process, but also the context in which domain concepts are used. In 2007, Mizoguchi et al. [56] developed an ontology-development tool known as Hozo which has the ability to deal with roles according to their context dependencies. Ikeda et al. [57] suggest that task ontology can be roughly interpreted in two ways: (1) task–subtask decomposition together with task categorization; and (2) an ontology for specifying problem-solving processes.
They developed a conceptual level programming environment (CLEPE) based on task ontology in order to make problem-solving knowledge explicit and to exemplify its availability. Rajpathak et al. [58] formalized the task–method–domain–application knowledge modeling framework, which supports both constructing a generic scheduling task ontology to formalize the space of scheduling problems as well as constructing a generic problem-solving model of scheduling that generalizes from the variety of approaches to scheduling problem-solving.

With the volumes of information that continue to increase, the task of turning and integrating this resource dispersed across the Web into a coherent corpus of interrelated information has become a major problem [59]. The emergence of the Semantic Web, providing highly readable data without modifying any of the contents, has shown great promise for the next generation of more capable information technology solutions and marks another stage in the evolution of ontologies and PSMs [46]. Berners-Lee [60], who coined the term Semantic Web, comments that it is envisioned as an extension of the current Web, in which information is given well-defined meaning to better enable computers and people to work in cooperation, effectively interweaving human understanding of symbols with machine processability [61]. The way to the fulfillment of this cooperation can be paved by sharing and re-using domain and task ontologies. The Semantic Web, with domain and task ontologies, can solve some problems much more simply than before and can make it possible to provide certain capabilities that have otherwise been very difficult to support [62–64].

#### **3. Research Methodology**

In this section we present the TTIPP methodology, which is an integration of task analysis, task ontology, integration definition function modeling (IDEF0), Petri net, and Petri net markup language (PNML), along with the framework shown in Figure 1, to organize and model the task knowledge acquired during the knowledge-acquisition process. The TTIPP methodology is aimed not only at reducing the brittle nature of traditional knowledge-based systems, but also at enhancing knowledge reusability and shareability over different real-world problem-solving applications. Moreover, the proposed model overcomes the drawbacks of the two formalisms, namely the static nature of IDEF0 and the lack of a concept of hierarchy in Petri nets. From the viewpoint of knowledge sharing, the TTIPP methodology can integrate heterogeneous information and distributed information sources to resolve the problems of information access on the Web by translating information into machine-processable semantics to facilitate communication between machines and humans.

The TTIPP is composed of three layers and five phases (Figure 1). The top layer, the lexical level model, deals mainly with the syntactic aspect of the problem-solving description in terms of the task analysis phase and task ontology phase. The middle layer, called the conceptual level model, captures the conceptual level meaning of the activity description. The IDEF0 and Petri net models are used in this layer. The bottom layer, called the symbol level model, is the PNML corresponding to the executable program and specifying the computational semantics of problem solving.

This study provides a core epistemology for the knowledge engineer while developing the task ontology for a generic task. The five phases of the proposed integrated model are described in the following sub-sections.

**Figure 1.** Research framework.

#### *3.1. Phase-I: Task Analysis*

During the first phase, the nature of a task needs to be thoroughly analyzed at a fine-grained level with diverse informational needs. Structured, semi-structured, or even unstructured knowledge is acquired and elicited from various sources, such as the available literature on the task, the test cases specific to the problem area, the actual interview of the domain experts, one's previous experience in the field, etc. Ikeda et al. [57] divide task analysis into two major steps: (1) rough identification and (2) detailed task analysis.

Based on various sources of knowledge, rough identification of task structure is a classification problem, while detailed task analysis is concerned with interaction with domain experts and then articulating how they perform their tasks. Once various knowledge sources, in a variety of forms such as documents, facts, and records, are analyzed in detail, the important concepts from all of the different classes of application lead to a heightened awareness in such a way that this knowledge provides enough theoretical foundation for expressing the nature of the problem. Accordingly, the initial focus of the task analysis is to concentrate on the most important concepts around which the task ontology needs to be built.

#### *3.2. Phase-II: Task Ontology*

Detailed categorization of concepts involved is indispensable for task knowledge description. This stage provides a fundamental understanding of the relationships among different concepts. Also, in accordance with the elicited concepts given in the previous phase, this stage provides the ontological engineer with an idea about the important axioms that need to be developed in order to decide on the competence of the task ontology. From the standpoint of granularity and generality, following Ikeda et al. [57], the lexical level task ontology consists of four concepts: (1) generic nouns representing objects reflecting their roles that appear in the problem-solving process; (2) generic verbs representing unit activities that appear in the problem-solving process; (3) generic adjectives and/or adverbs modifying the objects; and (4) generic constraints specific to the task. Figure 2 presents the hierarchy of the lexical level task ontology.

**Figure 2.** Lexical level of task ontology.

#### *3.3. Phase-III: IDEF0 Model*

During this phase, task ontology in the research framework can be operationalized by using a formal modeling language tool. IDEF0 is a widely-used, activity-oriented graphic notation and modeling approach for system specification and requirement analysis [65,66]. It transforms the concepts described at the natural language level into the formal knowledge modeling level in terms of structured graphical forms. A multi-level model with different classes and relations is created in order to decompose the complex problem into smaller and more detailed sub-problems until the purpose of the model building is reached. An IDEF0 diagram consists of an ordered set of boxes that represent activities performed by a given task. Each box or component in the diagram, representing a given activity, has a simple syntax, shown in Figure 3, with inputs of the activity entering from the left side and the results or outputs of the activity exiting from the right side. The mechanisms, indicated by arrows entering from the bottom of the box, represent resources such as machines, computers, operators, etc. The controls, shown by arrows entering from the top, represent control information, such as parameters and rules of the control systems. The boxes in an IDEF0 diagram, called ICOM for input–control–output–mechanism, are hierarchically decomposed in as many levels as necessary until there is sufficient detail on the basic activities to serve the tasks [21]. The mappings between elements of IDEF0 diagrams and generic vocabularies, from the lexical level to the conceptual level (see Figure 1), are shown in Table 1.
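To make the ICOM syntax and hierarchical decomposition concrete, the sketch below models an IDEF0 activity box as a small Python data structure. This is an illustrative assumption, not part of the IDEF0 standard: the `Activity` class and the debris-flow activity names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    """One IDEF0 box: an activity with its ICOM arrows."""
    name: str
    inputs: List[str]      # arrows entering from the left
    controls: List[str]    # arrows entering from the top (rules, parameters)
    outputs: List[str]     # arrows exiting from the right
    mechanisms: List[str]  # arrows entering from the bottom (resources)
    children: List["Activity"] = field(default_factory=list)  # decomposition

def depth(a: Activity) -> int:
    """Number of decomposition levels below (and including) this activity."""
    return 1 + max((depth(c) for c in a.children), default=0)

# A toy two-level decomposition (hypothetical names, for illustration only)
respond = Activity(
    name="Respond to debris flow",
    inputs=["rainfall report"], controls=["evacuation rules"],
    outputs=["evacuation order"], mechanisms=["command center"],
    children=[
        Activity("Assess hazard", ["rainfall report"], ["thresholds"],
                 ["hazard level"], ["duty officer"]),
        Activity("Issue order", ["hazard level"], ["evacuation rules"],
                 ["evacuation order"], ["command center"]),
    ],
)
print(depth(respond))  # two levels of decomposition
```

Decomposition stops, as in IDEF0 practice, when the leaf activities are detailed enough to serve the tasks.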

**Figure 3.** Component of the integration definition function modeling (IDEF0) model.


**Table 1.** The mapping between IDEF0 and vocabulary elements.

#### *3.4. Phase-IV: Petri Net Model*

Broadly speaking, the IDEF0 model has a number of disadvantages, such as its static nature and ambiguity in activity specification [62]. A Petri net consists of three elements: (1) places, drawn as circles; (2) transitions, drawn as bars; and (3) arcs, connecting places and transitions, as shown in Figure 4a [45]. Known as condition/event nets or place/transition nets, Petri nets are suitable for representing the structure of hierarchical systems that exhibit concurrency, conflict, and synchronization [67]. To clearly visualize the flow of information and control through transitions, a Petri net (PN) allows a place to hold zero or a positive number of tokens, pictured as small solid dots. Generally, a PN is defined as a quintuple, PN = (P, T, I, O, m) [68], where:

P = {p1, p2, ..., pn} is a finite set of places, where integer n > 0;

T = {t1, t2, ..., ts} is a finite set of transitions, where integer s > 0, with P ∪ T ≠ Ø and P ∩ T = Ø;

I: P × T → N is the input incidence function, an n × s matrix of nonnegative integers that defines the set of directed arcs from P to T, where N = {0, 1, 2, 3, ...};

O: P × T → N is the output incidence function, an n × s matrix of nonnegative integers that defines the set of directed arcs from T to P;

m: P → N is the marking vector whose ith component represents the number of tokens in the ith place. An initial marking is denoted by m0.

The change of system states, called transition firing, happens on an event when all the input places hold a sufficient number of tokens. Cassandras and Lafortune [69] further explain that a transition is enabled when each input place P of T contains at least a number of tokens equal to the weight of the directed arc connecting P to T. When an enabled transition T1 fires, as shown in Figure 4b, it removes a token from its input place and deposits it in its output place. Therefore, the execution rules of a PN, comprising the enabling and firing rules, are as follows [68]:


$$\begin{aligned} \mathbf{m'}(p) &= \mathbf{m}(p) - \mathbf{I}(p,t) + \mathbf{O}(p,t) \\ &= \mathbf{m}(p) + \mathbf{C}(p,t) \qquad \forall p \in P \end{aligned}$$

where C = O − I and m' is said to be immediately reachable from m.
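The enabling and firing rules can be sketched in a few lines of Python using the incidence matrices I and O. The net below is an assumed minimal example in the spirit of Figure 4: a single transition t1 that moves a token from place p1 to place p2.

```python
import numpy as np

# Incidence functions I and O (rows = places, columns = transitions) for a
# two-place, one-transition net: t1 moves a token from p1 to p2.
I = np.array([[1],    # p1 feeds t1
              [0]])   # p2 does not
O = np.array([[0],    # t1 does not output to p1
              [1]])   # t1 outputs to p2
C = O - I             # incidence matrix C = O - I

m0 = np.array([1, 0])  # initial marking: one token in p1

def enabled(m, t):
    """t is enabled iff every input place holds at least I(p, t) tokens."""
    return bool(np.all(m >= I[:, t]))

def fire(m, t):
    """Firing rule: m'(p) = m(p) - I(p, t) + O(p, t) = m(p) + C(p, t)."""
    assert enabled(m, t), "transition not enabled"
    return m + C[:, t]

m1 = fire(m0, 0)
print(m1)  # [0 1] -- the token has moved from p1 to p2
```

After firing, t1 is no longer enabled, since p1 is now empty.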

**Figure 4.** Component of a Petri net. (**a**) The token in its input place. (**b**) The token transfer into its output place.

Murata [70] presents invariant analysis methods, including the P-invariant and the T-invariant, to govern the dynamic behavior of concurrent systems modeled by Petri nets. He calls C = O − I an incidence matrix and uses it to prove that there are subsets of places over which the sum of the tokens remains unchanged (P-invariants) and transition firing sequences that bring the marking back to the same one (T-invariants). In writing matrix equations, Murata [70] describes the execution rules as abiding by the following equation:

$$\mathbf{m}_{k} = \mathbf{m}_{k-1} + \mathbf{C}\mathbf{u}_{k} \qquad k = 1, 2, 3, \dots \tag{1}$$

where mk denotes a marking immediately reachable from marking mk−1. The kth firing vector uk is an s × 1 column vector; if transition ti fires at the kth firing, then the ith position of uk is 1, the other positions are 0, and the ith column of C represents the corresponding change of marking. He further defines a P-invariant as a nonzero nonnegative integer solution x of CTx = 0; the previously stated equation is then rewritten as follows:

$$\mathbf{x}^{\mathrm{T}}\mathbf{m}_{k} = \mathbf{x}^{\mathrm{T}}\mathbf{m}_{k-1} + \mathbf{x}^{\mathrm{T}}\mathbf{C}\mathbf{u}_{k} \qquad k = 1, 2, 3, \dots \tag{2}$$

Since CTx = 0, we have xTC = 0, and then

$$\mathbf{x}^{\mathrm{T}}\mathbf{m}_{k} = \mathbf{x}^{\mathrm{T}}\mathbf{m}_{k-1} \qquad k = 1, 2, 3, \dots \tag{3}$$

Therefore, xTmk = xTmk−1 = constant.

He also defines a nonzero nonnegative solution y of Cy = 0 as a T-invariant, associated with firing a sequence of transitions that leads m0 back to m0, where the ith element of the aggregate firing vector u is the number of times ti fires in the sequence. Clearly, from the equation m0 = m0 + Cu, it follows that Cu = 0 and u is a T-invariant.
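Both invariant conditions are simple linear-algebra checks on the incidence matrix. The sketch below, using an assumed two-place cycle net (t1 moves a token p1 → p2, t2 moves it back), verifies a P-invariant (CTx = 0), a T-invariant (Cy = 0), and the conservation law xTmk = xTmk−1 along one firing.

```python
import numpy as np

# A two-place cycle: t1 moves a token p1 -> p2, t2 moves it back p2 -> p1.
I = np.array([[1, 0],   # p1 feeds t1
              [0, 1]])  # p2 feeds t2
O = np.array([[0, 1],   # t2 outputs to p1
              [1, 0]])  # t1 outputs to p2
C = O - I               # incidence matrix

x = np.array([1, 1])    # candidate P-invariant
y = np.array([1, 1])    # candidate T-invariant

is_p_invariant = bool(np.all(C.T @ x == 0))  # token sum over p1, p2 conserved
is_t_invariant = bool(np.all(C @ y == 0))    # firing t1 then t2 restores the marking
print(is_p_invariant, is_t_invariant)        # True True

# Check the conservation law x^T m_k = x^T m_{k-1} along one firing of t1
m0 = np.array([1, 0])
u1 = np.array([1, 0])        # firing vector: fire t1
m1 = m0 + C @ u1             # Equation (1)
assert x @ m0 == x @ m1      # weighted token count unchanged
```

The P-invariant here says the single token is never created or destroyed; the T-invariant says the cycle t1, t2 returns the net to its initial marking.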

To transform static models generated by the IDEF0 method into a dynamic PN model, Santarek and Buseif [21] developed the following transformation rules:

Tr1: If activities exist, then transform them into a Petri net sequence: transition–place–transition.

Tr2: If arrow outputs exist, then transform them into PN places with tokens in them.

Tr3: (1) If shared mechanisms exist, then transform them into PN places with tokens in them.

(2) If a shared mechanism that is decomposed into sub-mechanisms exists, then no PN place is generated for it.

Tr4: When no mechanism used in the PN remains untransformed in any IDEF0 diagram, the transformation is completed.

The relationships, from a static perspective to dynamic viewpoint, between IDEF0 diagrams and Petri net components are presented in Table 2 [67].
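Rules Tr1–Tr3 can be read as a mechanical mapping from IDEF0 elements to PN elements. The Python sketch below applies them to flat lists of activity, output, and mechanism names; the data structures and naming scheme are illustrative assumptions, not part of Santarek and Buseif's formulation, and Tr3(2) does not arise because the mechanisms are assumed to be at the bottom level already.

```python
def idef0_to_pn(activities, outputs, mechanisms):
    """Sketch of transformation rules Tr1-Tr3 over flat name lists."""
    places, transitions, marking = [], [], {}
    # Tr1: each activity becomes a transition-place-transition sequence
    for a in activities:
        transitions += [f"start_{a}", f"end_{a}"]
        places.append(f"busy_{a}")
        marking[f"busy_{a}"] = 0
    # Tr2: arrow outputs become PN places holding tokens
    for o in outputs:
        places.append(o)
        marking[o] = 1
    # Tr3(1): shared mechanisms become PN places holding tokens (resources)
    for mech in mechanisms:
        places.append(mech)
        marking[mech] = 1
    # Tr4: with every activity and mechanism transformed, the net is complete
    return places, transitions, marking

places, transitions, marking = idef0_to_pn(
    ["assess"], ["hazard_level"], ["duty_officer"])
print(places)  # ['busy_assess', 'hazard_level', 'duty_officer']
```

Connecting the generated places and transitions with arcs would follow the ICOM arrows of the source diagram, which this flat sketch does not carry.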


## *3.5. Phase-V: Petri Net Markup Language*

The Petri net markup language (PNML) is an extensible markup language (XML)-based interchange format for Petri nets. PNML is designed to be a Petri net interchange format that is independent of specific tools and platforms. Moreover, the interchange format needs to support different dialects of Petri nets and must be extensible. Thus, PNML should necessarily include the following essential characteristics [71]: (1) flexibility to represent any kind of Petri net with its specific extensions and features; (2) assurance that ambiguity is removed so that a net's PNML representation is uniquely determined; and (3) compatibility to exchange as much information as possible between different types of Petri nets.

Even with the mature development of Petri net technology, it is difficult to know what will be possible in the future. Certainly, PNML should shed light on the definition of Petri net types to support different versions of Petri nets and, in particular, future versions. Given the above-mentioned information, PNML is adopted as a starting point for a standard interchange format for Petri nets. For implementation purposes, XML was used because it is platform independent and has many tools available for reading, writing, and validating. Table 3 presents the translation of the PNML meta model into XML elements, along with the attributes and their data types.
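Because PNML is XML-based, a simple place/transition net can be serialized with any XML library. The sketch below uses Python's standard `xml.etree.ElementTree`; the element names (`pnml`, `net`, `place`, `transition`, `arc`, `initialMarking`) follow the PNML meta model, while the `id` values and the net `type` URI are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def to_pnml(places, transitions, arcs, marking):
    """Serialize a place/transition net to a PNML-style XML string (a sketch:
    tool-specific attributes and graphics elements are omitted)."""
    pnml = ET.Element("pnml")
    net = ET.SubElement(pnml, "net", id="net1",
                        type="http://example.org/ptnet")  # assumed type URI
    for p in places:
        pl = ET.SubElement(net, "place", id=p)
        if marking.get(p, 0) > 0:
            mk = ET.SubElement(pl, "initialMarking")
            ET.SubElement(mk, "text").text = str(marking[p])
    for t in transitions:
        ET.SubElement(net, "transition", id=t)
    for i, (src, tgt) in enumerate(arcs):
        ET.SubElement(net, "arc", id=f"a{i}", source=src, target=tgt)
    return ET.tostring(pnml, encoding="unicode")

# The one-transition net of Figure 4: a token in p1, moved to p2 by t1
xml_text = to_pnml(["p1", "p2"], ["t1"],
                   [("p1", "t1"), ("t1", "p2")], {"p1": 1})
print(xml_text)
```

Any PNML-aware tool reading such a file recovers the same places, transitions, arcs, and initial marking, which is the interchange property the format is designed for.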


**Table 3.** Translation of the Petri net markup language (PNML) meta model into extensible markup language (XML) elements and attributes.

#### **4. An Example Application**

The development of an incident command system (ICS) for a natural disaster requires collaboration and participation at the national level, as well as the local community level. The TTIPP methodology could not only help to integrate heterogeneous and distributed information sources with machine-processable semantics, but also help users browse the information based on semantics with a common understanding to achieve the purpose of knowledge sharing.

As an example application, the TTIPP model presented in this paper was used for management of the knowledge of debris-flow during typhoons in the past three decades in the area of Homchu in Nantou county located in Central Taiwan. Homchu village is located at an altitude of 550 to 750 m. It is connected to Mingde village in the north, Tongfu village in the south, and Chenyoulan River in the west (see Figure 5). There are over 70 families, with 451 males and 370 females, in the village and they plant grape orchards as their main source of income. The village was the site of significant damage resulting from the earthquake on September 21st, 1999. Such typhoons and the resulting debris flow originating from the mountainous region are an annual occurrence on the Chenyoulan River in the region. The Homchu community activity center, elementary school, and church are the main distribution centers when it comes to disaster prevention and refuge in the village. Prior to a disaster, they store refuge materials and prepare temporary shelters and medical emergency stations. The main task is to protect the public from debris flow, which includes water, rocks, soil, and tree trunks and it is divided into four subtasks: mitigation, preparedness, response, and post-disaster reconstruction. The subtasks correspond with the rough identification steps of Ikeda et al. [57].

**Figure 5.** Study area.

#### *4.1. Phase-I: Task Analysis*

Based on the available literature on the task, an open-ended interview questionnaire, in terms of how, when, who, what, how many, and why, specific to the aforementioned subtasks was designed, and the six stakeholders (different teams or domain experts) involved in the ICS operations were invited to share their experience and reconfirm the content of their interviews, which were recorded and transcribed verbatim for future analysis. Table 4 shows the profile of the domain experts, along with their numbers of years of experience with the ICS for debris-flow management during typhoons.

