
Table of Contents

Information, Volume 9, Issue 2 (February 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and open them with the free Adobe Reader.
Displaying articles 1-21
Open Access Article Software-Driven Definition of Virtual Testbeds to Validate Emergent Network Technologies
Information 2018, 9(2), 45; https://doi.org/10.3390/info9020045
Received: 9 January 2018 / Revised: 9 February 2018 / Accepted: 21 February 2018 / Published: 24 February 2018
PDF Full-text (1860 KB) | HTML Full-text | XML Full-text
Abstract
The lack of privileged access to emergent and operational deployments is one of the key obstacles in the validation and testing of novel telecommunication systems and technologies. This limitation jeopardizes the repeatability of experiments, which hampers innovation and research in these areas. In this light, we present a method and architecture that ease the software-driven definition of virtual testbeds. As a distinguishing feature, our proposal can mimic operational deployments by using high-dimensional activity patterns. These activity patterns shape the effect of a control module that triggers agents for the generation of network traffic. This solution exploits the capabilities of network emulation and virtualization systems, which nowadays can be easily deployed on commodity servers. With this, we accomplish a reproducible definition of realistic experimental conditions and the introduction of real agent implementations in a cost-effective fashion. We evaluate our solution in a case study comprising the validation of a network-monitoring tool for Voice over IP (VoIP) deployments. Our experimental results support the viability of the method and illustrate how this formulation can improve experimentation in emergent technologies. Full article
(This article belongs to the Special Issue Selected Papers from WMNC 2017 and JITEL 2017)

Open Access Article A Blockchain Approach Applied to a Teledermatology Platform in the Sardinian Region (Italy)
Information 2018, 9(2), 44; https://doi.org/10.3390/info9020044
Received: 15 January 2018 / Revised: 15 February 2018 / Accepted: 19 February 2018 / Published: 23 February 2018
PDF Full-text (2313 KB) | HTML Full-text | XML Full-text
Abstract
The use of teledermatology in primary care has been shown to be reliable, offering the possibility of improving access to dermatological care by using telecommunication technologies to connect several medical centers and enable the exchange of information about skin conditions over long distances. This paper describes the main points of a teledermatology project that we have implemented to promote and facilitate the diagnosis of skin diseases and improve the quality of care for rural and remote areas. Moreover, we present a blockchain-based approach which aims to add new functionalities to an innovative teledermatology platform which we developed and tested in the Sardinian Region (Italy). These functionalities include giving the patient complete access to his/her medical records while maintaining security. Finally, the advantages that this new decentralized system can provide for patients and specialists are presented. Full article
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'17))

Open Access Article On the Use of Affordable COTS Hardware for Network Measurements: Limits and Good Practices
Information 2018, 9(2), 43; https://doi.org/10.3390/info9020043
Received: 15 January 2018 / Revised: 7 February 2018 / Accepted: 19 February 2018 / Published: 22 February 2018
PDF Full-text (552 KB) | HTML Full-text | XML Full-text
Abstract
Wireless access technologies are widespread in domestic scenarios, and end users extensively use mobile phones or tablets to browse the Web. Therefore, methods and platforms for the measurement of network key performance indicators must be adapted to the peculiarities of this environment. In this light, the experiments should capture the true conditions of such connections, particularly in terms of the hardware and multi-device interactions that are present in real networks. On this basis, this paper presents an evaluation of the capabilities of several affordable commercial off-the-shelf (COTS) devices as network measuring probes, for example, computers-on-module or domestic routers with software measurement tools. Our main goal is to detect the limits of such devices and define a guide of good practices to optimize them. Hence, our work paves the way for the development of fair measurement systems in domestic networks with low expenditures. The experimental results show that these types of devices are suitable as network measuring probes, provided that they are adequately configured and minimal accuracy losses are acceptable. Full article
(This article belongs to the Special Issue Selected Papers from WMNC 2017 and JITEL 2017)

Open Access Article A Caching Strategy for Transparent Computing Server Side Based on Data Block Relevance
Information 2018, 9(2), 42; https://doi.org/10.3390/info9020042
Received: 13 December 2017 / Revised: 4 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (2089 KB) | HTML Full-text | XML Full-text
Abstract
The performance bottleneck of transparent computing (TC) is on the server side. Caching is one of the key factors of server-side performance. The count of disk input/output (I/O) operations can be reduced if multiple data blocks that are correlated with the currently accessed data block are prefetched in advance. As a result, the service performance and user experience quality in TC can be improved. In this study, we propose a caching strategy for the TC server side based on data block relevance, called the correlation pattern-based caching strategy (CPCS). In this method, we adapt a model that is based on a frequent pattern tree (FP-tree) for mining frequent patterns from data streams (FP-stream) to the characteristics of data access in TC, and we devise a cache structure in accordance with the storage model of TC. Finally, the access process in TC is simulated with real access traces under different caching strategies. Simulation results show that the cache hit rate under the CPCS is higher than that of other algorithms when the parameters are coordinated properly. Full article
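The prefetching idea in this abstract, pulling in blocks that historically co-occur with the block just accessed, can be sketched with a toy cache. This is a hedged illustration only: the class name, co-occurrence window, and prefetch count are invented here, and the paper's actual CPCS relies on FP-stream pattern mining rather than the raw pair counts used below.

```python
from collections import Counter, OrderedDict, defaultdict, deque

class CorrelationPrefetchCache:
    # Toy LRU cache that prefetches blocks frequently co-accessed with the
    # block just requested. Illustrative only: the real CPCS mines frequent
    # patterns with an FP-stream, not raw pair counts as done here.
    def __init__(self, capacity, window=4, prefetch=2):
        self.capacity = capacity
        self.window = window            # recent accesses used for co-occurrence
        self.prefetch = prefetch        # correlated blocks pulled in per access
        self.cooc = defaultdict(Counter)
        self.recent = deque(maxlen=window)
        self.cache = OrderedDict()      # block id -> True, in LRU order
        self.hits = self.misses = 0

    def _admit(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[block] = True

    def access(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1
            self._admit(block)
        for prev in self.recent:            # update pairwise co-occurrence
            self.cooc[prev][block] += 1
            self.cooc[block][prev] += 1
        self.recent.append(block)
        for nxt, _ in self.cooc[block].most_common(self.prefetch):
            self._admit(nxt)                # prefetch correlated blocks
```

Replaying a block access trace through `access()` and reading `hits / (hits + misses)` reproduces the hit-rate metric that the abstract uses for its evaluation.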

Open Access Article Reducing JPEG False Contour Using Visual Illumination
Information 2018, 9(2), 41; https://doi.org/10.3390/info9020041
Received: 28 December 2017 / Revised: 12 February 2018 / Accepted: 13 February 2018 / Published: 20 February 2018
PDF Full-text (16946 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a simple and efficient technique for reducing the false contour problem that often occurs in JPEG decoded images. False contours appear in JPEG decoded images when a small quality factor is applied, inducing an unpleasant visual appearance. The proposed scheme exploits the usability of the Halftoning-Based Block Truncation Coding (HBTC) approach in generating visual illumination to reduce the aforementioned problem. Three HBTC techniques, namely Ordered Dither Block Truncation Coding (ODBTC), Error Diffusion Block Truncation Coding (EDBTC) and Dot Diffused Block Truncation Coding (DDBTC), modify the DC components of all Discrete Cosine Transform (DCT) processed image blocks at the JPEG encoding stage. Experimental results show the superiority of the proposed method in JPEG false contour reduction. To further improve the JPEG decoded image quality, the proposed method utilizes a Gaussian kernel in place of the DDBTC diffusion kernel for spreading the error term, assuming that adjacent image blocks have a strong correlation, so that a high diffusion coefficient should be applied to the nearest adjacent neighbors. This paper thus extends the usability of HBTC to JPEG false contour suppression. Full article

Open Access Article Mean-Variance Analysis of Retailers Deploying RFID-Enabled Smart Shelves
Information 2018, 9(2), 40; https://doi.org/10.3390/info9020040
Received: 31 December 2017 / Revised: 4 February 2018 / Accepted: 10 February 2018 / Published: 12 February 2018
PDF Full-text (1589 KB) | HTML Full-text | XML Full-text
Abstract
Radio frequency identification (RFID)-enabled smart shelves have recently attracted enormous attention from both industry and academia. Retailers have shown an explosion of interest in deploying smart shelves for automatic ordering and the reduction of inventory inaccuracy. This study explores retailers’ optimal policy for investing in a smart shelf inventory control system. Since the decision maker’s risk attitude plays an important role in technology investment, we specifically consider the retailers’ risk attitude, measured by mean-variance analysis. We derive analytical expressions for the benefit of the smart shelf and specify the conditions under which retailers should invest in it. We also show the explicit relationship between the retailer’s risk attitude and his optimal policy. Finally, we conduct a numerical analysis to provide managerial insights for retailers investing in smart shelves. Full article
(This article belongs to the Section Information Systems)
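The mean-variance criterion named in the abstract above is the standard one: a decision maker with risk-aversion coefficient k maximizes E[profit] - k * Var[profit]. A minimal numeric sketch follows; the profit scenarios and coefficient values are invented purely for illustration and are not taken from the paper.

```python
import statistics

def mean_variance_utility(profits, risk_aversion):
    # U = E[profit] - k * Var[profit], over equally likely scenarios.
    return statistics.mean(profits) - risk_aversion * statistics.pvariance(profits)

# Hypothetical profit scenarios (invented for illustration): without smart
# shelves, inventory inaccuracy makes outcomes more volatile; with them,
# tag and reader costs lower the mean but shrink the spread.
profits_without = [120, 70, 20]   # mean 70, high variance
profits_with = [68, 65, 62]       # mean 65, low variance

def should_invest(risk_aversion):
    # Invest when the mean-variance utility with smart shelves is higher.
    return (mean_variance_utility(profits_with, risk_aversion)
            > mean_variance_utility(profits_without, risk_aversion))
```

A risk-neutral retailer (k = 0) compares means only and skips the investment in this toy setup, while a mildly risk-averse one (k = 0.01) accepts the lower mean for the lower variance; this is the kind of threshold on the risk attitude that the paper derives analytically.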

Open Access Article Structural Equation Model Analysis of Factors in the Spread of Unsafe Behavior among Construction Workers
Information 2018, 9(2), 39; https://doi.org/10.3390/info9020039
Received: 11 December 2017 / Revised: 26 January 2018 / Accepted: 2 February 2018 / Published: 10 February 2018
PDF Full-text (1019 KB) | HTML Full-text | XML Full-text
Abstract
To create a safe work atmosphere, it is essential to control the spread of unsafe behaviors among construction workers. Based on group behavior theory, the relationship between the influence factors and the means of spreading unsafe behavior among construction workers is verified using a structural equation model (SEM) and multiround research data. According to the research, (1) the spreading of unsafe behaviors consists of two modes, demonstration–imitation and infecting–conformity; (2) the influence of important figures, personal safety accomplishments, the intensity of penalties for individual violations, and the benefit-to-risk ratio of unsafe behaviors are correlated significantly with the spread of unsafe demonstration–imitation behaviors; and (3) the relational closeness of members, personal safety accomplishments, safety climate, and the benefit-to-risk ratio of unsafe behaviors are correlated significantly with the spread of unsafe infecting–conformity behaviors. Full article

Open Access Article Local Patch Vectors Encoded by Fisher Vectors for Image Classification
Information 2018, 9(2), 38; https://doi.org/10.3390/info9020038
Received: 22 December 2017 / Revised: 30 January 2018 / Accepted: 6 February 2018 / Published: 9 February 2018
PDF Full-text (3029 KB) | HTML Full-text | XML Full-text
Abstract
The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, which are subsequently encoded by Fisher vectors (FV); (ii) to obtain representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) to reduce the storage and CPU costs of high-dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on the STL-10 dataset. This simple and efficient framework shows very promising performance compared to conventional methods. Full article
(This article belongs to the Section Artificial Intelligence)
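Contribution (i), raw image patch vectors as local descriptors, together with the random variant of the sampling strategies in (ii), can be sketched in a few lines. This is a hedged, pure-Python illustration; the patch size and sample count are arbitrary, and the FV encoding step that would follow is omitted.

```python
import random

def extract_patches(image, patch_size, n_samples, seed=0):
    # Randomly sample square patches from a 2-D grayscale image (a list of
    # rows) and flatten each into a raw local descriptor vector.
    # Illustrative only: patch size and sample count are arbitrary here.
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    patches = []
    for _ in range(n_samples):
        y = rng.randrange(h - patch_size + 1)
        x = rng.randrange(w - patch_size + 1)
        patches.append([image[y + dy][x + dx]
                        for dy in range(patch_size)
                        for dx in range(patch_size)])
    return patches
```

Each returned vector has `patch_size * patch_size` entries; dense sampling would instead enumerate every valid `(y, x)` position, trading runtime for coverage.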

Open Access Article NS-Cross Entropy-Based MAGDM under Single-Valued Neutrosophic Set Environment
Information 2018, 9(2), 37; https://doi.org/10.3390/info9020037
Received: 29 December 2017 / Revised: 3 February 2018 / Accepted: 6 February 2018 / Published: 9 February 2018
Cited by 3 | PDF Full-text (1032 KB) | HTML Full-text | XML Full-text
Abstract
A single-valued neutrosophic set has a strong capacity to express uncertainty characterized by indeterminacy, inconsistency and incompleteness. Most existing single-valued neutrosophic cross entropy measures exhibit asymmetrical behavior and produce an undefined phenomenon in some situations. In order to deal with these disadvantages, we propose a new cross entropy measure under a single-valued neutrosophic set (SVNS) environment, namely NS-cross entropy, and prove its basic properties. We also define a weighted NS-cross entropy measure and investigate its basic properties. We develop a novel multi-attribute group decision-making (MAGDM) strategy that is free from the drawbacks of asymmetrical behavior and undefined phenomena, and that is capable of dealing with unknown weights of attributes and decision-makers. Finally, a numerical example of a multi-attribute group decision-making problem on investment potential is solved to show the feasibility, validity and efficiency of the proposed decision-making strategy. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
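The drawback the abstract targets, asymmetry and undefined values (e.g. log 0) in earlier measures, can be made concrete with a toy divergence over (truth, indeterminacy, falsity) triples. The Jensen-Shannon-style form below is symmetric and defined everywhere on [0, 1]; it is only an illustration of those two properties, not the paper's actual NS-cross entropy formula.

```python
import math

def _d(x, y):
    # Symmetric Jensen-Shannon-style term for two degrees in [0, 1];
    # defined everywhere because 0 * log 0 is taken as 0.
    if x + y == 0:
        return 0.0
    def term(a):
        return a * math.log2(2 * a / (x + y)) if a > 0 else 0.0
    return term(x) + term(y)

def ns_like_cross_entropy(A, B):
    # Illustrative symmetric cross entropy between two single-valued
    # neutrosophic sets, each a list of (truth, indeterminacy, falsity)
    # triples. NOT the paper's NS-cross entropy formula; it only
    # demonstrates the symmetric, always-defined behavior argued for.
    return sum(_d(ta, tb) + _d(ia, ib) + _d(fa, fb)
               for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
```

The measure is zero exactly when the two sets coincide element-wise and is invariant under swapping its arguments, the two properties the paper requires of a well-behaved cross entropy.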

Open Access Article Logarithmic Similarity Measure between Interval-Valued Fuzzy Sets and Its Fault Diagnosis Method
Information 2018, 9(2), 36; https://doi.org/10.3390/info9020036
Received: 21 January 2018 / Revised: 5 February 2018 / Accepted: 7 February 2018 / Published: 8 February 2018
PDF Full-text (442 KB) | HTML Full-text | XML Full-text
Abstract
Fault diagnosis is an important task for the normal operation and maintenance of equipment. In many real situations, the diagnosis data cannot provide deterministic values and are usually imprecise or uncertain. Thus, interval-valued fuzzy sets (IVFSs) are very suitable for expressing imprecise or uncertain fault information in real problems. However, the existing literature scarcely deals with fault diagnosis problems, such as gasoline engines and steam turbines, using IVFSs, even though the similarity measure is one of the important tools in fault diagnosis. Therefore, this paper proposes, for the first time, a new similarity measure of IVFSs based on the logarithmic function and a fault diagnosis method built on it. From the logarithmic similarity measure between the fault knowledge and some diagnosis-testing samples with interval-valued fuzzy information, and its relation indices, we can determine the fault type and the ranking order of faults corresponding to the relation indices. Then, the misfire fault diagnosis of a gasoline engine and the vibrational fault diagnosis of a turbine are presented to demonstrate the simplicity and effectiveness of the proposed diagnosis method. The fault diagnosis results show that the proposed method not only gives the main fault types of the gasoline engine and steam turbine but also provides useful information for multi-fault analyses and the prediction of future fault trends. Hence, the logarithmic similarity measure and its fault diagnosis method are the main contributions of this study, and they provide a useful new way to perform fault diagnosis with interval-valued fuzzy information. Full article
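As an illustration of the idea (not the paper's formula), one plausible logarithmic similarity between interval-valued membership degrees maps a mean endpoint distance d in [0, 1] through 1 - log2(1 + d), giving 1 for identical intervals and 0 for maximally distant ones; ranking fault patterns by this score mirrors the diagnosis procedure described above. The function names and the formula itself are assumptions made for this sketch.

```python
import math

def log_similarity(a, b):
    # Illustrative log-based similarity between intervals a = (a_lo, a_hi)
    # and b = (b_lo, b_hi) with endpoints in [0, 1]; NOT the paper's formula.
    d = (abs(a[0] - b[0]) + abs(a[1] - b[1])) / 2  # mean endpoint distance
    return 1.0 - math.log2(1.0 + d)  # 1 when identical, 0 when d == 1

def diagnose(sample, fault_patterns):
    # Rank known fault patterns by mean similarity to a diagnosis-testing
    # sample; each is a list of intervals, one per symptom attribute.
    scores = {name: sum(log_similarity(x, y)
                        for x, y in zip(sample, pat)) / len(sample)
              for name, pat in fault_patterns.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

Sorting the returned scores in descending order gives the ranking order of faults that the abstract says supports multi-fault analysis.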

Open Access Article Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity
Information 2018, 9(2), 35; https://doi.org/10.3390/info9020035
Received: 18 December 2017 / Revised: 23 January 2018 / Accepted: 4 February 2018 / Published: 8 February 2018
PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. The previous resources were exploited to answer word similarity tests, which also became recently available for Portuguese. We conclude that there are several valid approaches for this task, but not one that outperforms all the others in every single test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity, but, in general, better results are obtained when knowledge from different sources is combined. Full article
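A minimal sketch of the combination the conclusion points to: a distributional score (cosine between word vectors) blended with a binary knowledge-based signal (whether a lexical knowledge base links the two words). The blending weight, the toy vectors, and the set-of-pairs LKB representation are all assumptions for illustration, not the paper's actual combination scheme.

```python
import math

def cosine(u, v):
    # Distributional similarity: cosine between two word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_similarity(u, v, lkb_pairs, w1, w2, alpha=0.5):
    # Blend the distributional score with a binary knowledge-based signal:
    # 1.0 if the lexical knowledge base links the two words, else 0.0.
    # alpha and the pair-set LKB representation are illustrative choices.
    kb = 1.0 if (w1, w2) in lkb_pairs or (w2, w1) in lkb_pairs else 0.0
    return alpha * cosine(u, v) + (1 - alpha) * kb
```

With alpha below 1, a pair that an LKB confirms as related outranks an equally close pair that no LKB links, which is the behavior the paper observes when combining sources.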

Open Access Article A Survey on Portuguese Lexical Knowledge Bases: Contents, Comparison and Combination
Information 2018, 9(2), 34; https://doi.org/10.3390/info9020034
Received: 28 December 2017 / Revised: 25 January 2018 / Accepted: 31 January 2018 / Published: 2 February 2018
PDF Full-text (270 KB) | HTML Full-text | XML Full-text
Abstract
In the last decade, several lexical-semantic knowledge bases (LKBs) were developed for Portuguese, by different teams and following different approaches. Most of them are open and freely available for the community. Those LKBs are briefly analysed here, with a focus on size, structure, and overlapping contents. However, we go further and exploit all of the analysed LKBs in the creation of new LKBs, based on the redundant contents. Both original and redundancy-based LKBs are then compared, indirectly, based on the performance of automatic procedures that exploit them for solving four different semantic analysis tasks. In addition to conclusions on the performance of the original LKBs, results show that, instead of selecting a single LKB to use, it is generally worth combining the contents of all the open Portuguese LKBs, towards better results. Full article
(This article belongs to the Special Issue Special Issues on Languages Processing)
Open Access Article Finding Group-Based Skyline over a Data Stream in the Sensor Network
Information 2018, 9(2), 33; https://doi.org/10.3390/info9020033
Received: 30 December 2017 / Revised: 26 January 2018 / Accepted: 30 January 2018 / Published: 1 February 2018
PDF Full-text (5786 KB) | HTML Full-text | XML Full-text
Abstract
With the spread of sensor network applications, large amounts of dynamic data arrive from sensors, and extracting useful information from such data is significant. A skyline query aims to identify the interesting points in a large dataset. A group-based skyline query finds the outstanding Pareto-optimal groups that cannot be g-dominated by any other group of the same size. However, existing group-based skyline (G-Skyline) algorithms focus on static datasets; how to support data streams remains largely an open problem. In this paper, we propose the group-based skyline query over a data stream. In order to compute the G-Skyline efficiently, we present a sharing strategy, based on which we propose two algorithms to compute the G-Skyline over a data stream: the point-arriving algorithm and the point-expiring algorithm. In our experiments, three synthetic datasets are used to test our algorithms; the experimental results show that our algorithms perform efficiently over a data stream. Full article
(This article belongs to the Section Information Processes)
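The Pareto dominance test underlying any skyline query, which g-dominance between groups generalizes, can be sketched directly. This naive quadratic version is only illustrative; the paper's point-arriving and point-expiring algorithms instead update the result incrementally as the stream window slides.

```python
def dominates(p, q):
    # p dominates q when p is no worse in every dimension and strictly
    # better in at least one (smaller-is-better convention here).
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    # Naive skyline: keep the points no other point dominates. G-Skyline
    # lifts this test from single points to fixed-size groups of points.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Over a stream, recomputing this from scratch on every arrival or expiry is wasteful, which is exactly the cost the paper's sharing strategy avoids.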

Open Access Article Realizing Loose Communication with Tangible Avatar to Facilitate Recipient’s Imagination
Information 2018, 9(2), 32; https://doi.org/10.3390/info9020032
Received: 31 December 2017 / Revised: 26 January 2018 / Accepted: 29 January 2018 / Published: 1 February 2018
Cited by 1 | PDF Full-text (2352 KB) | HTML Full-text | XML Full-text
Abstract
Social network services (SNSs) allow users to share their daily experiences and significant life events with family, friends, and colleagues. However, excessive use of SNSs or dependence upon them can cause a problem known as “SNS fatigue” that is associated with feelings of anxiety and loneliness. In other words, the tighter and stronger the social bonds are through SNSs, the more users feel anxiety and loneliness. We propose a method for providing users with a sense of security and connectedness with others by facilitating loose communication. Loose communication is defined by the presentation of abstract information and passive (one-way) communication. By focusing on the physicality and anthropomorphic characteristics of tangible avatars, we investigated a communication support system, Palco, that displays three types of contextual information with respect to the communication partner—emotional state, activity, and location—in a loose manner. Our approach contrasts with typical SNS interaction methods characterized by tight communication with interactivity and concrete information. This paper describes the design and implementation of Palco, as well as its usefulness as a communication tool. The emotional effects on the users are evaluated through a user study with 10 participants over four days. The results imply that Palco can effectively communicate the context of the communication partner, and provide a sense of security. Full article
(This article belongs to the Special Issue Context Awareness)

Open Access Feature Paper Article Mobile Mixed Reality for Experiential Learning and Simulation in Medical and Health Sciences Education
Information 2018, 9(2), 31; https://doi.org/10.3390/info9020031
Received: 19 December 2017 / Revised: 8 January 2018 / Accepted: 27 January 2018 / Published: 31 January 2018
PDF Full-text (2297 KB) | HTML Full-text | XML Full-text
Abstract
New accessible learning methods delivered through mobile mixed reality are becoming possible in education, shifting pedagogy from the use of two-dimensional images and videos to facilitating learning via interactive mobile environments. This is especially important in medical and health education, where the required knowledge acquisition is typically much more experiential, self-directed, and hands-on than in many other disciplines. Presented are insights obtained from the implementation and testing of two mobile mixed reality interventions across two Australian higher education classrooms in medicine and health sciences, concentrating on student perceptions of mobile mixed reality for learning physiology and anatomy in a face-to-face medical and health science classroom, and on skills acquisition in airways management, focusing on direct laryngoscopy with foreign body removal, in a distance paramedic science classroom. This work is unique because most studies focus on a single discipline and a single delivery modality, examining either skills or the learner experience, rather than linking cross-discipline knowledge acquisition and the development of tangible student skills across multimodal classrooms. Outcomes are presented from post-intervention student interviews and discipline academic observation, which highlight improvements in learner motivation and skills but also demonstrate pedagogical challenges to overcome in mobile mixed reality learning. Full article
(This article belongs to the Special Issue Serious Games and Applications for Health (SeGAH 2017))

Open Access Article UPCaD: A Methodology of Integration Between Ontology-Based Context-Awareness Modeling and Relational Domain Data
Information 2018, 9(2), 30; https://doi.org/10.3390/info9020030
Received: 20 December 2017 / Revised: 21 January 2018 / Accepted: 26 January 2018 / Published: 30 January 2018
Cited by 1 | PDF Full-text (13933 KB) | HTML Full-text | XML Full-text
Abstract
Context-awareness is a key feature for applications in ubiquitous computing scenarios. Technologies and methodologies have been proposed for the integration of context-awareness concepts in intelligent information systems, adapting the execution of services, user interfaces and data retrieval. Recent research has proposed conceptual modeling alternatives for integrating domain modeling in RDBMSs with context-awareness modeling, described using highly expressive ontologies. The present work describes the UPCaD (Unified Process for Integration between Context-Awareness and Domain) methodology, which is composed of formalisms and processes to guide data integration considering RDBMS and context modeling. The methodology was evaluated in a virtual learning environment application. The evaluation shows the possibility of using a highly expressive context ontology to filter a relational data query and discusses the main contributions of the methodology compared with recent approaches. Full article
(This article belongs to the Special Issue Context Awareness)

Open Access Article Pedagogy before Technology: A Design-Based Research Approach to Enhancing Skills Development in Paramedic Science Using Mixed Reality
Information 2018, 9(2), 29; https://doi.org/10.3390/info9020029
Received: 21 December 2017 / Revised: 19 January 2018 / Accepted: 23 January 2018 / Published: 29 January 2018
PDF Full-text (2079 KB) | HTML Full-text | XML Full-text
Abstract
In health sciences education, there is growing evidence that simulation improves learners’ safety, competence, and skills, especially when compared to traditional didactic methods or no simulation training. However, this approach becomes difficult when students are studying at a distance, leading to the need to develop simulations that suit this pedagogical problem and the logistics of this intervention method. This paper describes the use of a design-based research (DBR) methodology, combined with a new model for putting ‘pedagogy before technology’ when approaching these types of education problems, to develop a mixed reality education solution. This combined model is used to analyse a classroom learning problem in paramedic health sciences with respect to student evidence, assisting the educational designer to identify a solution, and subsequently to develop a technology-based mixed reality simulation via a mobile phone application and three-dimensional (3D) printed tools that provide an analogue approximation of an on-campus simulation experience. The developed intervention was tested with students and refined through a repeat of the process, showing that a DBR process, supported by a model that puts ‘pedagogy before technology’, can produce, over several iterations, a much-improved simulation that satisfies student pedagogical needs. Full article
(This article belongs to the Special Issue Serious Games and Applications for Health (SeGAH 2017))
Open AccessArticle Experimental Analysis of Stemming on Jurisprudential Documents Retrieval
Information 2018, 9(2), 28; https://doi.org/10.3390/info9020028
Received: 3 January 2018 / Revised: 24 January 2018 / Accepted: 25 January 2018 / Published: 27 January 2018
PDF Full-text (3612 KB) | HTML Full-text | XML Full-text
Abstract
Stemming algorithms are commonly used during the textual preprocessing phase in order to reduce data dimensionality. However, this reduction achieves different levels of efficacy depending on the domain to which it is applied. Thus, for instance, there are reports in the literature that show the effect of stemming when applied to dictionaries or to textual bases of news. On the other hand, we have not found any studies analyzing the impact of stemming on Brazilian judicial jurisprudence, composed of decisions handed down by the judiciary, a fundamental instrument for law professionals to perform their role. Thus, this work presents two complete experiments, showing the results obtained through the analysis and evaluation of stemmers applied to real jurisprudential documents originating from the Court of Justice of the State of Sergipe. In the first experiment, the results showed that, among the analyzed algorithms, RSLP (Removedor de Sufixos da Lingua Portuguesa) had the greatest capacity for dimensionality reduction of the data. In the second, through the evaluation of the stemming algorithms on legal document retrieval, RSLP-S (Removedor de Sufixos da Lingua Portuguesa Singular) and UniNE (University of Neuchâtel), two less aggressive stemmers, presented the best cost-benefit ratio, since they reduced the dimensionality of the data and increased the effectiveness of the information retrieval evaluation metrics in one of the analyzed collections. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
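The kind of dimensionality reduction measured in the first experiment can be illustrated with a toy suffix stripper. The rules and the sample vocabulary below are invented for this sketch and are far simpler than the actual RSLP algorithm, which applies ordered rule groups with exception lists:

```python
# Toy illustration of suffix-stripping stemming and the vocabulary
# (dimensionality) reduction it yields. RULES is a tiny, hypothetical
# subset of Portuguese suffix rewrites, not the real RSLP rule set.

RULES = [("mente", ""), ("ões", "ão"), ("ais", "al"), ("s", "")]

def toy_stem(word: str) -> str:
    """Apply the first matching rule, keeping a stem of >= 3 characters."""
    for suffix, replacement in RULES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)] + replacement
    return word

def vocabulary_reduction(terms) -> float:
    """Fraction by which stemming shrinks the number of distinct terms."""
    before = len(set(terms))
    after = len({toy_stem(t) for t in terms})
    return 1 - after / before

terms = ["decisão", "decisões", "legal", "legais",
         "legalmente", "tribunal", "tribunais"]
```

On this sample, seven distinct surface forms collapse to three stems, a reduction of about 57%; more aggressive rule sets shrink the vocabulary further but, as the paper's second experiment indicates, can hurt retrieval effectiveness.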
Open AccessReview A Comparative Study of Web Content Management Systems
Information 2018, 9(2), 27; https://doi.org/10.3390/info9020027
Received: 15 December 2017 / Revised: 22 January 2018 / Accepted: 25 January 2018 / Published: 27 January 2018
PDF Full-text (2062 KB) | HTML Full-text | XML Full-text
Abstract
Web Content Management Systems (WCMS) play an increasingly important role in the Internet’s evolution. They are software platforms that facilitate the implementation of a website or an e-commerce platform, and they are gaining popularity due to their flexibility and ease of use. In this work, we explain from a tutorial perspective how to manage WCMS and what can be achieved by using them. With this aim, we select the most popular open-source WCMS, namely Joomla!, WordPress, and Drupal. Then, we implement three websites that are equal in terms of requirements, visual aspect, and functionality, one for each WCMS. Through a qualitative comparative analysis, we show the advantages and drawbacks of each solution, and the complexity associated with it. On the other hand, security concerns can arise if WCMS are not used appropriately. Due to the key position that they occupy in today’s Internet, we perform a basic security analysis of the three implemented websites in the second part of this work. Specifically, we explain vulnerabilities, security enhancements, which errors should be avoided, and which WCMS is initially the safest. Full article
(This article belongs to the Special Issue Selected Papers from WMNC 2017 and JITEL 2017)
Open AccessArticle Importance Degree Research of Safety Risk Management Processes of Urban Rail Transit Based on Text Mining Method
Information 2018, 9(2), 26; https://doi.org/10.3390/info9020026
Received: 29 November 2017 / Revised: 21 January 2018 / Accepted: 22 January 2018 / Published: 26 January 2018
PDF Full-text (3492 KB) | HTML Full-text | XML Full-text
Abstract
China’s urban rail transit (URT) construction is entering a stage of rapid development under the guidance of national policies. However, URT construction projects are high-risk projects, and construction safety accidents occur frequently. Presently, safety risk management is in continuous development. Unfortunately, due to deficiencies in risk data and the lack of established relationships between participants and safety risk factors, most research results cannot be applied well to URT projects. To overcome these limits, this paper applies the text mining method to safety risk analysis. Through word frequency analysis and cluster analysis, 15 safety risk factors and 3 participants are identified from 156 accident reports. In addition, an accident descriptive model is established, composed of indirect safety risk factors (management defects), direct safety risk factors, and participants. In this model, each accident is the standardized description of the corresponding accident information, which is useful for risk data accumulation and analysis. Then, network structure analysis and risk assessment methods are used to clarify 63 relationships among participants, management defects, and direct safety risk factors. Subsequently, the risk value of each relationship is evaluated, and this safety risk information is integrated into the accident descriptive model by using accident points. Finally, ABC analysis, a popular and effective method for classifying items into categories that can be managed and controlled separately, is used to identify the core (A), important (B), and general (C) processes of safety risk management in the accident descriptive model.
The research results show that the constructor should pay attention to construction coordination, safety specifications, safety measures, and personnel education; the supervisor should attach importance to timely communication; and the monitoring unit should pay attention to advanced forecasting and dynamic control. The main research contributions are as follows: (1) a method of obtaining risk data from unstructured content is provided; (2) the accident descriptive model can be used for continuous risk data accumulation; (3) the priorities of URT construction safety risk management are made clear. Full article
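The final ABC step can be sketched as a ranking by risk value followed by cumulative-share cuts. The process names, risk values, and the 70%/90% thresholds below are illustrative assumptions for this sketch, not figures taken from the paper:

```python
# Sketch of ABC classification: rank items by risk value, then assign
# class A to the items covering the first a_cut share of total risk,
# class B up to b_cut, and class C to the remainder.

def abc_classify(risk_values, a_cut=0.70, b_cut=0.90):
    """Map each item name to 'A', 'B', or 'C' by cumulative risk share."""
    total = sum(risk_values.values())
    ranked = sorted(risk_values.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0
    for name, value in ranked:
        cumulative += value
        if cumulative <= a_cut * total:
            classes[name] = "A"
        elif cumulative <= b_cut * total:
            classes[name] = "B"
        else:
            classes[name] = "C"
    return classes

# Hypothetical risk values for a handful of processes named in the abstract.
risks = {"construction coordination": 40, "safety measures": 25,
         "timely communication": 15, "advanced forecast": 10,
         "personnel education": 10}
```

Here the two highest-risk processes absorb 65% of the total risk and land in class A, so they would be managed as the core processes, while the tail falls into the general class C.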
Open AccessArticle Querying Workflow Logs
Information 2018, 9(2), 25; https://doi.org/10.3390/info9020025
Received: 29 November 2017 / Revised: 2 January 2018 / Accepted: 12 January 2018 / Published: 25 January 2018
Cited by 1 | PDF Full-text (953 KB) | HTML Full-text | XML Full-text
Abstract
A business process or workflow is an assembly of tasks that accomplishes a business goal. Business process management is the study of the design, configuration/implementation, enactment and monitoring, analysis, and re-design of workflows. The traditional methodology for the re-design and improvement of workflows relies on the well-known sequence of extract, transform, and load (ETL), data/process warehousing, and online analytical processing (OLAP) tools. In this paper, we study the ad hoc querying of process enactments for (data-centric) business processes, bypassing the traditional methodology for more flexibility in querying. We develop an algebraic query language based on “incident patterns” with four operators inspired by the Business Process Model and Notation (BPMN) representation, allowing the user to formulate ad hoc queries directly over workflow logs. A formal semantics of this query language, a preliminary query evaluation algorithm, and a group of elementary properties of the operators are provided. Full article
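As a rough sketch of formulating a query directly over a workflow log, the snippet below implements a single sequence-style pattern operator. The operator name `seq`, the log layout, and the task names are hypothetical and far simpler than the paper's BPMN-inspired algebra of incident patterns:

```python
# Minimal sketch of querying workflow enactments directly from a log.
# Each enactment (trace) is a list of (task, data) events; seq() keeps
# the traces in which the given tasks occur in order, not necessarily
# contiguously (a subsequence match).

def seq(log, *tasks):
    def matches(trace):
        it = iter(task for task, _ in trace)
        # "t in it" advances the iterator, so all tasks must appear in order.
        return all(t in it for t in tasks)
    return [trace for trace in log if matches(trace)]

# Hypothetical log of two enactments of an order-handling process.
log = [
    [("receive", {}), ("approve", {"amount": 100}), ("ship", {})],
    [("receive", {}), ("reject", {})],
]
```

For example, `seq(log, "receive", "ship")` returns only the first enactment, since the second never reaches the shipping task; composing several such operators would give the flavor of the algebraic queries studied in the paper.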