Informatics, Volume 5, Issue 1 (March 2018) – 15 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
21 pages, 8757 KiB  
Article
A Smart Sensor Data Transmission Technique for Logistics and Intelligent Transportation Systems
by Kyunghee Sun and Intae Ryoo
Informatics 2018, 5(1), 15; https://doi.org/10.3390/informatics5010015 - 16 Mar 2018
Cited by 7 | Viewed by 11450
Abstract
In Internet of Things (IoT) systems that include both a logistics system and an intelligent transportation system, smart sensors are key elements for collecting useful information whenever and wherever necessary. This study proposes the Smart Sensor Node Group Management Medium Access Control Scheme, designed to group smart sensor devices and collect data from them efficiently. The proposed scheme groups the portable sensor devices connected to a system according to their distance from the sink node and transmits data by setting a different buffer threshold for each group. This method reduces the energy consumption of sensor devices located near the sink node and enhances the overall energy efficiency of the IoT system. When a sensor device is moved and thus becomes unable to transmit data, it is allocated to a new group so that it can continue transmitting data to the sink node.
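
The grouping idea can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' scheme: the distance boundaries, the buffer-threshold policy and the node/sink classes below are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    node_id: str
    distance_m: float                      # distance from the sink node, in metres
    buffer: list = field(default_factory=list)

class Sink:
    def receive(self, node_id, readings):
        print(f"sink <- {node_id}: {len(readings)} readings")

def assign_group(node, boundaries=(50.0, 100.0, 200.0)):
    """Group index by distance from the sink: nearer nodes get lower indices."""
    for idx, limit in enumerate(boundaries):
        if node.distance_m <= limit:
            return idx
    return len(boundaries)

def buffer_threshold(group_idx, num_groups=4, base=4):
    # Hypothetical policy: groups nearer the sink (lower index) buffer more
    # readings before transmitting, so the nodes that also relay traffic for
    # farther groups spend less energy on their own uplink transmissions.
    return base * (num_groups - group_idx)

def maybe_transmit(node, sink):
    """Send the buffered readings only once the group's threshold is reached."""
    if len(node.buffer) >= buffer_threshold(assign_group(node)):
        sink.receive(node.node_id, node.buffer)
        node.buffer.clear()

sink, node = Sink(), SensorNode("n07", distance_m=35.0)
for reading in range(40):                  # simulate periodic sensing
    node.buffer.append(reading)
    maybe_transmit(node, sink)
```

Reassigning a node that has moved would then simply mean recomputing assign_group with its new distance, as the abstract describes.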

25 pages, 1576 KiB  
Article
Utilizing Provenance in Reusable Research Objects
by Zhihao Yuan, Dai Hai Ton That, Siddhant Kothari, Gabriel Fils and Tanu Malik
Informatics 2018, 5(1), 14; https://doi.org/10.3390/informatics5010014 - 8 Mar 2018
Cited by 21 | Viewed by 8759
Abstract
Science is conducted collaboratively, often requiring the sharing of knowledge about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets; more often it also includes software, its past executions, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregation and identification of diverse elements of computational experiments. While necessary, mere aggregation is not sufficient for sharing computational experiments. Other users must be able to easily recompute on these shared research objects. Computational provenance is often the key to enabling such reuse. In this paper, we show how reusable research objects can utilize provenance to correctly repeat a previous reference execution, to construct a subset of a research object for partial reuse, and to reuse existing contents of a research object for modified reuse. We describe two methods to summarize provenance that aid in understanding the contents and past executions of a research object. The first method obtains a process-view by collapsing low-level system information, and the second method obtains a summary graph by grouping related nodes and edges with the goal of obtaining a graph view similar to the application workflow. Through detailed experiments, we show the efficacy and efficiency of our algorithms.
(This article belongs to the Special Issue Using Computational Provenance)
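
As a rough illustration of the process-view idea (collapsing low-level system information out of a provenance graph), the following Python sketch filters system-level file nodes out of a small edge list. The edge list and the is_low_level rule are invented for the example; this is not the authors' summarization algorithm.

```python
from collections import defaultdict

# A provenance graph as directed edges (source, target),
# e.g. a process reading or writing a file.
edges = [
    ("run_analysis.py", "/usr/lib/libc.so"),      # low-level dependency
    ("run_analysis.py", "input.csv"),
    ("run_analysis.py", "results.csv"),
    ("plot.py", "results.csv"),
    ("plot.py", "/usr/lib/libpng.so"),            # low-level dependency
    ("plot.py", "figure1.png"),
]

def is_low_level(node):
    """Hypothetical rule: treat system library paths as low-level detail."""
    return node.startswith("/usr/lib/")

def process_view(edges):
    """Drop low-level nodes and their incident edges, yielding a summary
    that only relates application processes and user data."""
    summary = defaultdict(set)
    for src, dst in edges:
        if is_low_level(src) or is_low_level(dst):
            continue
        summary[src].add(dst)
    return summary

for proc, files in process_view(edges).items():
    print(proc, "->", sorted(files))
```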

19 pages, 1124 KiB  
Article
A Novel Three-Stage Filter-Wrapper Framework for miRNA Subset Selection in Cancer Classification
by Mohammad Bagher Dowlatshahi, Vali Derhami and Hossein Nezamabadi-pour
Informatics 2018, 5(1), 13; https://doi.org/10.3390/informatics5010013 - 1 Mar 2018
Cited by 28 | Viewed by 7577
Abstract
Micro-Ribonucleic Acids (miRNAs) are small non-coding Ribonucleic Acid (RNA) molecules that play an important role in cancer growth. There are many miRNAs in the human body, and not all of them are responsible for cancer growth. Therefore, there is a need for novel miRNA subset selection algorithms that remove irrelevant and redundant miRNAs and find the miRNAs responsible for cancer development. This paper proposes a novel three-stage miRNA subset selection framework for increasing cancer classification accuracy. In the first stage, multiple filter algorithms rank the miRNAs according to their relevance to the class label, and a miRNA pool is generated from the top-ranked miRNAs of each filter algorithm. In the second stage, we re-rank the miRNAs of the pool with multiple filter algorithms and use this ranking to weight the probability of selecting each miRNA. In the third stage, Competitive Swarm Optimization (CSO) tries to find an optimal subset of the weighted miRNAs in the pool, namely the subset that gives the most information about the cancer patients. It should be noted that the balance between exploration and exploitation in the proposed algorithm is maintained by a zero-order Fuzzy Inference System (FIS). Experiments on several miRNA cancer datasets indicate that the proposed three-stage framework performs well in terms of both a low cancer classification error rate and a small number of selected miRNAs.
(This article belongs to the Special Issue Biomedical Informatics)
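
The first two stages can be sketched roughly in Python with scikit-learn: several filter methods rank the features, their top candidates form a pool, and the pooled features are weighted for later sampling. The filter choices (f_classif and mutual_info_classif), the pool size and the rank-to-weight rule are assumptions made for illustration; the CSO wrapper search of the third stage is only indicated in a comment.

```python
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif

def rank_features(score_fn, X, y):
    """Return feature indices ordered from most to least relevant."""
    scores = score_fn(X, y)
    scores = scores[0] if isinstance(scores, tuple) else scores
    return np.argsort(scores)[::-1]

def build_pool(X, y, top_k=20):
    """Stage 1: union of the top-k features from each filter algorithm."""
    pool = set()
    for score_fn in (f_classif, mutual_info_classif):
        pool.update(rank_features(score_fn, X, y)[:top_k])
    return sorted(pool)

def weight_pool(X, y, pool):
    """Stage 2: re-rank the pooled features and turn the accumulated ranks
    into selection probabilities (better rank -> higher weight)."""
    ranks = np.zeros(len(pool))
    for score_fn in (f_classif, mutual_info_classif):
        order = rank_features(score_fn, X[:, pool], y)
        for position, idx in enumerate(order):
            ranks[idx] += position
    weights = 1.0 / (1.0 + ranks)          # hypothetical rank-to-weight rule
    return weights / weights.sum()

# Toy data: 60 samples, 200 candidate miRNAs, binary class label.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
pool = build_pool(X, y)
probs = weight_pool(X, y, pool)
# Stage 3 (not shown): a Competitive Swarm Optimization search samples
# subsets from the pool with these probabilities and evaluates each subset
# with a classifier (the wrapper step).
```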

18 pages, 379 KiB  
Article
Using Introspection to Collect Provenance in R
by Barbara Lerner, Emery Boose and Luis Perez
Informatics 2018, 5(1), 12; https://doi.org/10.3390/informatics5010012 - 1 Mar 2018
Cited by 10 | Viewed by 10385
Abstract
Data provenance is the history of an item of data from the point of its creation to its present state. It can support science by improving understanding of and confidence in data. RDataTracker is an R package that collects data provenance from R scripts (https://github.com/End-to-end-provenance/RDataTracker). In addition to details on inputs, outputs, and the computing environment collected by most provenance tools, RDataTracker also records a detailed execution trace and intermediate data values. It does this using R’s powerful introspection functions and by parsing R statements prior to sending them to the interpreter so it knows what provenance to collect. The provenance is stored in a specialized graph structure called a Data Derivation Graph, which makes it possible to determine exactly how an output value is computed or how an input value is used. In this paper, we provide details about the provenance RDataTracker collects and the mechanisms used to collect it. We also speculate about how this rich source of information could be used by other tools to help an R programmer gain a deeper understanding of the software used and to support reproducibility.
(This article belongs to the Special Issue Using Computational Provenance)
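
RDataTracker itself works inside R, but the notion of a Data Derivation Graph, linking each value to the operation that produced it and to the values that operation consumed, can be illustrated with a small Python sketch. The graph structure below is a simplification invented for this example, not RDataTracker's actual representation.

```python
class DerivationGraph:
    """Minimal derivation graph: edges run from an operation's inputs to the
    operation, and from the operation to the value it produced."""
    def __init__(self):
        self.edges = []

    def record(self, op_name, inputs, output_name, output_value):
        for name in inputs:
            self.edges.append((name, op_name))
        self.edges.append((op_name, output_name))
        return output_value

    def lineage(self, node, depth=0):
        """Print every node the given node was derived from, recursively."""
        for src, dst in self.edges:
            if dst == node:
                print("  " * depth + f"{node} <- {src}")
                self.lineage(src, depth + 1)

ddg = DerivationGraph()
raw = ddg.record("read_data", [], "raw", [3, 1, 4, 1, 5])
clean = ddg.record("drop_outliers", ["raw"], "clean", [v for v in raw if v < 5])
avg = ddg.record("mean", ["clean"], "avg", sum(clean) / len(clean))
ddg.lineage("avg")     # shows exactly how the output value was computed
```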

38 pages, 2623 KiB  
Article
LabelFlow Framework for Annotating Workflow Provenance
by Pinar Alper, Khalid Belhajjame, Vasa Curcin and Carole A. Goble
Informatics 2018, 5(1), 11; https://doi.org/10.3390/informatics5010011 - 23 Feb 2018
Cited by 6 | Viewed by 9171
Abstract
Scientists routinely analyse and share data for others to use. Successful data (re)use relies on having metadata describing the context in which the data were analysed. In many disciplines the creation of contextual metadata is referred to as reporting. One method of implementing analyses is with workflows. A stand-out feature of workflows is their ability to record provenance from executions. Provenance is useful when analyses are executed with changing parameters (changing contexts) and results need to be traced to the respective parameters. In this paper we investigate whether provenance can be exploited to support reporting. Specifically, we outline a case study based on a real-world workflow and set of reporting queries. We observe that provenance, as collected from workflow executions, is of limited use for reporting, as it supports the queries only partially. We identify that this is due to the generic nature of provenance and its lack of domain-specific contextual metadata. We observe that the required information is available in implicit form, embedded in the data. We describe LabelFlow, a framework comprising four Labelling Operators for decorating provenance with domain-specific Labels. LabelFlow can be instantiated for a domain by plugging in domain-specific metadata extractors. We provide a tool that takes a workflow as input and produces as output a Labelling Pipeline for that workflow, composed of Labelling Operators. We revisit the case study and show how Labels provide a more complete implementation of the reporting queries.
(This article belongs to the Special Issue Using Computational Provenance)
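
The general shape of a labelling step, a pluggable metadata extractor applied over provenance records to attach domain-specific labels, can be sketched as follows in Python. The record fields, the extractor and the label vocabulary are invented for illustration and do not reproduce LabelFlow's four operators.

```python
def tissue_extractor(record):
    """Hypothetical domain-specific extractor: pull a tissue label out of the
    data item's own content (making implicit metadata explicit)."""
    for token in record["content"].split():
        if token.startswith("tissue="):
            return {"tissue": token.split("=", 1)[1]}
    return {}

def label_provenance(records, extractor):
    """A minimal 'labelling operator': decorate each provenance record with
    the labels produced by the plugged-in extractor."""
    for record in records:
        record.setdefault("labels", {}).update(extractor(record))
    return records

provenance = [
    {"activity": "normalise", "content": "sample_42 tissue=liver"},
    {"activity": "normalise", "content": "sample_43 tissue=kidney"},
]
for rec in label_provenance(provenance, tissue_extractor):
    print(rec["activity"], rec["labels"])
```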

22 pages, 9256 KiB  
Article
From Offshore Operation to Onshore Simulator: Using Visualized Ethnographic Outcomes to Work with Systems Developers
by Yushan Pan and Sisse Finken
Informatics 2018, 5(1), 10; https://doi.org/10.3390/informatics5010010 - 9 Feb 2018
Cited by 3 | Viewed by 7594
Abstract
This paper focuses on the process of translating insights from a Computer Supported Cooperative Work (CSCW)-based study, conducted on a vessel at sea, into a model that can assist systems developers working with simulators, which are used by vessel operators for training purposes on land. That is, the empirical study at sea brought about rich insights into cooperation, which is important for systems developers to know about and consider in their designs. In the paper, we establish a model that primarily consists of a ‘computational artifact’. The model is designed to support researchers working with systems developers. Drawing on marine examples, we focus on the translation process and investigate how the model serves to visualize work activities; how it addresses relations between technical and computational artifacts, as well as between functions in technical systems and functionalities in cooperative systems. In turn, we link design back to fieldwork studies.

17 pages, 976 KiB  
Article
Bus Operations Scheduling Subject to Resource Constraints Using Evolutionary Optimization
by Konstantinos Gkiotsalitis and Rahul Kumar
Informatics 2018, 5(1), 9; https://doi.org/10.3390/informatics5010009 - 6 Feb 2018
Cited by 8 | Viewed by 10918
Abstract
In public transport operations, vehicles tend to bunch together due to the instability of passenger demand and traffic conditions. Fluctuations in the expected waiting times of passengers at bus stops due to bus bunching are perceived as service unreliability and degrade the overall quality of service. For assessing the performance of high-frequency bus services, transportation authorities monitor the daily operations via Transit Management Systems (TMS) that collect vehicle positioning information in near real time. This work explores the potential of using Automated Vehicle Location (AVL) data from the running vehicles to generate bus schedules that improve service reliability and conform to various regulatory constraints. The computer-aided generation of optimal bus schedules is a tedious task due to the nonlinear and multi-variable nature of the bus scheduling problem. For this reason, this work develops a two-level approach where (i) the regulatory constraints are satisfied and (ii) the waiting times of passengers are optimized with the introduction of an evolutionary algorithm. This work also discusses the experimental results from the implementation of this approach on a bi-directional bus line operated by a major bus operator in northern Europe.
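
As an illustration of the two-level structure (first satisfy regulatory constraints, then let an evolutionary search reduce passenger waiting times), here is a minimal Python sketch. The headway bounds, the mutation scheme and the waiting-time formula for uniformly random passenger arrivals are assumptions made for the example, not the algorithm used in the paper.

```python
import random

def repair(departures, min_headway=4, max_headway=15):
    """Level one: clamp every headway to hypothetical regulatory bounds."""
    fixed = [departures[0]]
    for t in departures[1:]:
        headway = min(max(t - fixed[-1], min_headway), max_headway)
        fixed.append(fixed[-1] + headway)
    return fixed

def expected_waiting(departures):
    """For uniformly random passenger arrivals, the expected wait equals the
    sum of squared headways divided by twice the total headway."""
    headways = [b - a for a, b in zip(departures, departures[1:])]
    return sum(h * h for h in headways) / (2 * sum(headways))

def evolve(n_buses=10, generations=200, pop_size=30):
    """Level two: a simple evolutionary search over departure times."""
    pop = [repair(sorted(random.uniform(0, 120) for _ in range(n_buses)))
           for _ in range(pop_size)]
    for _ in range(generations):
        parent = min(pop, key=expected_waiting)
        child = repair(sorted(t + random.gauss(0, 2) for t in parent))
        pop[random.randrange(pop_size)] = child   # replace a random member
    return min(pop, key=expected_waiting)

best = evolve()
print([round(t, 1) for t in best], round(expected_waiting(best), 2))
```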

26 pages, 5280 KiB  
Article
Embracing First-Person Perspectives in Soma-Based Design
by Kristina Höök, Baptiste Caramiaux, Cumhur Erkut, Jodi Forlizzi, Nassrin Hajinejad, Michael Haller, Caroline C. M. Hummels, Katherine Isbister, Martin Jonsson, George Khut, Lian Loke, Danielle Lottridge, Patrizia Marti, Edward Melcer, Florian Floyd Müller, Marianne Graves Petersen, Thecla Schiphorst, Elena Márquez Segura, Anna Ståhl, Dag Svanæs, Jakob Tholander and Helena Tobiasson
Informatics 2018, 5(1), 8; https://doi.org/10.3390/informatics5010008 - 1 Feb 2018
Cited by 148 | Viewed by 18553
Abstract
A set of prominent designers embarked on a research journey to explore aesthetics in movement-based design. Here we unpack one of the design sensitivities unique to our practice: a strong first-person perspective, in which the movements, somatics and aesthetic sensibilities of the designer, design researcher and user are at the forefront. We present an annotated portfolio of design exemplars and a brief introduction to some of the design methods and theory we use, together substantiating and explaining the first-person perspective. At the same time, we show how this felt dimension, despite its subjective nature, is what provides rigor and structure to our design research. Our aim is to assist researchers in soma-based design and designers wanting to consider the multiple facets of designing for the aesthetics of movement. The applications span a large field of designs, including slow, introspective and contemplative interactions, arts, dance, health applications, games, work applications and many others.
(This article belongs to the Special Issue Tangible and Embodied Interaction)

34 pages, 5496 KiB  
Article
Internet of Tangible Things (IoTT): Challenges and Opportunities for Tangible Interaction with IoT
by Leonardo Angelini, Elena Mugellini, Omar Abou Khaled and Nadine Couture
Informatics 2018, 5(1), 7; https://doi.org/10.3390/informatics5010007 - 25 Jan 2018
Cited by 39 | Viewed by 11882
Abstract
In the Internet of Things (IoT) era, an increasing number of everyday objects are able to offer innovative services to the user. However, most of these devices provide only smartphone or web user interfaces. As a result, the interaction is disconnected from the physical world, degrading the user experience and increasing the risk of user alienation from the physical world. We argue that tangible interaction can counteract this trend, and this article discusses the potential benefits and the still-open challenges of tangible interaction applied to the Internet of Things. After an analysis of open challenges for Human-Computer Interaction in IoT, we summarize current trends in tangible interaction and extrapolate eight tangible interaction properties that could be exploited for designing novel interactions with IoT objects. Through a systematic review of tangible interaction applied to IoT, we show what has already been explored in the systems that pioneered the field and the future explorations that still have to be conducted. To guide future work in this field, we propose a design card set for supporting the generation of tangible interfaces for IoT objects. The card set was evaluated during a workshop with 21 people, and the results are discussed.
(This article belongs to the Special Issue Tangible and Embodied Interaction)

25 pages, 791 KiB  
Article
A Hybrid Approach to Recognising Activities of Daily Living from Object Use in the Home Environment
by Isibor Kennedy Ihianle, Usman Naeem, Syed Islam and Abdel-Rahman Tawil
Informatics 2018, 5(1), 6; https://doi.org/10.3390/informatics5010006 - 13 Jan 2018
Cited by 13 | Viewed by 8330
Abstract
Accurate recognition of Activities of Daily Living (ADL) plays an important role in providing assistance and support to the elderly and cognitively impaired. Current knowledge-driven and ontology-based techniques model object concepts from assumptions and everyday common knowledge of object use for routine activities. Modelling activities from such information can lead to incorrect recognition of particular routine activities, resulting in possible failure to detect abnormal activity trends. In cases where such prior knowledge is not available, these techniques become virtually unusable. A significant step in the recognition of activities is the accurate discovery of object usage for specific routine activities. This paper presents a hybrid framework for the automatic consumption of sensor data and the association of object usage with routine activities using Latent Dirichlet Allocation (LDA) topic modelling. This process enables the recognition of simple activities of daily living from object usage and interactions in the home environment. The evaluation of the proposed framework on the Kasteren and Ordonez datasets shows that it yields better results than existing techniques.
(This article belongs to the Special Issue Sensor-Based Activity Recognition and Interaction)
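
To make the LDA step concrete, the sketch below treats each time window of object-use events as a "document" and lets scikit-learn's LatentDirichletAllocation group co-occurring objects into latent topics, which could then be mapped to activities. The object vocabulary and window contents are invented, and this is not the authors' pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each 'document' is a time window of object-use events from home sensors
# (object names are made up for illustration).
windows = [
    "kettle cup fridge spoon",
    "kettle cup spoon sugar_jar",
    "toothbrush tap towel",
    "tap towel toothbrush",
    "pan fridge stove plate fork",
    "stove pan plate fridge",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(windows)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Each latent topic groups objects that tend to be used together and can be
# associated (by hand or with labelled data) with an activity such as
# making tea, brushing teeth or cooking dinner.
vocab = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {topic_idx}: {top}")
```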

3 pages, 230 KiB  
Editorial
Acknowledgement to Reviewers of Informatics in 2017
by Informatics Editorial Office
Informatics 2018, 5(1), 5; https://doi.org/10.3390/informatics5010005 - 11 Jan 2018
Viewed by 6167
Abstract
Peer review is an essential part of the publication process, ensuring that Informatics maintains high quality standards for its published papers. [...]
3 pages, 146 KiB  
Editorial
Ambient Assisted Living for Improvement of Health and Quality of Life—A Special Issue of the Journal of Informatics
by Shuai Zhang, Chris Nugent, Jens Lundström and Min Sheng
Informatics 2018, 5(1), 4; https://doi.org/10.3390/informatics5010004 - 9 Jan 2018
Cited by 7 | Viewed by 7768
Abstract
The demographic change with respect to the ageing of the population has been a worldwide trend. [...]
(This article belongs to the Special Issue Ambient Assisted Living for Improvement of Health and Quality of Life)
26 pages, 4735 KiB  
Article
An Adaptable System to Support Provenance Management for the Public Policy-Making Process in Smart Cities
by Barkha Javed, Zaheer Khan and Richard McClatchey
Informatics 2018, 5(1), 3; https://doi.org/10.3390/informatics5010003 - 8 Jan 2018
Cited by 10 | Viewed by 9177
Abstract
Government policies aim to address public issues and problems and therefore play a pivotal role in people’s lives. The creation of public policies, however, is complex, given the involvement of large and diverse groups of stakeholders, considerable human participation, lengthy processes, complex task specification and the non-deterministic nature of the process. The inherent complexities of the policy process pose challenges for designing a computing system that assists in supporting and automating the business process pertaining to policy setup, and they also raise concerns for setting up a tracking service in the policy-making environment. A tracking service informs how decisions have been taken during policy creation and can provide useful and intrinsic information regarding the policy process. At present, there exists no computing system that assists in tracking the complete process that has been employed for policy creation. To design such a system, it is important to consider the challenges of the policy environment; for this purpose, a novel network- and goal-based approach has been framed and is covered in detail in this paper. Furthermore, smart governance objectives that include stakeholders’ participation and citizens’ involvement have been considered. Thus, the proposed approach has been devised by considering smart governance principles and the knowledge environment of policy making, where tasks are largely dependent on policy makers’ decisions and on individual policy objectives. Our approach takes the human dimension into account when deciding and defining autonomous process activities at run time. Furthermore, the network-based approach employs provenance data tracking, which enables the capture of the policy process.
(This article belongs to the Special Issue Smart Government in Smart Cities)
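
To give a flavour of what tracking the policy process might involve, the sketch below records each policy-making step together with its actor and the artefacts it used, in a loosely PROV-style structure. The activities, actors and fields are invented examples, not the system described in the paper.

```python
from datetime import datetime, timezone

policy_trace = []

def record_decision(actor, activity, used, generated):
    """Append one provenance entry for a step in the policy process."""
    policy_trace.append({
        "actor": actor,
        "activity": activity,
        "used": list(used),
        "generated": generated,
        "at": datetime.now(timezone.utc).isoformat(),
    })

record_decision("policy_maker_A", "define_objective",
                used=["citizen_survey_2017"], generated="objective_v1")
record_decision("working_group", "draft_policy",
                used=["objective_v1", "stakeholder_feedback"],
                generated="draft_v1")

# Answering "how was draft_v1 arrived at?" means walking back through the
# entries whose generated artefacts appear in later 'used' lists.
for entry in policy_trace:
    print(entry["activity"], "->", entry["generated"])
```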

22 pages, 1792 KiB  
Article
Modeling and Application of Customer Lifetime Value in Online Retail
by Pavel Jasek, Lenka Vrana, Lucie Sperkova, Zdenek Smutny and Marek Kobulsky
Informatics 2018, 5(1), 2; https://doi.org/10.3390/informatics5010002 - 6 Jan 2018
Cited by 23 | Viewed by 16586
Abstract
This article provides an empirical statistical analysis and discussion of the predictive abilities of selected customer lifetime value (CLV) models that could be used in online shopping within e-commerce business settings. The comparison of CLV predictive abilities, using selected evaluation metrics, is made on selected CLV models: the Extended Pareto/NBD model (EP/NBD), a Markov chain model and a Status Quo model. For the comparison, the article uses six online store datasets with annual revenues on the order of tens of millions of euros. The EP/NBD model outperformed the other selected models on a majority of evaluation metrics and can be considered good and stable for non-contractual relations in online shopping. The implications of deploying the selected CLV models in practice, as well as suggestions for future research, are also discussed.
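
As a baseline illustration of the kind of comparison described, here is a minimal Python sketch of a Status Quo model, which simply projects each customer's past revenue rate forward and is scored with mean absolute error. The toy data and the exact model definition are assumptions for the example and may differ from those used in the article.

```python
def status_quo_clv(past_revenue, past_days, horizon_days=365):
    """Status Quo baseline: assume each customer's future revenue rate
    equals the rate observed in the calibration window."""
    return [r / past_days * horizon_days for r in past_revenue]

def mean_absolute_error(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Toy example: revenue per customer over a 180-day calibration window,
# compared with what the same customers actually spent in the next year.
past = [120.0, 0.0, 45.5, 300.0]
future_actual = [250.0, 10.0, 80.0, 560.0]
predicted = status_quo_clv(past, past_days=180)
print(round(mean_absolute_error(predicted, future_actual), 2))
```

The same evaluation loop would apply to the probabilistic models: each model produces a per-customer prediction, and the metrics are compared across models.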

3792 KiB  
Article
Designing towards the Unknown: Engaging with Material and Aesthetic Uncertainty
by Danielle Wilde and Jenny Underwood
Informatics 2018, 5(1), 1; https://doi.org/10.3390/informatics5010001 - 26 Dec 2017
Cited by 8 | Viewed by 9060
Abstract
New materials with new capabilities demand new ways of approaching design. Destabilising existing methods is crucial to developing new methods. Yet radical destabilisation, where outcomes remain unknown long enough that new discoveries become possible, is not easy in technology design, where complex interdisciplinary teams with time and resource constraints need to deliver concrete outcomes on schedule. The Poetic Kinaesthetic Interface project (PKI) engages with this problem directly. In PKI we use unfolding processes, informed by participatory, speculative and critical design, in emergent actions to design towards unknown outcomes, using unknown materials. The impossibility of this task is proving as useful as it is disruptive. At its most potent, it is destabilising expectations, aesthetics and processes. Keeping the researchers, collaborators and participants in a state of unknowing is opening the research potential to far-ranging possibilities. In this article we unpack the motivations driving the PKI project. We present our mixed methodology, which entangles textile crafts, design interactions and materiality to shape an embodied enquiry. Our research outcomes are procedural and methodological. PKI brings together diverse human, non-human, known and unknown actors to discover where the emergent assemblages might lead. Our approach is re-invigorating the design process, as it demands that the process be re-envisioned.
(This article belongs to the Special Issue Tangible and Embodied Interaction)