Information Technology: New Generations (ITNG 2017)

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (23 February 2018) | Viewed by 41344

Special Issue Editors


Dr. Shahram Latifi
Guest Editor
Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, NV, USA
Interests: image processing; data and image compression; gaming and statistics; information coding; sensor networks; reliability; applied graph theory; biometrics; bio-surveillance; computer networks; fault-tolerant computing; parallel processing; interconnection networks

Dr. Doina Bein
Guest Editor
Department of Computer Science, California State University, Fullerton, CA, USA
Interests: automatic dynamic decision-making; computational sensing; distributed algorithms; energy-efficient wireless networks; fault-tolerant data structures; fault-tolerant network coverage; graph embedding; multi-modal sensor fusion; randomized algorithms; routing and broadcasting in wireless networks; secure network communication; self-stabilizing algorithms; self-organizing ad hoc networks; supervised machine learning; urban sensor networks; wireless sensor networks

Special Issue Information

Dear Colleagues,

Information proposes a Special Issue on “Information Technology: New Generations” (ITNG). Contributors are invited to submit original papers dealing with state-of-the-art technologies pertaining to digital information and communications for publication in this Special Issue of the journal. Papers should be submitted to the Guest Editors by email: [email protected]. Please follow the journal's instructions for authors regarding the number of pages and the page formatting. The research papers should reach us no later than 30 June 2017.

Dr. Shahram Latifi
Dr. Doina Bein
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Networking and wireless communications
  • Internet of Things (IoT)
  • Software Defined Networking
  • Cyber Physical Systems
  • Machine learning
  • Robotics
  • High performance computing
  • Software engineering and testing
  • Cybersecurity and privacy
  • Big Data
  • Cryptography
  • E-health
  • Sensor networks
  • Algorithms
  • Education

Published Papers (8 papers)


Research


9 pages, 1677 KiB  
Article
Multiple Congestion Points and Congestion Reaction Mechanisms for Improving DCTCP Performance in Data Center Networks
by Prasanthi Sreekumari
Information 2018, 9(6), 139; https://doi.org/10.3390/info9060139 - 08 Jun 2018
Cited by 3 | Viewed by 3356
Abstract
To address problems such as long delays, latency fluctuations, and frequent timeouts experienced by conventional Transmission Control Protocol (TCP) in data center environments, Data Center TCP (DCTCP) has been proposed as a TCP replacement that satisfies the requirements of data center networks. It has gained popularity in both academia and industry owing to its high throughput and low latency, and it is widely deployed in data centers. However, recent research on the performance of DCTCP has found that the sender's congestion window often shrinks to a single segment, which results in timeouts. In addition, the nonlinear marking mechanism of DCTCP causes severe queue oscillation, which results in low throughput. To address these issues, we propose multiple congestion points using a double threshold and congestion reaction using window adjustment (DT-CWA) to improve the performance of DCTCP by reducing the number of timeouts. The results of a series of simulations of a typical data center network topology in the QualNet network simulator demonstrate that the proposed window-based solution significantly reduces timeouts and noticeably improves throughput compared to DCTCP under various network conditions. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
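A rough, hedged sketch of the kind of mechanism the abstract describes: a switch-side double-threshold marking rule combined with a DCTCP-style, window-based reaction at the sender. The threshold values, the linear interpolation between them, and the two-segment floor are illustrative assumptions, not parameters taken from the paper.

```python
# Hypothetical sketch of double-threshold ECN marking and a DCTCP-style
# sender reaction; parameter values are illustrative, not from the paper.

K_MIN, K_MAX = 20, 65          # queue-length thresholds (packets), assumed
G = 1.0 / 16                   # EWMA gain used by standard DCTCP

def mark_probability(queue_len: int) -> float:
    """Double-threshold marking: no marks below K_MIN, always mark above K_MAX,
    linearly increasing in between (an assumed interpolation)."""
    if queue_len <= K_MIN:
        return 0.0
    if queue_len >= K_MAX:
        return 1.0
    return (queue_len - K_MIN) / (K_MAX - K_MIN)

class DctcpLikeSender:
    def __init__(self, cwnd: float = 10.0):
        self.cwnd = cwnd       # congestion window in segments
        self.alpha = 0.0       # EWMA estimate of the marked fraction

    def on_ack_window(self, marked: int, acked: int) -> None:
        """Per-window reaction: shrink cwnd in proportion to the estimated
        congestion level instead of halving it, never below 2 segments
        (one way to avoid the one-segment collapse noted in the abstract)."""
        frac = marked / max(acked, 1)
        self.alpha = (1 - G) * self.alpha + G * frac
        if marked:
            self.cwnd = max(2.0, self.cwnd * (1 - self.alpha / 2))
        else:
            self.cwnd += 1.0   # additive increase when no marks were seen

sender = DctcpLikeSender()
sender.on_ack_window(marked=3, acked=10)
print(round(sender.cwnd, 2))
```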

14 pages, 3106 KiB  
Article
Hadoop Cluster Deployment: A Methodological Approach
by Ronaldo Celso Messias Correia, Gabriel Spadon, Pedro Henrique De Andrade Gomes, Danilo Medeiros Eler, Rogério Eduardo Garcia and Celso Olivete Junior
Information 2018, 9(6), 131; https://doi.org/10.3390/info9060131 - 29 May 2018
Cited by 2 | Viewed by 5377
Abstract
For a long time, data was treated as a general problem because it merely represented fractions of an event without any relevant purpose. The last decade, however, has been all about information and how to obtain it. Seeking meaning in data and trying to solve scalability problems, many frameworks have been developed to improve data storage and analysis. Hadoop, in particular, was presented as a powerful framework for dealing with large amounts of data. However, doubts remain about how to handle its deployment and whether there is a reliable method to compare the performance of distinct Hadoop clusters. This paper presents a methodology based on benchmark analysis to guide Hadoop cluster deployment. The experiments employed Apache Hadoop and the Hadoop distributions of Cloudera, Hortonworks, and MapR, analyzing the architectures both locally and in the cloud, using centralized and geographically distributed servers. The results show that the methodology can be applied to obtain a reliable comparison among different architectures. Additionally, the study suggests that the knowledge acquired can be used to improve the data analysis process through a better understanding of the Hadoop architecture. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
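As an illustration of a benchmark-driven comparison between Hadoop deployments, the sketch below times the TeraGen/TeraSort example jobs that ship with Hadoop on whichever cluster the client is configured against. The examples-jar path, the data size, and the cluster tag are assumptions; the benchmark suite actually used in the paper may differ.

```python
# Illustrative benchmark harness for comparing Hadoop deployments.
# The examples jar path and data size are assumptions; adjust to your install.
import subprocess, time

EXAMPLES_JAR = "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"  # assumed path
ROWS = 10_000_000  # 100-byte rows for TeraGen (about 1 GB); illustrative size

def run(cmd):
    """Run a command and return its wall-clock time in seconds."""
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start

def terasort_benchmark(tag):
    """Generate data, sort it, and report wall-clock times for one cluster."""
    gen = run(["hadoop", "jar", EXAMPLES_JAR, "teragen", str(ROWS), f"/bench/{tag}/in"])
    srt = run(["hadoop", "jar", EXAMPLES_JAR, "terasort", f"/bench/{tag}/in", f"/bench/{tag}/out"])
    return {"cluster": tag, "teragen_s": round(gen, 1), "terasort_s": round(srt, 1)}

if __name__ == "__main__":
    # Point HADOOP_CONF_DIR at each deployment (Apache, Cloudera, Hortonworks, MapR)
    # before invoking this script, then compare the recorded timings.
    print(terasort_benchmark("apache-local"))
```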

13 pages, 5858 KiB  
Article
Hybrid Visualization Approach to Show Documents Similarity and Content in a Single View
by Andre Luiz Dias Andreotti, Lenon Fachiano Silva and Danilo Medeiros Eler
Information 2018, 9(6), 129; https://doi.org/10.3390/info9060129 - 23 May 2018
Cited by 1 | Viewed by 4019
Abstract
Multidimensional projection techniques can be employed to project datasets from a higher- to a lower-dimensional space (e.g., a 2D space). These techniques present the relationships among dataset instances based on distance, grouping or separating clusters of instances in the projected space. Several works have used multidimensional projections to aid in the exploration of document collections. Even though projection techniques can organize a dataset, the user still needs to read each document to understand why the clusters were formed. Alternatively, techniques such as topic extraction or tag clouds can be employed to present a summary of the document contents. To minimize the exploratory work and to aid in cluster analysis, this work proposes a new hybrid visualization that shows both document relationships and content in a single view, combining multidimensional projections with tag clouds. We show the effectiveness of the proposed approach in the exploration of two document collections composed of world news. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
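A minimal sketch of the general idea, not the authors' implementation: project TF-IDF document vectors to 2D with a multidimensional projection technique (t-SNE here as a stand-in) and extract the highest-weighted terms of each group, which is the content a tag cloud would render alongside the projection.

```python
# Sketch: 2D projection of documents plus per-cluster top terms (tag-cloud input).
# Uses t-SNE and k-means as stand-ins for the projection/grouping steps.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

docs = ["stocks fell on inflation fears", "central bank raises interest rates",
        "team wins championship final", "player scores twice in derby"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Multidimensional projection to 2D (t-SNE here; other techniques would also work).
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(X.toarray())

# Group documents and pick the top TF-IDF terms of each group for the tag cloud.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
terms = np.array(vec.get_feature_names_out())
for c in range(2):
    centroid = X[labels == c].mean(axis=0).A1
    top = terms[np.argsort(centroid)[::-1][:3]]
    print(f"cluster {c}: top terms {list(top)}; points {coords[labels == c].round(1)}")
```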

13 pages, 1412 KiB  
Article
Analysis of Document Pre-Processing Effects in Text and Opinion Mining
by Danilo Medeiros Eler, Denilson Grosa, Ives Pola, Rogério Garcia, Ronaldo Correia and Jaqueline Teixeira
Information 2018, 9(4), 100; https://doi.org/10.3390/info9040100 - 20 Apr 2018
Cited by 28 | Viewed by 6071
Abstract
Typically, textual information is available as unstructured data, which must be processed before data mining algorithms can handle it; this processing is known as the pre-processing step of the overall text mining process. This paper analyzes the strong impact that the pre-processing step has on most mining tasks. We propose a methodology to vary distinct combinations of pre-processing steps and to analyze which combination yields high precision. To compare different combinations of pre-processing methods, experiments were performed with combinations such as stemming, term weighting, term elimination based on a low-frequency cut, and stop-word elimination. These combinations were applied to text and opinion mining tasks, from which correct classification rates were computed to highlight the strong impact of the pre-processing combinations. Additionally, we provide graphical representations of each pre-processing combination to show how visual approaches help reveal the effects of the processing on document similarities and group formation (i.e., cohesion and separation). Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
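To make the comparison of pre-processing combinations concrete, here is a small sketch that toggles stop-word removal, stemming, and a low-frequency cut and reports cross-validated classification accuracy for each combination. The corpus, classifier, and thresholds are placeholders and not those used in the paper.

```python
# Sketch: measure how pre-processing combinations affect classification accuracy.
# The corpus, classifier, and cut thresholds are illustrative placeholders.
from itertools import product
from nltk.stem import PorterStemmer
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"],
                          remove=("headers", "footers", "quotes"))
stemmer = PorterStemmer()

def make_vectorizer(stop, stem, min_df):
    """Build a TF-IDF vectorizer for one pre-processing combination."""
    base = TfidfVectorizer(stop_words="english" if stop else None, min_df=min_df)
    if not stem:
        return base
    analyzer = base.build_analyzer()  # keeps the stop-word setting
    return TfidfVectorizer(min_df=min_df,
                           analyzer=lambda doc: [stemmer.stem(t) for t in analyzer(doc)])

for stop, stem, min_df in product([False, True], [False, True], [1, 3]):
    X = make_vectorizer(stop, stem, min_df).fit_transform(data.data)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, data.target, cv=3).mean()
    print(f"stopwords={stop!s:5} stemming={stem!s:5} min_df={min_df}: accuracy={acc:.3f}")
```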

34 pages, 3612 KiB  
Article
Experimental Analysis of Stemming on Jurisprudential Documents Retrieval
by Robert A. N. de Oliveira and Methanias C. Junior
Information 2018, 9(2), 28; https://doi.org/10.3390/info9020028 - 27 Jan 2018
Cited by 11 | Viewed by 6211
Abstract
Stemming algorithms are commonly used during the textual pre-processing phase in order to reduce data dimensionality. However, this reduction achieves different levels of efficacy depending on the domain to which it is applied. For instance, there are reports in the literature showing the effect of stemming when applied to dictionaries or to textual bases of news. On the other hand, we have not found any studies analyzing the impact of stemming on Brazilian judicial jurisprudence, composed of decisions handed down by the judiciary, a fundamental instrument for law professionals to play their role. Thus, this work presents two complete experiments, showing the results obtained through the analysis and evaluation of stemmers applied to real jurisprudential documents from the Court of Justice of the State of Sergipe. In the first experiment, the results showed that, among the analyzed algorithms, RSLP (Removedor de Sufixos da Lingua Portuguesa) achieved the greatest dimensionality reduction of the data. In the second, by evaluating the stemming algorithms on legal document retrieval, the less aggressive stemmers RSLP-S (Removedor de Sufixos da Lingua Portuguesa Singular) and UniNE (University of Neuchâtel) presented the best cost-benefit ratio, since they reduced the dimensionality of the data and increased the effectiveness of the information retrieval evaluation metrics on one of the analyzed collections. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
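As an illustration of how a dimensionality-reduction comparison of this kind can be run, the sketch below measures vocabulary size before and after stemming with NLTK's RSLP implementation; the two-sentence toy corpus stands in for the jurisprudential collection, and the paper's own RSLP implementation and metrics may differ.

```python
# Sketch: vocabulary-size reduction achieved by a Portuguese stemmer (RSLP).
# The toy corpus stands in for the jurisprudential collection.
import re
import nltk
from nltk.stem import RSLPStemmer

nltk.download("rslp", quiet=True)  # RSLP rules ship as an NLTK data package

corpus = [
    "O tribunal julgou improcedentes os pedidos formulados pelo autor.",
    "Os julgamentos anteriores foram considerados na decisão do relator.",
]

stemmer = RSLPStemmer()
tokens = [t for doc in corpus
          for t in re.findall(r"[a-záéíóúâêôãõç]+", doc.lower())]

vocab = set(tokens)
stemmed_vocab = {stemmer.stem(t) for t in tokens}
reduction = 1 - len(stemmed_vocab) / len(vocab)
print(f"vocabulary: {len(vocab)} -> {len(stemmed_vocab)} terms "
      f"({reduction:.0%} reduction)")
```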

26 pages, 4737 KiB  
Article
Usability as the Key Factor to the Design of a Web Server for the CReF Protein Structure Predictor: The wCReF
by Vanessa Stangherlin Machado Paixão-Cortes, Michele Dos Santos da Silva Tanus, Walter Ritzel Paixão-Cortes, Osmar Norberto De Souza, Marcia De Borba Campos and Milene Selbach Silveira
Information 2018, 9(1), 20; https://doi.org/10.3390/info9010020 - 17 Jan 2018
Cited by 3 | Viewed by 5306
Abstract
Protein structure prediction servers use various computational methods to predict the three-dimensional structure of proteins from their amino acid sequence. Predicted models are used to infer protein function and guide experimental efforts. This can contribute to solving the problem of predicting tertiary protein structures, one of the main unsolved problems in bioinformatics. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. This article is an extended version of the article wCReF: The Web Server for the Central Residue Fragment-based Method (CReF) Protein Structure Predictor, published in the proceedings of the 14th International Conference on Information Technology: New Generations. In the first version, we presented wCReF, a protein structure prediction server for the central residue fragment-based method. The wCReF interface was developed with a focus on usability and user interaction. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without needing to install the multitude of necessary tools. In this extended version, we present the design process of the prediction server in detail, which includes: (A) identification of user needs: aimed at understanding the features of a protein structure prediction server, the end-user profiles, and the commonly performed tasks; (B) server usability inspection: to define wCReF's requirements and features, we used heuristic evaluation, guided by experts in both human-computer interaction and bioinformatics, applied to the protein structure prediction servers I-TASSER, QUARK, and Robetta; as a result, problems were found for all heuristics, totaling 89 usability issues; (C) software requirements document and prototype: the assessment results guided the key features that wCReF must have, compiled in a software requirements document, from which prototyping was carried out; (D) wCReF usability analysis: detection of new usability problems with end users by adapting the Ssemugabi satisfaction questionnaire, with 80% positive feedback from the users' evaluation; (E) finally, some specific guidelines for interface design are presented, which may contribute to the design of interactive computational resources for the field of bioinformatics. In addition to the results of the original article, we present the methodology used in wCReF's design and evaluation process (sample, procedures, evaluation tools) and the results obtained. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

23 pages, 1942 KiB  
Article
SeMiner: A Flexible Sequence Miner Method to Forecast Solar Time Series
by Sérgio Luisir Díscola Junior, José Roberto Cecatto, Márcio Merino Fernandes and Marcela Xavier Ribeiro
Information 2018, 9(1), 8; https://doi.org/10.3390/info9010008 - 04 Jan 2018
Cited by 4 | Viewed by 4889
Abstract
X-rays emitted by the Sun can damage the electronic devices of spaceships, satellites, positioning systems, and electricity distribution grids. Thus, forecasting solar X-rays is needed to warn organizations and mitigate undesirable effects. Traditional mining classification methods categorize observations into labels, and we aim to extend this approach to predict future X-ray levels. Therefore, we developed the “SeMiner” method, which allows the prediction of future events. “SeMiner” processes X-rays into sequences employing a new algorithm called “Series-to-Sequence” (SS), which uses a sliding-window approach configured by a specialist. The sequences are then submitted to a classifier to generate a model that predicts X-ray levels. An optimized version of “SS” was also developed using parallelization techniques and Graphics Processing Units in order to speed up the entire forecasting process. The obtained results indicate that “SeMiner” is well suited to predicting solar X-rays and solar flares within the defined time range. It reached more than 90% accuracy for a 2-day forecast, and more than 80% True Positive Rate (TPR) and True Negative Rate (TNR) when predicting X-ray levels. It also reached an accuracy of 72.7%, with a TPR of 70.9% and a TNR of 79.7%, when predicting solar flares. Moreover, the optimized version of “SS” proved to be 4.36 times faster than the initial version. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
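The “Series-to-Sequence” step described above is, in essence, a sliding-window transformation from a numeric series into labelled training examples. The sketch below illustrates that general idea; the window length, forecast horizon, labelling rule, and synthetic flux series are illustrative assumptions rather than the paper's configuration (the flux-class thresholds follow the standard GOES classification).

```python
# Sketch: turn an X-ray flux series into (window, future-level) training pairs
# via a sliding window; thresholds follow the GOES A/B/C/M/X flux classes.
import numpy as np

def flux_level(flux: float) -> str:
    """Map a flux value (W/m^2) to its GOES class letter."""
    for label, limit in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)):
        if flux >= limit:
            return label
    return "A"

def series_to_sequences(series, window=48, horizon=24):
    """Slide a fixed-length window over the series and label each window with
    the highest flux class observed `horizon` steps ahead (illustrative rule)."""
    X, y = [], []
    for start in range(len(series) - window - horizon):
        win = series[start:start + window]
        future = series[start + window:start + window + horizon]
        X.append(win)
        y.append(flux_level(max(future)))
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
flux = 10 ** rng.uniform(-8, -4, size=500)   # synthetic flux series
X, y = series_to_sequences(flux)
print(X.shape, sorted(set(y)))
```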

Review


558 KiB  
Review
Frequent Releases in Open Source Software: A Systematic Review
by Antonio Cesar Brandão Gomes da Silva, Glauco De Figueiredo Carneiro, Fernando Brito e Abreu and Miguel Pessoa Monteiro
Information 2017, 8(3), 109; https://doi.org/10.3390/info8030109 - 05 Sep 2017
Cited by 5 | Viewed by 5570
Abstract
Context: The need to accelerate software delivery, supporting faster time-to-market and frequent community developer/user feedback, has led to relevant changes in software development practices. One example is the adoption of Rapid Release (RR) by several Open Source Software (OSS) projects. This raises the need to know how these projects deal with software release approaches. Goal: Identify the main characteristics of software release initiatives in OSS projects, the motivations behind their adoption, the strategies applied, and the advantages and difficulties found. Method: We conducted a Systematic Literature Review (SLR) to reach the stated goal. Results: The SLR includes 33 publications from January 2006 to July 2016 and reveals nine advantages that characterize software release approaches in OSS projects; four challenges; three possibilities of implementation and two main motivations for adopting RR; and, finally, four main strategies for implementing it. Conclusion: This study provides an up-to-date and structured understanding of software release approaches in the context of OSS projects, based on findings systematically collected from a list of relevant references over the last decade. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
