Big Visual Data Processing and Analytics

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 April 2017)

Special Issue Editors


Prof. Dr. Fionn Murtagh
Guest Editor
Centre of Mathematics and Data Science, School of Computing and Engineering, University of Huddersfield, Huddersfield HD1 3DH, UK
Interests: data science; big data analytics; geometry and topology of data and information; digital content analytics; computational science (including innovative models and paradigms from digital humanities and quantitative and qualitative social sciences); applications in engineering, psychoanalysis, astronomy and many other fields

Prof. Dr. Dacheng Tao
Guest Editor
Centre for Quantum Computation & Intelligent Systems and the Faculty of Engineering and Information Technology, University of Technology Sydney, 81 Broadway Street, Ultimo, NSW 2007, Australia
Interests: computer vision; image processing; data science; machine learning; neural networks

Prof. Dr. Xinmei Tian
Guest Editor
Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, Anhui, China
Interests: image understanding; machine learning; multimedia information retrieval

Special Issue Information

Dear Colleagues,

A broad range of important applications rely on accurate visual data processing and analytics, yet accurately understanding visual data remains a highly challenging problem. Recent advances in visual data collection, storage, and transmission have brought us into an era of data deluge. The ever-growing volume of visual data presents both challenges and opportunities for data analysis and image processing. On the one hand, although big visual data carries richer information, mining reliable and useful knowledge from such large volumes is difficult. On the other hand, big data and images also give us the opportunity to address traditional challenges by leveraging advanced machine learning tools, such as deep learning. Advanced techniques and methodologies are therefore needed to better analyze and understand big visual data.

This Special Issue provides an opportunity for colleagues to share high-quality research that addresses the broad challenges of big visual data processing and analytics. We invite original research articles, as well as review articles, that significantly advance efficient techniques for the deep understanding of big visual data.

The topics of interest of this Special Issue include, but are not limited to:

  • Techniques, processes, and methods for collecting and analyzing visual data
  • Statistical techniques for visual data analysis
  • Systems for large-scale visual data
  • Visual data search and mining
  • Applications of visual data analysis: web, multimedia, finance, genomics, bioinformatics, social sciences and social networks
  • Image capturing and generation
  • Image analysis and interpretation
  • Image processing applications
  • Image coding analysis and recognition
  • Image representation
  • Image sensing, classification, retrieval, categorization and clustering approaches
  • Remote image sensing
  • Signal-processing aspects of image processing

Prof. Dr. Xinmei Tian
Prof. Dr. Fionn Murtagh
Prof. Dr. Dacheng Tao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (3 papers)


Research

Article
3D Reconstructions Using Unstabilized Video Footage from an Unmanned Aerial Vehicle
by Jonathan Byrne, Evan O'Keeffe, Donal Lennon and Debra F. Laefer
J. Imaging 2017, 3(2), 15; https://doi.org/10.3390/jimaging3020015 - 22 Apr 2017
Cited by 24
Abstract
Structure from motion (SFM) is a methodology for automatically reconstructing three-dimensional (3D) models from a series of two-dimensional (2D) images when there is no a priori knowledge of the camera location and direction. Modern unmanned aerial vehicles (UAVs) now provide a low-cost means of obtaining aerial video footage of a point of interest. Unfortunately, raw video lacks the information required by SFM software, as it does not record exchangeable image file (EXIF) information for the frames. In this work, a solution is presented to modify aerial video so that it can be used for photogrammetry. The paper then examines how the field of view affects the quality of the reconstruction. The input is unstabilized, distorted video footage obtained from a low-cost UAV, which is then combined with an open-source SFM system to reconstruct a 3D model. This approach creates a high-quality reconstruction by reducing the number of unknown variables, such as focal length and sensor size, while increasing the data density. The experiments examine the optical field-of-view settings needed to provide sufficient overlap without sacrificing image quality or exacerbating distortion. The system costs less than €1000, and the results show the ability to reproduce 3D models with centimeter-level accuracy. For verification, the results were compared against millimeter-accurate models derived from laser scanning.
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
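
As a rough illustration of the frame-preparation step described in the abstract above (a minimal sketch, not the authors' pipeline), the snippet below extracts frames from raw UAV video with OpenCV and injects a focal-length EXIF tag with piexif so that SFM tools can read it. The video filename, frame interval, and camera focal length are placeholder assumptions.

```python
# Minimal sketch: turn raw UAV video into EXIF-tagged JPEG frames for SFM.
# Assumptions (not from the paper): filename, frame step, and focal length.
import cv2
import piexif

VIDEO, STEP, FOCAL_MM = "uav_footage.mp4", 15, 3.6   # hypothetical values

# EXIF payload with a focal length expressed as a rational (numerator, denominator).
exif_bytes = piexif.dump({
    "0th": {piexif.ImageIFD.Make: b"GenericUAV"},
    "Exif": {piexif.ExifIFD.FocalLength: (int(FOCAL_MM * 100), 100)},
})

cap = cv2.VideoCapture(VIDEO)
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % STEP == 0:                      # keep every STEP-th frame
        name = f"frame_{saved:05d}.jpg"
        cv2.imwrite(name, frame)             # write the frame as JPEG
        piexif.insert(exif_bytes, name)      # add EXIF so SFM software can use it
        saved += 1
    idx += 1
cap.release()
```

The tagged frames could then be fed to an open-source SFM tool in the usual photo-based workflow.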

Article
Visual Analytics of Complex Genomics Data to Guide Effective Treatment Decisions
by Quang Vinh Nguyen, Nader Hasan Khalifa, Pat Alzamora, Andrew Gleeson, Daniel Catchpoole, Paul J. Kennedy and Simeon Simoff
J. Imaging 2016, 2(4), 29; https://doi.org/10.3390/jimaging2040029 - 30 Sep 2016
Cited by 12
Abstract
In cancer biology, genomics represents a big data problem that requires accurate visual data processing and analytics. The human genome is very complex, with thousands of genes that carry information about individual patients and the biological mechanisms of their disease. Therefore, when building a framework for personalised treatment, the complexity of the genome must be captured in meaningful and actionable ways. This paper presents a novel visual analytics framework that enables effective analysis of large and complex genomics data. By providing interactive visualisations, from an overview of the entire patient cohort down to a detailed view of individual genes, our work can guide effective treatment decisions for childhood cancer patients. The framework consists of multiple components that together support analytics for personalised medicine, including similarity space construction, automated analysis, visualisation, gene-to-gene comparison, and user-centric interaction and exploration based on feature selection. In addition to traditional data visualisation, we use the Unity3D platform to develop a smooth and interactive visual presentation of the information. This aims to provide better rendering, image quality, ergonomics, and user experience to non-specialists or young users who are familiar with 3D gaming environments and interfaces. We illustrate the effectiveness of our approach through case studies on datasets from childhood cancer patients with B-cell Acute Lymphoblastic Leukaemia (ALL) and Rhabdomyosarcoma (RMS), showing how the framework can guide effective treatment decisions within the cohort.
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
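
As a rough, hedged illustration of the similarity-space-construction and feature-selection ideas mentioned in the abstract (not the paper's implementation, which renders the result in Unity3D), the sketch below selects informative genes and projects a synthetic patient-by-gene matrix into a 3D space with scikit-learn. All data, labels, and parameter values here are invented for illustration.

```python
# Minimal sketch of a "similarity space": feature selection + 3D projection.
# Synthetic data stands in for the patients' gene-expression matrix.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))         # 200 patients x 5000 genes (synthetic)
y = rng.integers(0, 2, size=200)         # hypothetical cohort labels (e.g. ALL vs. RMS)

X_sel = SelectKBest(f_classif, k=100).fit_transform(X, y)  # keep 100 informative genes
coords = PCA(n_components=3).fit_transform(X_sel)          # 3D similarity space
print(coords.shape)                       # (200, 3): one point per patient
```

The resulting 3D coordinates could then be handed to a renderer; the paper uses Unity3D for that interactive presentation step.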

Article
VIIRS Day/Night Band—Correcting Striping and Nonuniformity over a Very Large Dynamic Range
by Stephen Mills and Steven Miller
J. Imaging 2016, 2(1), 9; https://doi.org/10.3390/jimaging2010009 - 14 Mar 2016
Cited by 20
Abstract
The Suomi National Polar-orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day-Night Band (DNB) measures visible and near-infrared light over seven orders of magnitude of dynamic range, which makes radiometric calibration difficult. We have observed that DNB imagery exhibits striping, banding, and other nonuniformities, day or night. We identified the causes as stray light, nonlinearity, detector crosstalk, hysteresis, and mirror-side variation, and found that these affect both Earth-view and calibration signals, presenting an obstacle to interpretation by users of DNB products. Because of the nonlinearity, we chose the histogram matching destriping technique, which we found to be successful for daytime, twilight, and nighttime scenes. Because of the very large dynamic range of the DNB, we needed to add special processes to the histogram matching to destripe all scenes, especially imagery in the twilight regions, where scene illumination changes rapidly over short distances. We show that destriping aids image analysts and enables advanced automated cloud-typing algorithms. Manual or automatic identification of other features, including polar ice and gravity waves in the upper atmosphere, is also discussed. Given the large volume of data produced 24 h a day by the VIIRS DNB, we also present methods for reducing processing time.
(This article belongs to the Special Issue Big Visual Data Processing and Analytics)
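
As a rough illustration of histogram-matching destriping (a simplified stand-in, not the authors' algorithm, which adds special handling for the DNB's very large dynamic range and twilight gradients), the sketch below assumes image rows cycle through a fixed number of detectors and remaps each detector's intensities onto the scene-wide distribution via quantile matching. The detector count and the synthetic test data are assumptions.

```python
# Minimal sketch of histogram-matching destriping via quantile mapping.
import numpy as np

def destripe_histogram_match(img, n_detectors=16, n_quantiles=256):
    """Match each detector's intensity distribution to the scene-wide
    distribution. A simplified illustration only; it ignores aggregation
    zones, twilight gradients, and the other DNB-specific processes."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    ref = np.quantile(img, q)                   # scene-wide reference quantiles
    out = img.astype(np.float64).copy()
    for d in range(n_detectors):
        rows = out[d::n_detectors, :]           # all samples from one detector
        src = np.quantile(rows, q)              # detector-specific quantiles
        out[d::n_detectors, :] = np.interp(rows, src, ref)  # remap onto reference
    return out

# Example: a synthetic scene with per-detector gain striping.
rng = np.random.default_rng(1)
scene = rng.gamma(2.0, 2.0, size=(768, 1024))
gains = 1.0 + 0.05 * rng.standard_normal(16)
striped = scene * gains[np.arange(768) % 16, None]
clean = destripe_histogram_match(striped, n_detectors=16)
```

Quantile mapping is used here because, as the abstract notes, the detector response is nonlinear, so a simple per-detector gain or offset correction would not remove the striping.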
