Multimedia Information Compression and Coding

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information and Communications Technology".

Deadline for manuscript submissions: closed (30 April 2016) | Viewed by 40906

Special Issue Editor


Prof. Dr. Khalid Sayood
Guest Editor
Department of Electrical Engineering, University of Nebraska-Lincoln, 209N Scott Engineering Center, P.O. Box 880511, Lincoln, NE 68588-0511, USA
Interests: data compression; joint source-channel coding; bioinformatics; teaching and information

Special Issue Information

Dear Colleagues,

The area of compression can be viewed as a mature field. Huffman codes were introduced in the 1950s, while arithmetic coding and dictionary coding made their appearance in the 1970s. For lossy compression, predictive compression traces its history to the 1950s, transform coding to the 1960s, and wavelet-based compression to the 1990s. While the basic techniques have been around for a while, recent years have seen the appearance of new modalities and new platforms for compression. The ubiquity of compression has extended its use to data types that did not exist twenty years ago, and this same ubiquity has made security and privacy matters of concern. This Special Issue focuses on all these aspects of multimedia information compression and coding.
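The techniques named above are concrete algorithms, and the oldest of them is small enough to sketch directly. As a generic illustration (not tied to any particular paper in this issue), a Huffman code can be built from symbol frequencies in a few lines of Python:

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a Huffman code table {symbol: bitstring} for the symbols in data."""
    # Heap entries: (frequency, unique tiebreaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees, prefixing '0' and '1'.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# The most frequent symbol ('a') receives the shortest codeword.
```

The resulting code is prefix-free by construction, since every merge step extends the codewords of one subtree with '0' and the other with '1'.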

Prospective authors are invited to submit previously unpublished works in these areas. Topics of interest include but are not restricted to:

  • Video compression
  • High Efficiency Video Coding
  • Network compression
  • Genomic compression
  • Hyperspectral compression
  • Quantum compression
  • Compression and cryptography
  • Compression of biological signals
  • Compression over sensor networks
  • Compression and Big Data
  • Medical image compression

Prof. Dr. Khalid Sayood
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the journal's website.

Published Papers (5 papers)


Research


Article
Lazy Management for Frequency Table on Hardware-Based Stream Lossless Data Compression
by Koichi Marumo, Shinichi Yamagiwa, Ryuta Morita and Hiroshi Sakamoto
Information 2016, 7(4), 63; https://doi.org/10.3390/info7040063 - 31 Oct 2016
Cited by 7 | Viewed by 6158
Abstract
The demand for communicating large amounts of data in real time has raised new challenges in implementing high-speed communication paths for high-definition video and sensory data, and it requires high-speed data paths implemented in hardware. The implementation difficulties have to be addressed by applying new techniques based on data-oriented algorithms. This paper focuses on a solution to this problem by applying a lossless data compression mechanism on the communication data path. The new lossless data compression mechanism, called LCA-DLT, provides dynamic histogram management for the symbol lookup tables used in the compression and decompression operations. When the histogram memory becomes full, the management algorithm must find the least used entries and invalidate them. These invalidation operations block the compression and decompression data stream. This paper proposes novel techniques to eliminate this blocking by introducing a dynamic invalidation mechanism, which achieves high-throughput data compression.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
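The eviction idea described in the abstract can be caricatured in software. The sketch below is a toy software analogue of dynamic histogram management with least-used invalidation; the paper's hardware mechanism (LCA-DLT) is considerably more involved, and the class and method names here are illustrative only:

```python
class FrequencyTable:
    """Toy symbol lookup table with least-used eviction, sketching the
    dynamic histogram management idea described in the abstract."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.counts = {}  # symbol -> observed frequency

    def observe(self, symbol: str) -> None:
        if symbol not in self.counts and len(self.counts) >= self.capacity:
            # Table full: invalidate the least-used entry to make room.
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[symbol] = self.counts.get(symbol, 0) + 1

table = FrequencyTable(capacity=2)
for s in "aababc":
    table.observe(s)
# When 'c' arrives, the least-used entry ('b') is invalidated.
```

In a blocking design, the stream must stall while the victim entry is found and invalidated; the paper's contribution is eliminating exactly that stall.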

Article
A Survey on Data Compression Methods for Biological Sequences
by Morteza Hosseini, Diogo Pratas and Armando J. Pinho
Information 2016, 7(4), 56; https://doi.org/10.3390/info7040056 - 14 Oct 2016
Cited by 67 | Viewed by 13209
Abstract
The ever-increasing growth in the production of high-throughput sequencing data poses a serious challenge to the storage, processing and transmission of these data. As frequently stated, it is a data deluge. Compression is essential to address this challenge: it reduces storage space and processing costs, along with speeding up data transmission. In this paper, we provide a comprehensive survey of existing compression approaches that are specialized for biological data, including protein and DNA sequences. We also devote an important part of the paper to the approaches proposed for the compression of different file formats, such as FASTA, FASTQ and SAM/BAM, which contain quality scores and metadata in addition to the biological sequences. We then present a comparison of the performance of several methods, in terms of compression ratio, memory usage and compression/decompression time. Finally, we present some suggestions for future research on biological data compression.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
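As a minimal example of the kind of specialization the survey covers, a DNA sequence over {A, C, G, T} can be packed at two bits per base instead of eight bits of ASCII. This is a generic illustration, not one of the surveyed tools, which add context modeling and handle quality scores and metadata:

```python
# Fixed 2-bit code for the four DNA bases.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    bits = 0
    for b in seq:
        bits = (bits << 2) | CODE[b]
    # Prepend a sentinel bit so leading 'A's (code 0) survive the round trip.
    bits |= 1 << (2 * len(seq))
    return bits.to_bytes((2 * len(seq)) // 8 + 1, "big")

def unpack(data: bytes) -> str:
    bits = int.from_bytes(data, "big")
    out = []
    while bits > 1:  # stop at the sentinel bit
        out.append(BASE[bits & 3])
        bits >>= 2
    return "".join(reversed(out))

seq = "ACGTACGT"
assert unpack(pack(seq)) == seq  # lossless round trip at 2 bits/base
```

Even this naive fixed-length code yields a 4:1 ratio over ASCII; the statistical and dictionary methods in the survey exploit repetition and context to do substantially better.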

Article
Efficient Software HEVC to AVS2 Transcoding
by Yucong Chen, Yun Zhou and Jiangtao Wen
Information 2016, 7(3), 53; https://doi.org/10.3390/info7030053 - 19 Sep 2016
Cited by 7 | Viewed by 5804
Abstract
The second generation of the Audio and Video coding Standard (AVS) was developed by the IEEE 1857 Working Group under project 1857.4 and standardized in 2016 by the AVS Working Group of China as the new broadcasting standard AVS2. High Efficiency Video Coding (HEVC) is the newest global video coding standard, announced in 2013. More and more video coding applications are migrating from H.264/AVC to HEVC because of its higher compression performance. In this paper, we propose an efficient HEVC to AVS2 transcoding algorithm, which applies a multi-stage decoding information utilization framework to maximize the use of the decoding information in the transcoding process. The proposed algorithm achieves 11×–17× speed gains over the AVS2 reference software RD 14.0 with a modest BD-rate loss of 9.6%–16.6%.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
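The general idea of reusing decoding information in a transcoder can be sketched as pruning the target encoder's search with hints recovered from the source bitstream. The function and parameter names below are illustrative, not the paper's actual framework or interfaces:

```python
def transcode_block(decoded_mode, decoded_mv, full_search, refine_search):
    """Sketch: reuse the HEVC decoder's mode decision and motion vector
    to narrow the AVS2 encoder's search instead of re-deciding from scratch."""
    if decoded_mode == "skip":
        # Trust the source decision outright; the cheapest path.
        return "skip", decoded_mv
    if decoded_mode == "inter":
        # Search only a small window around the decoded motion vector.
        return "inter", refine_search(decoded_mv)
    # Intra blocks fall back to a full mode search.
    return "intra", full_search()
```

The speedup comes from how rarely the expensive `full_search` branch is taken; the BD-rate loss measures how much coding efficiency is sacrificed by trusting the source decisions.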

Article
Visually Lossless JPEG 2000 for Remote Image Browsing
by Han Oh, Ali Bilgin and Michael Marcellin
Information 2016, 7(3), 45; https://doi.org/10.3390/info7030045 - 15 Jul 2016
Cited by 20 | Viewed by 5277
Abstract
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
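The core observation, that reduced display resolutions tolerate coarser quantization while remaining visually lossless, can be sketched as a per-resolution-level step-size table driving a scalar quantizer. The step values below are invented for illustration; the paper derives actual thresholds from visual models:

```python
# Hypothetical visibility-threshold step sizes per wavelet resolution level:
# lower display resolutions (higher level index) tolerate coarser steps.
STEP_BY_LEVEL = {0: 1.0, 1: 2.1, 2: 4.5, 3: 9.2}

def quantize(coeff: float, level: int) -> int:
    """Deadzone scalar quantization of a wavelet coefficient."""
    step = STEP_BY_LEVEL[level]
    sign = -1 if coeff < 0 else 1
    return sign * int(abs(coeff) / step)

def dequantize(q: int, level: int) -> float:
    step = STEP_BY_LEVEL[level]
    if q == 0:
        return 0.0
    # Reconstruct at the midpoint of the quantization interval.
    sign = -1 if q < 0 else 1
    return sign * (abs(q) + 0.5) * step

# The same coefficient needs fewer index values (fewer bits) at a
# coarser level, e.g. quantize(10.0, 0) == 10 but quantize(10.0, 2) == 2.
```

The paper's contribution is embedding all of these step sizes into one Part 1 compliant codestream, so a JPIP client fetches only the precision its current zoom level actually needs.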

Review


Review
Speech Compression
by Jerry D. Gibson
Information 2016, 7(2), 32; https://doi.org/10.3390/info7020032 - 3 Jun 2016
Cited by 33 | Viewed by 9833
Abstract
Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
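The linear prediction model at the heart of these coders predicts each sample as a weighted sum of past samples and transmits only the prediction residual. A toy first-order example is sketched below; real speech codecs use predictor orders of ten or more, re-estimated every frame:

```python
def lpc_residual(samples, coeffs):
    """Prediction residual e[n] = x[n] - sum_k a[k] * x[n-1-k]."""
    order = len(coeffs)
    residual = []
    for n, x in enumerate(samples):
        pred = sum(coeffs[k] * samples[n - 1 - k]
                   for k in range(order) if n - 1 - k >= 0)
        residual.append(x - pred)
    return residual

# A linear ramp is predicted almost perfectly by a first-order predictor,
# so the residual (what must actually be coded) is nearly constant.
res = lpc_residual([0, 1, 2, 3, 4, 5], coeffs=[1.0])
# res == [0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

Because the residual has far lower variance than the signal itself, it can be quantized with many fewer bits, which is the source of the compression gain the review traces through successive standards.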
