
Lee Prangnell: Postdoctoral Researcher - Computer Science

Overview

Postdoctoral researcher within the Signal and Information Processing (SIP) Lab. I currently develop visually lossless coding and perceptual quantisation algorithms for the High Efficiency Video Coding (HEVC) standard and the Versatile Video Coding (VVC) standard. HEVC (ISO/IEC 23008-2, ITU-T H.265) and VVC (ISO/IEC 23090-3, ITU-T H.266) are video coding standards that were internationally standardised by JCT-VC and JVET, respectively. Regarding contributions to HEVC and VVC, the proposed algorithms can be applied to raw YCbCr and RGB data of various bit depths and sampling ratios. The raw data used to evaluate the proposed techniques includes medical image data in addition to screen content video data (i.e., natural and animated content). My postdoctoral research follows on from the research that I conducted during my PhD in Computer Science.
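
As context for what "various bit depths and sampling ratios" means in practice, the following is a minimal Python/NumPy sketch, unrelated to the proposed algorithms themselves, that reads one planar raw YCbCr frame at a chosen bit depth and chroma sampling ratio. The function name, signature and default values are hypothetical.

    import numpy as np

    def read_ycbcr_frame(path, width, height, bit_depth=10,
                         sampling="4:2:0", frame_index=0):
        """Read one planar YCbCr frame from a raw .yuv file into NumPy arrays.

        Samples deeper than 8 bits are assumed to be stored little-endian in
        16-bit words, the convention commonly used by the HEVC/VVC reference
        software.
        """
        dtype = np.uint8 if bit_depth <= 8 else np.dtype("<u2")
        sub_w, sub_h = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}[sampling]
        cw, ch = width // sub_w, height // sub_h
        samples = width * height + 2 * cw * ch
        with open(path, "rb") as f:
            f.seek(frame_index * samples * np.dtype(dtype).itemsize)
            data = np.fromfile(f, dtype=dtype, count=samples)
        # Planar layout: full-resolution Y plane followed by the two
        # (possibly subsampled) chroma planes, Cb then Cr.
        y = data[:width * height].reshape(height, width)
        cb = data[width * height:width * height + cw * ch].reshape(ch, cw)
        cr = data[width * height + cw * ch:].reshape(ch, cw)
        return y, cb, cr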

 

First Author Publications and Working Papers

Lee Prangnell and Victor Sanchez, "Spectral-PQ: A Novel Spectral Sensitivity-Orientated Perceptual Compression Technique for RGB 4:4:4 Video Data", Cornell University arXiv, 2022 (PDF). Corpus: S2CID:246240695.

Lee Prangnell and Victor Sanchez, "HVS-Based Perceptual Color Compression of Image Data", IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, Ontario, Canada, 2021, DOI: 10.1109/ICASSP39728.2021.9414773 (PDF).

Lee Prangnell and Victor Sanchez, "Spatiotemporal Adaptive Quantization for the Perceptual Video Coding of RGB 4:4:4 Data", Cornell University arXiv, 2020 (PDF). Corpus: S2CID:218673851.

Lee Prangnell, "Frequency-Dependent Perceptual Quantisation for Visually Lossless Compression Applications", Cornell University arXiv, 2019 (PDF). Corpus: S2CID:182953231.

Lee Prangnell and Victor Sanchez, "JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC", IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 2018, DOI: 10.1109/ICASSP.2018.8462327 (PDF).

Lee Prangnell, "Visually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC", Elsevier Signal Processing: Image Communication Journal, April 2018, DOI: 10.1016/j.image.2018.02.007 (PDF).

Lee Prangnell, Miguel Hernández-Cabronero and Victor Sanchez, "Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC", IEEE International Conference on Image Processing, Beijing, China, 2017, DOI: 10.1109/ICIP.2017.8296730 (PDF).

Lee Prangnell, Miguel Hernández-Cabronero and Victor Sanchez, "Cross-Color Channel Perceptually Adaptive Quantization for HEVC", IEEE Data Compression Conference, Snowbird, Utah, USA, 2017, DOI: 10.1109/DCC.2017.66 (PDF).

Lee Prangnell, "Visible Light-Based Human Visual System Conceptual Model", Cornell University arXiv, 2016 (PDF). Corpus: S2CID:10909063.

Lee Prangnell and Victor Sanchez, "Adaptive Quantization Matrices for HD and UHD Resolutions in Scalable HEVC", IEEE Data Compression Conference, Snowbird, Utah, USA, 2016, DOI: 10.1109/DCC.2016.47 (PDF).

Lee Prangnell, Victor Sanchez and Rahul Vanam, "Adaptive Quantization by Soft Thresholding in HEVC", IEEE Picture Coding Symposium, Cairns, Queensland, Australia, 2015, DOI: 10.1109/PCS.2015.7170042 (PDF).

 

PhD Title: Visually Lossless Coding for the HEVC Standard: Efficient Perceptual Quantisation Contributions for HEVC (September 2017)

In my PhD thesis, four perceptual quantisation techniques are proposed for the HEVC standard. These contributions are designed to maximise the level of perceptual compression that can be applied to raw YCbCr sequences, of various bit depths, resolutions and sampling ratios, without incurring a discernible loss of visual quality. The proposed techniques yield significant bitrate reductions, as measured in kilobits per second, and, by virtue of their design, they do not increase computational complexity. In terms of experimentation, the following coding efficiency and visual quality metrics are utilised to evaluate each contribution: Bjøntegaard Delta Rate, PSNR, SSIM and subjective assessments that follow the principles of ITU-T P.910 "Subjective Video Quality Assessment Methods". The proposed perceptual quantisation methods are listed below (an illustrative sketch of coding block level perceptual quantisation follows the list):

1) High Bit Depth Capable and 4:4:4 Capable JND-Based Coding Block (CB)-Level Perceptual Quantisation;

2) Coding Block (CB)-Level Full Colour Perceptual Quantisation for 4:4:4 Video Data;

3) Coding Unit (CU)-Level Cross-Colour Channel Perceptually Adaptive Quantisation;

4) Transform Coefficient-Level Perceptual Quantisation.
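
For readers unfamiliar with block-level perceptual quantisation, the following minimal Python/NumPy sketch illustrates the general idea rather than any of the four contributions above: a per coding block QP offset is derived from a simple spatial-masking proxy (local luma variance) and the resulting QP is clipped to HEVC's valid range. The block size, base QP, offset range and variance heuristic are illustrative assumptions only.

    import numpy as np

    # Illustrative values only; none of these are taken from the thesis.
    BASE_QP = 27      # frame-level quantisation parameter
    MAX_OFFSET = 6    # largest perceptual QP offset granted to any block
    BLOCK_SIZE = 16   # CB size in luma samples (HEVC CBs range from 8x8 to 64x64)

    def perceptual_qp_map(luma: np.ndarray) -> np.ndarray:
        """Derive a per-block QP map from local luma variance.

        Blocks with high spatial activity (texture) can typically mask more
        quantisation distortion, so they receive a larger positive QP offset;
        smooth blocks keep the base QP. Local variance is a crude stand-in
        for the JND models used in the actual contributions.
        """
        rows = luma.shape[0] // BLOCK_SIZE
        cols = luma.shape[1] // BLOCK_SIZE
        variances = np.empty((rows, cols))
        for r in range(rows):
            for c in range(cols):
                block = luma[r * BLOCK_SIZE:(r + 1) * BLOCK_SIZE,
                             c * BLOCK_SIZE:(c + 1) * BLOCK_SIZE]
                variances[r, c] = block.var()

        # Map variance to an integer offset in [0, MAX_OFFSET] relative to the
        # most textured block, then clip the result to HEVC's [0, 51] QP range.
        scale = variances.max() if variances.max() > 0 else 1.0
        offsets = np.rint(MAX_OFFSET * variances / scale).astype(np.int32)
        return np.clip(BASE_QP + offsets, 0, 51)

    if __name__ == "__main__":
        # Synthetic 8-bit luma frame: a smooth gradient with a noisy (textured) patch.
        rng = np.random.default_rng(0)
        frame = np.tile(np.linspace(16.0, 235.0, 128), (128, 1))
        frame[32:96, 32:96] += rng.normal(0.0, 25.0, (64, 64))
        frame = np.clip(frame, 0.0, 255.0)
        print(perceptual_qp_map(frame))  # textured blocks receive higher QPs

As the contribution titles above indicate, the actual methods are JND-based and operate at the coding block, coding unit and transform coefficient levels inside HEVC, rather than as a standalone pre-analysis of this kind.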

 

Thesis

Lee Prangnell, "Visually Lossless Coding for the HEVC Standard: Efficient Perceptual Quantisation Contributions for HEVC", PhD Thesis, Department of Computer Science, University of Warwick, September 2017 (PDF).