
Lee Prangnell: Postdoctoral Researcher - Computer Science

Overview

Postdoctoral researcher in the Signal and Information Processing (SIP) Lab, working in collaboration with Associate Professor Dr. Victor Sanchez. I currently develop visually lossless coding and perceptual quantisation contributions for the High Efficiency Video Coding (HEVC) standard and the Versatile Video Coding (VVC) standard. HEVC (ISO/IEC 23008-2, ITU-T H.265) and VVC (ISO/IEC 23090-3, ITU-T H.266) are video compression platforms internationally standardised by JCT-VC and JVET, respectively. The proposed algorithms for HEVC and VVC can be applied to raw YCbCr and RGB data of various bit depths and sampling ratios. The raw data used to evaluate the proposed techniques includes medical image data in addition to screen content video data (i.e., natural content and animated content). My postdoctoral research follows on from the research I conducted during my PhD in Computer Science.

 

Postdoctoral Research Publications and Preprints (Working Papers)

Lee Prangnell and Victor Sanchez, "JNCD-Based Perceptual Compression of RGB 4:4:4 Image Data", Cornell University arXiv, 2020, Corpus: S2CID:218674230. (Preprint PDF).

Lee Prangnell and Victor Sanchez, "Spatiotemporal Adaptive Quantization for the Perceptual Video Coding of RGB 4:4:4 Data", Cornell University arXiv, 2020, Corpus: S2CID:218673851. (Preprint PDF).

Lee Prangnell, "Frequency-Dependent Perceptual Quantisation for Visually Lossless Compression Applications", Cornell University arXiv, 2020, Corpus: S2CID:182953231. (Preprint PDF).

Lee Prangnell and Victor Sanchez, "Spatiotemporal Adaptive Quantization for Video Compression Applications", Cornell University arXiv, 2019, Corpus: S2CID:218674192. (Preprint PDF).

Lee Prangnell and Victor Sanchez, "JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC", IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 2018, DOI: 10.1109/ICASSP.2018.8462327. (PDF).

 

PhD Research Publications and Preprints (Working Papers)

Lee Prangnell, "Visually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC", Elsevier Signal Processing: Image Communication Journal, April 2018, DOI: 10.1016/j.image.2018.02.007. (PDF).

Lee Prangnell, Miguel Hernández-Cabronero and Victor Sanchez, "Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC", IEEE International Conference on Image Processing, Beijing, China, 2017, DOI: 10.1109/ICIP.2017.8296730. (PDF).

Lee Prangnell, Miguel Hernández-Cabronero and Victor Sanchez, "Cross-Color Channel Perceptually Adaptive Quantization for HEVC", IEEE Data Compression Conference, Snowbird, Utah, USA, 2017, DOI: 10.1109/DCC.2017.66. (PDF).

Lee Prangnell, "Visible Light-Based Human Visual System Conceptual Model", Cornell University arXiv, 2016, Corpus: S2CID:10909063 (Preprint PDF).

Lee Prangnell and Victor Sanchez, "Adaptive Quantization Matrices for HD and UHD Resolutions in Scalable HEVC", IEEE Data Compression Conference, Snowbird, Utah, USA, 2016, DOI: 10.1109/DCC.2016.47. (PDF).

Lee Prangnell and Victor Sanchez, "Minimizing Compression Artifacts for High Resolutions with Adaptive Quantization Matrices for HEVC", Cornell University arXiv, 2016, Corpus: S2CID:10909063. (Preprint PDF).

Lee Prangnell, Victor Sanchez and Rahul Vanam, "Adaptive Quantization by Soft Thresholding in HEVC", IEEE Picture Coding Symposium, Cairns, Queensland, Australia, 2015, DOI: 10.1109/PCS.2015.7170042. (PDF).

 

PhD Title: Visually Lossless Coding for the HEVC Standard: Efficient Perceptual Quantisation Contributions for HEVC (September 2017)

My PhD thesis proposes four perceptual quantisation techniques for the HEVC standard. These contributions are designed to maximise the level of perceptual compression that can be applied to raw YCbCr sequences, of various bit depths, resolutions and sampling ratios, without incurring a discernible loss of visual quality. The proposed techniques yield significant bitrate reductions, measured in kilobits per second, and, by virtue of their design, do not increase computational complexity. Each contribution is evaluated using the following coding efficiency and visual quality metrics: Bjøntegaard Delta Rate, PSNR, SSIM and subjective assessments that follow the principles of ITU-T P.910 "Subjective Video Quality Assessment Methods". The proposed perceptual quantisation methods are as follows:

1) High Bit Depth Capable and 4:4:4 Capable JND-Based Coding Block (CB)-Level Perceptual Quantisation;

2) Coding Block (CB)-Level Full Colour Perceptual Quantisation for 4:4:4 Video Data;

3) Coding Unit (CU)-Level Cross-Colour Channel Perceptually Adaptive Quantisation;

4) Transform Coefficient-Level Perceptual Quantisation.
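The general idea behind block-level perceptual quantisation, raising the quantisation parameter (QP) where visual masking makes coarser quantisation imperceptible, can be illustrated with a minimal sketch. The variance-based masking proxy and the `jnd_qp_offset` and `quantise` helper names below are hypothetical simplifications for illustration, not the algorithms proposed in the thesis; only the Qstep relation (the quantisation step size roughly doubling every 6 QP) reflects HEVC's actual scalar quantiser design.

```python
import numpy as np

def jnd_qp_offset(block, base_qp, max_offset=6):
    """Illustrative JND-style per-block QP adjustment (hypothetical mapping):
    blocks with higher spatial variance exhibit more masking, so they
    tolerate a coarser quantisation step without visible distortion."""
    variance = np.var(block.astype(np.float64))
    # Map log-variance to an offset in [0, max_offset].
    offset = min(max_offset, int(np.log2(1.0 + variance) / 2))
    return base_qp + offset

def quantise(coeffs, qp):
    """Simplified HEVC-style scalar quantisation: Qstep doubles every 6 QP."""
    qstep = 2.0 ** ((qp - 4) / 6.0)
    return np.round(coeffs / qstep).astype(np.int64)

# A flat block offers no masking, so it keeps the base QP; a high-contrast
# checkerboard block is assigned a coarser (larger) QP.
flat = np.full((8, 8), 128)
textured = np.tile([[0, 255], [255, 0]], (4, 4))
print(jnd_qp_offset(flat, base_qp=22))      # 22 (no offset)
print(jnd_qp_offset(textured, base_qp=22))  # 28 (maximum offset)
```

Because the offset is derived from statistics already computed during encoding, a scheme of this shape adds essentially no computational cost, which is in the spirit of the low-complexity design goal stated above.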

 

Thesis

Lee Prangnell, "Visually Lossless Coding for the HEVC Standard: Efficient Perceptual Quantisation Contributions for HEVC", PhD Thesis, Department of Computer Science, University of Warwick, September 2017 (PDF).