Lee Prangnell: Postdoctoral Researcher - Computer Science
Overview
I worked as a postdoctoral research fellow (2018-2024) in the Signal and Information Processing (SIP) Lab within the Department of Computer Science. I developed visually lossless coding and perceptual quantisation algorithms for the High Efficiency Video Coding (HEVC) standard and the Versatile Video Coding (VVC) standard. HEVC (ITU-T H.265) and VVC (ITU-T H.266) are video compression platforms that were internationally standardised by JCT-VC and JVET, respectively. The proposed algorithms can be applied to raw YCbCr and RGB data of various bit depths and sampling ratios. The raw data used to evaluate the proposed techniques includes medical image data and screen content data, as well as natural and animated video content. My postdoctoral research followed on from the research I conducted during my PhD in Computer Science.
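As background to the data formats mentioned above, the sketch below (an illustrative example only, not code from the publications; the function names are my own) converts full-range 8-bit RGB samples to YCbCr using the ITU-R BT.709 luma coefficients, and then averages each 2x2 chroma block to form the 4:2:0 sampling ratio from 4:4:4 data:

```python
import numpy as np

def rgb_to_ycbcr_bt709(rgb: np.ndarray) -> np.ndarray:
    """Convert full-range 8-bit RGB (H x W x 3) to YCbCr with BT.709 weights."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma (BT.709)
    cb = (b - y) / 1.8556 + 128.0              # blue-difference chroma, mid-range offset
    cr = (r - y) / 1.5748 + 128.0              # red-difference chroma, mid-range offset
    return np.stack([y, cb, cr], axis=-1)

def subsample_420(chroma_plane: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of one chroma plane: 4:4:4 -> 4:2:0 sampling."""
    h, w = chroma_plane.shape
    return chroma_plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

A 4:2:0 frame therefore carries one Cb and one Cr sample per four luma samples, which is why coding tools designed for 4:4:4 data (as in the publications below) must be evaluated separately from the common 4:2:0 case.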
First Author Publications and Working Papers
Lee Prangnell and Victor Sanchez, "Spectral-PQ: A Novel Spectral Sensitivity-Orientated Perceptual Compression Technique for RGB 4:4:4 Video Data", Cornell University arXiv, 2022 (PDF). S2CID: 246240695.
Lee Prangnell and Victor Sanchez, "HVS-Based Perceptual Color Compression of Image Data", IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, Ontario, Canada, 2021, DOI: 10.1109/ICASSP39728.2021.9414773 (PDF).
Lee Prangnell and Victor Sanchez, "Spatiotemporal Adaptive Quantization for the Perceptual Video Coding of RGB 4:4:4 Data", Cornell University arXiv, 2020 (PDF). S2CID: 218673851.
Lee Prangnell, "Frequency-Dependent Perceptual Quantisation for Visually Lossless Compression Applications", Cornell University arXiv, 2019 (PDF). S2CID: 182953231.
Lee Prangnell and Victor Sanchez, "JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC", IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 2018, DOI: 10.1109/ICASSP.2018.8462327 (PDF).
Lee Prangnell, "Visually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC", Elsevier Signal Processing: Image Communication Journal, April 2018, DOI: 10.1016/j.image.2018.02.007 (PDF).
Lee Prangnell, Miguel Hernández-Cabronero and Victor Sanchez, "Coding Block-Level Perceptual Video Coding for 4:4:4 Data in HEVC", IEEE International Conference on Image Processing, Beijing, China, 2017, DOI: 10.1109/ICIP.2017.8296730 (PDF).
Lee Prangnell, Miguel Hernández-Cabronero and Victor Sanchez, "Cross-Color Channel Perceptually Adaptive Quantization for HEVC", IEEE Data Compression Conference, Snowbird, Utah, USA, 2017, DOI: 10.1109/DCC.2017.66 (PDF).
Lee Prangnell, "Visible Light-Based Human Visual System Conceptual Model", Cornell University arXiv, 2016 (PDF). S2CID: 10909063.
Lee Prangnell and Victor Sanchez, "Adaptive Quantization Matrices for HD and UHD Resolutions in Scalable HEVC", IEEE Data Compression Conference, Snowbird, Utah, USA, 2016, DOI: 10.1109/DCC.2016.47 (PDF).
Lee Prangnell, Victor Sanchez and Rahul Vanam, "Adaptive Quantization by Soft Thresholding in HEVC", IEEE Picture Coding Symposium, Cairns, Queensland, Australia, 2015, DOI: 10.1109/PCS.2015.7170042 (PDF).
PhD Title: Visually Lossless Coding for the HEVC Standard: Efficient Perceptual Quantisation Contributions for HEVC (September 2017)
In my PhD thesis, four perceptual quantisation techniques are proposed for the HEVC standard. These contributions are designed to maximise the level of perceptual compression that can be applied to raw YCbCr sequences of various bit depths, resolutions and sampling ratios, without incurring a discernible loss of visual quality. The proposed techniques yield significant bitrate reductions, measured in kilobits per second, and, by virtue of their design, they do not increase computational complexity. Each contribution is evaluated using the following coding efficiency and visual quality metrics: Bjøntegaard Delta Rate, PSNR, SSIM, and subjective assessments that follow the principles of ITU-T P.910, "Subjective Video Quality Assessment Methods". The proposed perceptual quantisation methods are as follows:
1) High Bit Depth Capable and 4:4:4 Capable JND-Based Coding Block (CB)-Level Perceptual Quantisation;
2) Coding Block (CB)-Level Full Colour Perceptual Quantisation for 4:4:4 Video Data;
3) Coding Unit (CU)-Level Cross-Colour Channel Perceptually Adaptive Quantisation;
4) Transform Coefficient-Level Perceptual Quantisation.
Thesis
Lee Prangnell, "Visually Lossless Coding for the HEVC Standard: Efficient Perceptual Quantisation Contributions for HEVC", PhD Thesis, Department of Computer Science, University of Warwick, September 2017 (PDF).