Abstract
Background: High-Efficiency Video Coding (HEVC) is a recent video compression standard that provides better compression performance than its predecessor, H.264/AVC. However, the computational complexity of an HEVC encoder is much higher than that of an H.264/AVC encoder, which makes HEVC less attractive for real-time applications and for devices with limited resources (e.g., low memory and low processing power). The increased computational complexity of HEVC is partly due to its variable-size Transform Unit (TU) selection algorithm, which successively performs transform operations with TUs of different sizes before selecting the optimal TU size. In this paper, a fast TU size selection method is proposed to reduce the computational complexity of an HEVC encoder.
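To illustrate the cost this paper targets, the exhaustive selection can be sketched as follows (a minimal, simplified illustration, not the HM implementation; it assumes a flat search over the four HEVC TU sizes, and transform_and_get_rd_cost is a hypothetical stand-in for the encoder's transform, quantization, and RD-cost computation):

    TU_SIZES = [32, 16, 8, 4]  # TU sizes supported by HEVC (32x32 down to 4x4)

    def select_tu_size_exhaustive(residual_block, transform_and_get_rd_cost):
        # Try every candidate TU size and keep the one with the lowest RD cost.
        best_size, best_cost = None, float("inf")
        for size in TU_SIZES:
            # One full transform + RD evaluation per candidate size.
            cost = transform_and_get_rd_cost(residual_block, size)
            if cost < best_cost:
                best_size, best_cost = size, cost
        return best_size  # optimal size, at the price of len(TU_SIZES) transforms

The point of the sketch is that the optimal size is only known after transforming the residual at every candidate size, which is what a fast TU size decision tries to avoid.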
Methods: Bayesian decision theory is used to predict the TU size during encoding. This is done by exploiting the TU size decisions made at a previous temporal level and by modeling the relationship between the TU size and the Rate-Distortion (RD) cost.
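In outline, such a prediction amounts to the standard Bayesian decision rule (a general sketch implied by the description above; the exact prior and likelihood models are defined in the paper body):

    \[ s^{*} = \arg\max_{s \in S} P(s \mid c) = \arg\max_{s \in S} p(c \mid s)\, P(s), \]

where \(S\) is the set of candidate TU sizes, \(c\) is the observed RD cost, \(P(s)\) is the prior on TU sizes estimated from the decisions at the previous temporal level, and \(p(c \mid s)\) is the modeled likelihood of the RD cost given the TU size.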
Results: Simulation results show that the proposed method reduces the encoding time of the latest HEVC reference encoder by 16.21% on average without any noticeable loss in compression efficiency. The proposed algorithm also reduces the number of transform operations by 44.98% on average.
Conclusion: In this paper, a novel fast TU size selection scheme for HEVC is proposed. The proposed technique outperforms both the latest HEVC reference software (HM 16.0) and other state-of-the-art techniques in terms of time complexity, while achieving compression performance comparable to that of HM 16.0.
Keywords: Video compression, high efficiency video coding, transform unit, video compression standard, fast encoding technique, Bayesian decision theory.