
Segall, K. Misra (Sharp)]


This contribution notes that the dequantization formula currently used in HM 4 has the property that the reconstructed values are not always symmetrically distributed about zero. A modification to the dequantization formula is proposed which creates reconstructed values symmetrically distributed about zero under all conditions. The modified dequantization process operates on the absolute value of the levels and then applies the sign, rather than operating on signed levels. For quantization parameters of the common test conditions, the dequantization results are unchanged for any level or block size.

The dequantization formula presently used in HM is based on JCTVC-E243. This formula is summarized below:

Definitions:

B = source bit width (8 or 10 bit in the experiments described below)
DB = B - 8 (internal bit-depth increase relative to 8-bit input)
N = transform size
M = log2(N)
Q = f(QP%6), where f(x) = {26214, 23302, 20560, 18396, 16384, 14564}, x = 0, …, 5
IQ = g(QP%6), where g(x) = {40, 45, 51, 57, 64, 72}, x = 0, …, 5

coeffQ = ((level * IQ << (QP/6)) + offset) >> (M - 1 + DB), where offset = 1 << (M - 2 + DB)

It is asserted that the worst-case dequantization has a 17-bit signed multiply range.

It was remarked that this may imply that the quantized level values also have a 17-bit range, which seems troubling. The symmetric rounding approach does not solve that issue.

We may want clipping at the input of the dequantization operation (see the discussion of G719).

We would seem to want such a scheme if we have adaptive offset addition.

It was noted that imposing symmetry at the input of the first stage of the inverse transform does not mean that we will have symmetric behaviour at the output of the final inverse transform.

Proposes to instead use:

coeffQ = Sgn(level) * ((|level| * IQ << (QP/6)) + offset) >> (M - 1 + DB), where offset = 1 << (M - 2 + DB)

Our understanding is that symmetric reconstruction is already what is proposed in the adaptive offset schemes.
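For illustration, a minimal sketch (not the HM source code) contrasting the two formulas is given below; the IQ table g(), an 8-bit input (DB = 0), an arithmetic right shift for negative operands, and the chosen QP and transform size are assumptions made for this example only.

```c
/* A minimal sketch (not the HM source) contrasting the current and proposed
 * dequantization; the IQ table g(), 8-bit input (DB = 0) and an arithmetic
 * right shift for negative operands are assumptions made for this example. */
#include <stdio.h>
#include <stdlib.h>

static const int IQ_TAB[6] = {40, 45, 51, 57, 64, 72};   /* g(QP % 6) */

/* Current formula: operates directly on the signed level.  The arithmetic
 * right shift rounds negative values toward -infinity, so the reconstruction
 * is not always symmetric about zero. */
static int dequant_signed(int level, int qp, int M, int DB)
{
    int shift  = M - 1 + DB;
    int offset = 1 << (shift - 1);                        /* 1 << (M - 2 + DB) */
    int IQ     = IQ_TAB[qp % 6];
    /* "level * IQ << (QP/6)" written as a multiply so it stays well defined
     * in C when level is negative. */
    return (level * IQ * (1 << (qp / 6)) + offset) >> shift;
}

/* Proposed formula: dequantize |level| and re-apply the sign, which yields
 * values symmetrically distributed about zero under all conditions. */
static int dequant_symmetric(int level, int qp, int M, int DB)
{
    int shift  = M - 1 + DB;
    int offset = 1 << (shift - 1);
    int IQ     = IQ_TAB[qp % 6];
    int mag    = (abs(level) * IQ * (1 << (qp / 6)) + offset) >> shift;
    return (level < 0) ? -mag : mag;
}

int main(void)
{
    /* QP = 3 (a low QP, outside the common test conditions, where the rounding
     * term actually matters), 8x8 transform (M = 3), 8-bit input (DB = 0). */
    for (int level = -2; level <= 2; level++)
        printf("level %+d -> current %+d, proposed %+d\n", level,
               dequant_signed(level, 3, 3, 0),
               dequant_symmetric(level, 3, 3, 0));
    return 0;
}
```

With QP = 3 and an 8x8 transform, the current formula reconstructs -28/+29 for levels -2/+2, while the proposed formula gives the symmetric -29/+29; for the QPs of the common test conditions the two formulas coincide, as stated above.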

It doesn’t seem clear that we would need this change if we do not use an adaptive offset scheme.

For further study.

JCTVC-G723 Support for finer QP granularity in HEVC [K. Panusopone, A. Luthra, Xue Fang, J. H. Kim, L. Wang (Motorola Mobility)]

Closely related to G721; does not need further detailed review.

5.11 Alternative coding modes

5.11.1 Lossless coding and I_PCM

JCTVC-G093 AHG22: Sample-based angular prediction (SAP) for HEVC lossless coding [M. Zhou (TI)]

Efficient lossless coding is required for real-world applications such as automotive vision, video conferencing and long-distance education. This contribution proposes to use sample-based angular intra prediction (SAP) in lossless coding mode for better coding efficiency. The proposed sample-based prediction is exactly the same as the HM4.0 block-based angular prediction in terms of prediction angles and sample interpolation, and requires no syntax or semantics changes, but differs in the decoding process in terms of reference sample selection. In the proposed method, a sample to be predicted uses its direct neighboring samples for better intra prediction accuracy. Compared to the HM4.0 anchor lossless method, which bypasses transform, quantization, de-blocking filter, SAO and ALF, the proposed method provides an average gain of 6.71% in AI-HE, 7.83% in AI-LC, 2.29% in RA-HE, 2.64% in RA-LC, 1.57% in LB-HE and 1.85% in LB-LC. For class F sequences only, the average gain is 8.95% in AI-HE, 8.88% in AI-LC, 5.64% in RA-HE, 5.61% in RA-LC, 4.58% in LB-HE and 4.61% in LB-LC. SAP is fully parallelizable on the encoder side, and can be executed at a speed of one row or one column per cycle on the decoder side.

Presentation not uploaded

The anchor is a modified HM with transform and quantization bypass as described above; entropy coding is used as is, i.e. the contexts become spatial. Directional prediction "as is" is included here for intra coding.

The goal is to use the HM tools "as is" without major re-design.
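As a rough illustration of the sample-based idea, the hypothetical sketch below reduces SAP to the purely horizontal and vertical angles; the actual proposal reuses all HM4.0 angular modes and their sample interpolation, which is not reproduced here.

```c
/* A hypothetical, greatly simplified sketch of sample-based prediction,
 * reduced to the purely horizontal and vertical angles. */
#include <stdio.h>

enum { SAP_HOR, SAP_VER };

/* Compute the per-sample lossless residual of one NxN block.  Each sample is
 * predicted from its direct left (horizontal) or direct top (vertical)
 * neighbour.  rec points at the top-left sample of the block inside a larger
 * reconstructed (here lossless, hence original) picture, so the neighbours at
 * x-1 / y-1 are available.  On the decoder side the same relation is applied
 * one row or one column at a time. */
static void sap_residual(const int *rec, int stride, int N, int mode, int *res)
{
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++) {
            int pred = (mode == SAP_HOR) ? rec[y * stride + (x - 1)]
                                         : rec[(y - 1) * stride + x];
            res[y * N + x] = rec[y * stride + x] - pred;
        }
}

int main(void)
{
    /* 2x2 block located at offset (1,1) of a 3x3 reconstructed area. */
    int rec[9] = { 10, 12, 14,
                   11, 13, 15,
                   12, 16, 20 };
    int res[4];
    sap_residual(&rec[1 * 3 + 1], 3, 2, SAP_HOR, res);
    printf("%d %d %d %d\n", res[0], res[1], res[2], res[3]);   /* 2 2 4 4 */
    return 0;
}
```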

JCTVC-G115 AHG22: Cross-verification of TI's Sample-based angular prediction (SAP) for HEVC lossless coding (JCTVC-G93) [K. Chono, H. Aoki (NEC)]

JCTVC-G118 Potential enhancement of signaling of I_PCM blocks [K. Chono, H. Aoki (NEC)]

This contribution presents the concept-level idea of a method for transmitting successive I_PCM blocks.

The method counts the number of successive I_PCM blocks in the Z-scan order in the same layer of the CTB and signals that number minus 1 at the PU header of the first I_PCM block. PCM sample data of the subsequent I_PCM blocks immediately follow that of the first I_PCM block; that is, signaling of the PU headers of the subsequent I_PCM blocks is skipped. The method thereby enhances the throughput of transmitting PCM sample data of successive I_PCM blocks while reducing some bits of side information.

Currently, an I_PCM block consists of variable-length header data and a fixed-length PCM payload.

The suggestion is to bundle several headers and several payloads together, to increase the length of the concatenated fixed-length data. The maximum number of concatenated I_PCM blocks is 4.
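A minimal sketch of how a decoder might parse such a bundled run is given below; the bit-reader, the 2-bit run-length field and its name, and the fixed block size and bit depth are assumptions for illustration, not syntax from the contribution.

```c
/* A minimal sketch (names and field sizes are illustrative, not the proposed
 * syntax) of parsing a run of bundled I_PCM blocks: the first PU header
 * carries the run length minus 1, and the PCM payloads of all blocks in the
 * run follow back-to-back, so the PU headers of blocks 2..N are skipped. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Minimal MSB-first bit reader over a byte buffer. */
typedef struct { const uint8_t *buf; size_t pos; } BitReader;

static uint32_t read_bits(BitReader *br, int n)
{
    uint32_t v = 0;
    for (int i = 0; i < n; i++, br->pos++)
        v = (v << 1) | ((br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1);
    return v;
}

/* Parse one bundled run; block size and PCM bit depth are fixed for brevity. */
static int parse_ipcm_run(BitReader *br, int block_samples, int pcm_bit_depth,
                          uint16_t *out)
{
    int run = (int)read_bits(br, 2) + 1;   /* hypothetical run-length field, max 4 */
    for (int b = 0; b < run; b++)
        for (int s = 0; s < block_samples; s++)
            out[b * block_samples + s] = (uint16_t)read_bits(br, pcm_bit_depth);
    return run;
}

int main(void)
{
    /* Toy stream: run length 2, then 2 blocks of 3 samples, 8 bits each
     * (10 20 30 | 40 50 60). */
    const uint8_t stream[] = { 0x42, 0x85, 0x07, 0x8A, 0x0C, 0x8F, 0x00 };
    BitReader br = { stream, 0 };
    uint16_t samples[8];
    int run = parse_ipcm_run(&br, 3, 8, samples);
    for (int i = 0; i < run * 3; i++)
        printf("%u ", (unsigned)samples[i]);
    printf("\n");
    return 0;
}
```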

The main advantage is more deterministic buffer behaviour.

A better understanding of the application is needed – further study (in an AHG – lossless coding? or other?)

JCTVC-G268 AHG22: Lossless Transforms for Lossless Coding [W. Dai, M. Krishnan, J. Topiwala, P. Topiwala (FastVDO)]

This submission is based on proposal G266, “Lossless Core Transforms for HEVC,” and focuses on the lossless transforms themselves. This work is submitted in reference to AHG22 on Lossless Coding. A coding framework has been created in that AHG for development purposes, which at this time includes prediction and entropy coding, but bypasses transform and quantization. It is the purpose of this proposal to supply lossless transforms, which can assist in data decorrelation and potentially improve the performance of the lossless coding framework. Such experiments will be conducted and reported in the next meeting cycle.

Explains ways to construct lossless transforms, based on lifting.
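A minimal sketch of the lifting principle, using the well-known 2-point S-transform, is shown below; the transforms actually proposed in G268/G266 are not reproduced here, and an arithmetic right shift (floor division by 2) is assumed for negative values.

```c
/* A minimal sketch of the lifting principle using the 2-point S-transform,
 * which is exactly invertible in integer arithmetic. */
#include <stdio.h>

/* Forward lifting: prediction step (difference), then update step (sum that
 * reuses the halved difference, i.e. an integer average). */
static void s_transform_fwd(int x0, int x1, int *s, int *d)
{
    *d = x1 - x0;
    *s = x0 + (*d >> 1);
}

/* Inverse lifting: undo the steps in reverse order; reconstruction is exact. */
static void s_transform_inv(int s, int d, int *x0, int *x1)
{
    *x0 = s - (d >> 1);
    *x1 = d + *x0;
}

int main(void)
{
    int s, d, y0, y1;
    s_transform_fwd(37, -122, &s, &d);
    s_transform_inv(s, d, &y0, &y1);
    printf("s=%d d=%d  reconstructed: %d %d\n", s, d, y0, y1);   /* 37 -122 */
    return 0;
}
```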

JCTVC-G664 AHG22: A lossless coding solution for HEVC [W. Gao, M. Jiang, Y. He, J. Song, H. Yu (Huawei Technologies)]

This contribution proposes a lossless coding solution that only involves a few modifications to the current HEVC WD. To achieve lossless coding in both intra and inter coding operations, the transform and quantization modules are bypassed. Due to the nature of lossless coding, the existing intra predictions are extended to pixel-based prediction (DPCM), taking into account that there is no transform applied to the prediction residuals. Furthermore, since the statistical properties of prediction residuals are quite different from those of transform coefficients, the CABAC coding of intra prediction residuals is also modified accordingly.

No additional flag is introduced in this proposal; the lossless coding mode is signaled by QPY=0. As a result, no change to the HEVC syntax specification is needed, and the lossless mode can be applied to the entire picture or to individual CUs conveniently.

A combination of CABAC and Golomb-Rice coding is used.
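As a generic illustration of Golomb-Rice binarization (not the exact scheme used in the proposal), a minimal sketch:

```c
/* A generic sketch of Golomb-Rice binarization: a unary-coded quotient
 * followed by a k-bit remainder. */
#include <stdio.h>

/* Print the Golomb-Rice code of a non-negative value v with Rice parameter k. */
static void golomb_rice_encode(unsigned v, unsigned k)
{
    unsigned q = v >> k;                      /* quotient, coded in unary */
    for (unsigned i = 0; i < q; i++)
        putchar('1');
    putchar('0');                             /* unary terminator */
    for (int b = (int)k - 1; b >= 0; b--)     /* remainder, k fixed bits */
        putchar('0' + ((v >> b) & 1));
    putchar('\n');
}

int main(void)
{
    golomb_rice_encode(9, 2);   /* quotient 2, remainder 1 -> "11001" */
    return 0;
}
```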

The anchor is HM with QP=0, which is still lossy. Compared to that, the proposed method saves roughly 9.5% for AI and roughly 7% for LD-B.

JCTVC-G885 Cross-check of JCTVC-G664 [B. Li (USTC), J. Xu (Microsoft)] [late]

Continue study in AHG.

It is unclear whether lossless coding would be supported in a profile/level. If it can be done “for free” it may be attractive, but this may be in contradiction with the bitrate limitations that are usually defined in levels.

5.11.2 Variable resolution coding

JCTVC-G264 AHG18: Adaptive Resolution Coding (ARC) [T. Davies (Cisco), P. Topiwala (FastVDO)]

This contribution reports the results of further investigations into resolution adaptation. Adaptive Resolution Coding (ARC) is described, in which resolution is selected dynamically by means of a threshold test, and the compression performance is investigated in a rapid bit-rate reduction/high QP scenario against simply increasing QP with and without pre-filtering. Mean luma BD-rate gains range from 6.9-14.3% in Class A and 3.5-11.5% in Class B, but performance depends greatly on picture content and configuration. Individual luma BD-rate gains can range from negligible up to 30%. Typically chroma has lower objective quality by a fixed penalty of around 0.5 dB, but this is asserted to have little subjective impact. The contribution includes modified Working Draft text.

Resolution can be changed in both directions (up or down)

Heuristic criterion based on PSNR

Results for high QP

PSNR is computed at high resolution

Could the same be achieved by pre-filtering? It is said that some investigations on this were performed, but the same performance was never achieved

Upsampling and downsampling filters would need to be normative (for the results, down- and upsampling filters were aligned) – in contrast, in scalable coding only upsampling needs to be defined

An application scenario is transmission with large variation of bandwidth

A worst case would be when the resolution changes every frame, which would require a more complex decoder – it would be necessary to restrict the frequency of changes

Would the decoder need to keep both resolutions all the time? Can down- and upsampling be performed on the fly for any frame structure?

One expert mentions that upsampling and downsampling filters should be consistent with a possible scalable extension. Studying the relation with scalable coding should be put into the mandates of the AHG.

From Tue. 29th discussion:

Normative (in-loop) down- and upsampling is only necessary at the switching points (which should be rare). Therefore, it may not be necessary in the context of ARC to define sophisticated decimation/interpolation filters.

One expert mentions that reduced DPB memory could also be an advantage, where only lower resolution references are stored, but prediction would be performed at high resolution.
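As a rough illustration of aligned down- and upsampling at a switching point, the sketch below uses a simple [1 2 1]/4 decimation filter and bilinear interpolation on a 1-D line; these filters are illustrative assumptions only and are not taken from the contributions.

```c
/* An illustrative sketch of aligned 2:1 decimation and 1:2 interpolation on a
 * 1-D line; the filters are illustrative assumptions only. */
#include <stdio.h>

#define CLIP8(x) ((x) < 0 ? 0 : ((x) > 255 ? 255 : (x)))

/* Downsample by 2: [1 2 1]/4 filter centred on the even input positions,
 * with edge replication at the borders. */
static void downsample_2to1(const int *in, int n, int *out /* n/2 samples */)
{
    for (int i = 0; i < n / 2; i++) {
        int c = 2 * i;
        int l = (c > 0)     ? in[c - 1] : in[c];
        int r = (c + 1 < n) ? in[c + 1] : in[c];
        out[i] = CLIP8((l + 2 * in[c] + r + 2) >> 2);
    }
}

/* Upsample by 2: copy at even output positions, bilinear average at odd ones. */
static void upsample_1to2(const int *in, int m, int *out /* 2*m samples */)
{
    for (int i = 0; i < m; i++) {
        int nxt = (i + 1 < m) ? in[i + 1] : in[i];
        out[2 * i]     = in[i];
        out[2 * i + 1] = CLIP8((in[i] + nxt + 1) >> 1);
    }
}

int main(void)
{
    int line[8] = {16, 20, 40, 80, 120, 160, 200, 240};
    int low[4], rec[8];
    downsample_2to1(line, 8, low);
    upsample_1to2(low, 4, rec);
    for (int i = 0; i < 8; i++)
        printf("%d -> %d\n", line[i], rec[i]);
    return 0;
}
```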

JCTVC-G597 AHG 18: Cross-check of Adaptive Resolution Coding (ARC) (JCTVC-G264) [Glenn Van Wallendael, Jan De Cock, Rik Van de Walle (Ghent Univ.), David Flynn (BBC)] [late]

JCTVC-G329 AHG18: Comments on the Implementations of Resolution Adaption on HEVC [M. Li, P. Wu (ZTE)]

This document presents the analyses of signaling and potential decoding rules for motion compensated prediction (MCP) if resolution adaptation (RA) is integrated into the current HEVC design. The designs in the existing H.263 and MPEG-4 Part 2 standards, especially with two modes involving resolution changes, i.e., reduced resolution update (RRU) and dynamic resolution conversion (DRC), are studied as the references for identifying the areas where the standard texts and the coding algorithms will require specific changes to enable resolution adaptation as a video coding feature. Therefore, firstly, some key check items on MCP are listed, and the changes in the syntax and coding rules brought by RRU and DRC to H.263 and MPEG-4 Part 2 are analyzed, respectively. Then some heuristic principles are described based on the common features distilled from the designs of RRU and DRC in order to achieve compact HEVC syntax design for RA. Finally, following these principles, all the key issues regarding MCP with RA integration into HEVC are elaborated with some recommendations.

Presented Tuesday 29th afternoon

Contribution meant as an information document. It is suggested to change the resolution of the reference pictures (not of the residual pictures as in the RRU of H.263) – similar to G264.

No presenter was available Friday 2235 hours and Monday 28th 2025 hours.

JCTVC-G715 AHG18/21: Absolute signaling for resolution switching [K. Misra, S. Deshpande, L. Kerofsky, A. Segall (Sharp)] [late]

This document describes a technique for signaling which reference picture resolutions are to be maintained in the decoded picture buffer.

Presented Monday 28th evening

Relates to DPB signaling – which resolution of the reference picture should be stored in the DPB: original, subsampled, or both resolutions.

Does not relate to how the resolution switching is managed at the coding layer.

JCTVC-G862 Advanced Resampling Filters For HEVC Applications [W. Dai, M.
