This includes trellis optimization, forward/inverse transform,
quantization, tokenization and stuffing functions.
Change-Id: Ibd34132e1bf0cd667671a57b3f25b3d361b9bf8a
Change-Id: I22aa803ffff330622cdb77277e7b196a9766f882
Some cleanups of the transform size and type selection logic.
Change-Id: If2e9675459482242cf83b4f7de7634505e3f6dac
Enable ADST/DCT of dimension 16x16 for I16X16 modes. This change
provides benefits mostly for HD sequences.
Set up the framework for a selectable transform dimension.
Also allow a quantization-parameter threshold to control the use of the
hybrid transform. (This is currently disabled by keeping the threshold
above the quantization parameter; adaptive thresholding can be built on
top of this, which should further improve coding performance. A minimal
sketch of the gate appears after the results below.)
The coding performance gains (relative to the codec with all other
configuration settings enabled) are
derf: 0.013
yt: 0.086
hd: 0.198
std-hd: 0.501
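A minimal sketch of the threshold gate described above, with
illustrative type and function names (not the branch's actual API):

    typedef enum { DCT_DCT, ADST_DCT } TX_TYPE;

    /* Hypothetical gate: use the hybrid transform for I16X16 modes only
     * when QP falls below the threshold. Keeping qp_thresh above every
     * valid QP disables the hybrid path, as noted above. */
    static TX_TYPE select_i16x16_tx_type(int is_i16x16_mode, int qp,
                                         int qp_thresh) {
      return (is_i16x16_mode && qp < qp_thresh) ? ADST_DCT : DCT_DCT;
    }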
Change-Id: Ibb4263a61fc74e0b3c345f54d73e8c73552bf926
Change-Id: I7524883fb29f42303fb46a5bc6772fbcf8781d1d
Fix further cases of inconsistent naming conventions.
Change-Id: Id3411ecec6f01a4c889268a00f0c9fd5a92ea143
Also add warnings for undefined macros in the C preprocessor.
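(Assuming the standard flag for this, -Wundef, which warns when an
undefined macro is evaluated in a preprocessor conditional:)

    /* With -Wundef the compiler warns here if CONFIG_T8X8 was never
     * defined, instead of silently evaluating it as 0. */
    #if CONFIG_T8X8
    /* 8x8 transform code */
    #endif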
Change-Id: I1ec30e57c5a49fb72151a4cf140d7eeb0fb1d779
Enabled on all 16x16 intra/inter modes.
Features:
- Butterfly fDCT/iDCT (see the sketch after the results below)
- Loop filter does not filter internal edges with 16x16
- Optimize coefficient function
- Update coefficient probability function
- RD
- Entropy stats
- 16x16 is a config option
Not yet tested in combination with other experiments.
hd: 2.60%
std-hd: 2.43%
yt: 1.32%
derf: 0.60%
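To illustrate the butterfly idea (this is the textbook even/odd
decomposition for a 4-point DCT-II, not the codec's actual integer
16x16 transform; a 16-point butterfly applies the same split
recursively):

    /* Unnormalized 4-point DCT-II in one butterfly stage: sums feed the
     * even outputs X0/X2, differences feed the odd outputs X1/X3. */
    static void fdct4_butterfly(const double x[4], double X[4]) {
      const double c1 = 0.92387953251128674;  /* cos(pi/8)   */
      const double c3 = 0.38268343236508978;  /* cos(3*pi/8) */
      const double c4 = 0.70710678118654752;  /* cos(pi/4)   */
      const double e0 = x[0] + x[3], e1 = x[1] + x[2];
      const double o0 = x[0] - x[3], o1 = x[1] - x[2];
      X[0] = e0 + e1;
      X[2] = c4 * (e0 - e1);
      X[1] = c1 * o0 + c3 * o1;
      X[3] = c3 * o0 - c1 * o1;
    }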
Change-Id: I96fb090517c30c5da84bad4fae602c3ec0c58b1c
Approximate the Google style guide[1] so that there's a written
document to follow and tools to check compliance[2].
[1]: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
[2]: http://google-styleguide.googlecode.com/svn/trunk/cpplint/cpplint.py
Change-Id: Idf40e3d8dddcc72150f6af127b13e5dab838685f
1. block types
There are only three types of blocks for 8x8-transformed MBs; the Y
block with DC does not exist, since all MBs using the 8x8 transform
carry the 2nd-order Haar transform. This commit introduces a new macro,
BLOCK_TYPES_8X8, to reflect that fact (see the sketch after item 3).
2. context counters
This commit also fixes the mixed use of context_counters between 4x4-
and 8x8-transformed MBs. That mixed use suggests the existing context
probabilities were not properly generated from 8x8-transformed MBs.
3. redundant collecting in recoding
The commit also corrects the code that accumulates entropy stats,
making sure stats are collected only for the final packing, not during
the recode loop.
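A sketch of the resulting macro pair (values per the usual VP8 plane
layout; treat as illustrative):

    /* 4x4 path: Y without DC, Y2, UV, and Y with DC. With the 2nd-order
     * transform always present for 8x8 MBs, the last type disappears. */
    #define BLOCK_TYPES     4
    #define BLOCK_TYPES_8X8 3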
Change-Id: I029f09f8f60bd0c3240cc392ff5c6d05435e322c
Removes a set of spurious declarations that were inadvertently checked
in.
Change-Id: I2f80b6b66d2ec9ea667c810eaf1a6e7d52478c67
Use contextual coding of the mb_skip_coeff flag, based on the
values of this flag from the left and above. There is a small
improvement of about 0.15% on Derf:
http://www.corp.google.com/~debargha/vp8_results/mbskipcontext.html
Refactored to use pred_common.c by adding a new context type.
Results on HD set (about 0.66% improvement):
http://www.corp.google.com/~debargha/vp8_results/mbskipcontext_hd.html
Including the missing refactoring to use the pred_common utilities.
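A minimal sketch of the context derivation, with illustrative names
(the actual code lives in pred_common.c):

    typedef struct { int mb_skip_coeff; } MB_INFO;  /* illustrative */

    /* Context 0..2: how many of the above/left neighbours were skipped.
     * The flag is then coded with a probability indexed by this context. */
    static int get_mb_skip_context(const MB_INFO *above, const MB_INFO *left) {
      return (above ? above->mb_skip_coeff : 0) +
             (left ? left->mb_skip_coeff : 0);
    }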
Change-Id: I95373382d429b5a59610d77f69a0fea2be628278
The commit changes the code to compute UV intra RD estimates for 4x4
and 8x8 separately, so that the mode decision for each MB mode uses the
estimate matching its transform size. After many other changes related
to the 8x8 quantizer zbin boost and zbin_mode_boost, this change helps
the HD set (with 8x8) by around 0.13% (avg 0.13%, glb 0.13%, ssim 0.17%).
The commit also includes a few changes to eliminate compiler warnings.
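In effect (illustrative names, not the actual code):

    #include <stdint.h>

    /* Keep one UV intra RD estimate per transform size and consult the
     * one matching the candidate MB mode's transform. */
    static int64_t uv_intra_rd_for_mode(int mode_uses_8x8_tx,
                                        int64_t rd_uv_4x4,
                                        int64_t rd_uv_8x8) {
      return mode_uses_8x8_tx ? rd_uv_8x8 : rd_uv_4x4;
    }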
Change-Id: Ibab35dad44820c87e6b44799c66f8d519cc37344
Change-Id: I8e9b6b154e1a0d0cb42d596366380d69c00ac15f
With this fix, the experimental branch now builds and encodes
correctly with each of the following two sets of configure options:
--enable-experimental --enable-t8x8
--enable-experimental
Change-Id: I3147c33c503fe713a85fd371e4f1a974805778bf
Please refer to previous commit messages for detailed info:
https://on2-git.corp.google.com/g/#change,5940
https://on2-git.corp.google.com/g/#change,6045
Change-Id: I8b16992f2f69c5a808ad40a3e32ef589cce7c59d
There were many instances in the code of vp8_coef_tokens and
vp8_coef_tokens-1; vp8_coef_tokens was a preprocessor macro despite its
lowercase, non-macro naming. Replace these with MAX_ENTROPY_TOKENS and
ENTROPY_NODES, respectively.
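For reference, the named constants (values as in libvpx's entropy.h):

    /* 12 coefficient tokens (ZERO through the CAT tokens plus EOB); a
     * binary coding tree with 12 leaves has 11 internal nodes. */
    #define MAX_ENTROPY_TOKENS 12
    #define ENTROPY_NODES      11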
Change-Id: I72c4f6c7634c94e1fa066cd511471e5592c748da
Allow compiling without adding vp8/{common,encoder,decoder} to the
include paths.
Change-Id: Ifeb5dac351cdfadcd659736f5158b315a0030b6c
Per John's previous change, shrink TOKENEXTRA from 20 to 8 bytes
original: b7b1e6fb
reverted: 41f4458a
Also drop an unused field from vp8_extra_bit_struct.
Update ARM ASM to deal with this change. In particular, Extra is signed
and needs to be sign-extended when loaded.
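Roughly the resulting layout (reconstructed from the description, so
treat as a sketch; on a 32-bit target: 4 + 2 + 1 + 1 = 8 bytes):

    typedef unsigned char vp8_prob;

    typedef struct {
      const vp8_prob *context_tree;  /* 4 bytes on 32-bit targets */
      short           Extra;         /* signed: sign-extend when loading */
      unsigned char   Token;
      unsigned char   skip_eob_node;
    } TOKENEXTRA;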
Change-Id: Ibd0ddc058432bc7bb09222d6ce4ef77e93a30b41
This reverts commit b7b1e6fb55c6b12ccd078a20cb9855f6734931b5. Previous
fix is incomplete, breaks ARM. Itchy submit finger.
Change-Id: I939dc0d3bf4173cf951c1d152338ab6ea2184bb9
Change the size of structure elements to reduce memory utilization.
Removed the 'section' member entirely, as it is set but never read.
Change-Id: Iad043830392fb4168cb3cd6075fb0eb70c7f691c
This patch reduces the size of the global tables maintained by the
tokenizer to 16k from 80k-96k. See issue #177.
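The numbers are consistent with narrowing the per-value entries, e.g.
(illustrative types and names) for DCT values in [-2048, 2047]:

    #include <stdint.h>

    /* Two int16 fields instead of two ints: 4096 entries x 4 bytes
     * per entry = 16 KiB per table. */
    typedef struct { int16_t Token; int16_t Extra; } TOKENVALUE;
    static TOKENVALUE dct_value_tokens[4096];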
Change-Id: If0275d5f28389af11ac83c5d929d1157cde90fbe
Changes 'The VP8 project' to 'The WebM project', for consistency
with other webmproject.org repositories.
Fixes issue #97.
Change-Id: I37c13ed5fbdb9d334ceef71c6350e9febed9bbba
Replace the exponential search for optimal rounding during
quantization with a linear Viterbi trellis and enable it
by default when using --best.
Right now this operates on top of the output of the adaptive
zero-bin quantizer in vp8_regular_quantize_b() and gives a small
gain.
It can be tested as a replacement for that quantizer by
enabling the call to vp8_strict_quantize_b(), which uses
normal rounding and no zero bin offset.
Ultimately, the quantizer will have to become a function of lambda
in order to take advantage of activity masking, since there is
limited ability to change the quantization factor itself.
However, currently vp8_strict_quantize_b() plus the trellis
quantizer (which is lambda-dependent) loses to
vp8_regular_quantize_b() alone (which is not) on my test clip.
Patch Set 3:
Fix an issue related to the cost evaluation of successor
states when a coefficient is reduced to zero. With this
issue fixed, the trellis search now almost exactly matches
the exponential search.
Patch Set 2:
Overall, the goal of this patch set is to make the "trellis" search
produce encodings that match the exponential-search version. There are
three main differences between patch sets 2 and 1:
a. Patch set 1 did not properly account for the scale of the 2nd-order
error, so patch set 2 disables it altogether for 2nd-order blocks.
b. Patch set 1 was not consistent about when to enable the
quantization optimization. Patch set 2 restores the condition for
consistency.
c. Patch set 1 checked quantized levels L-1 and L for any input
coefficient that was quantized to L. Patch set 2 limits the candidates
to those coefficients that were rounded up to L. It is worth noting
that a strategy checking L and L+1 for coefficients that were truncated
down to L might also work.
(a and b get the trellis quantizer to essentially match the
exponential search on all mid/low-rate encodings of the CIF set;
without them, trellis quant can hurt PSNR by 0.2 to 0.3 dB at 200 kbps
on some CIF clips.)
(c gets the trellis quantizer to match the exponential search at Q0
encoding; without it, trellis quant can be 1.5 to 2 dB lower for
encodings with fixed Q at 0 on most derf CIF clips.)
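A self-contained sketch of the idea with a toy rate model (the real
implementation uses the codec's context-dependent token costs and
dequant tables): each coefficient keeps two candidates, the
regular-rounding level and the level one step closer to zero, and a
two-state Viterbi pass picks the combination minimizing
distortion + lambda * rate. Patch set 2 additionally restricts the
second candidate to coefficients that were rounded up, per note (c).

    #include <math.h>
    #include <stdlib.h>

    #define N 16  /* coefficients of a 4x4 block in scan order */

    /* Toy rate model standing in for context-dependent token costs:
     * zeros are cheap, especially following another zero. */
    static double rate_bits(int level, int prev_level) {
      if (level == 0) return prev_level == 0 ? 0.5 : 1.0;
      return 2.0 + log2(1.0 + abs(level));
    }

    /* Two-state Viterbi over the candidate levels, minimizing
     * distortion + lambda * rate along the scan. */
    static void trellis_quantize(const int coeff[N], int step,
                                 double lambda, int out[N]) {
      int cand[N][2], bp[N][2];
      double cost[2] = { 0.0, 0.0 }, ncost[2];

      for (int i = 0; i < N; i++) {
        const int a = abs(coeff[i]), sgn = coeff[i] < 0 ? -1 : 1;
        const int l = (a + step / 2) / step;    /* regular rounding */
        cand[i][0] = sgn * l;
        cand[i][1] = sgn * (l > 0 ? l - 1 : 0); /* one step toward zero */

        for (int c = 0; c < 2; c++) {
          const double err = coeff[i] - cand[i][c] * step;
          ncost[c] = HUGE_VAL;
          for (int p = 0; p < (i ? 2 : 1); p++) {
            const int prev = i ? cand[i - 1][p] : 0;
            const double t = cost[p] + err * err +
                             lambda * rate_bits(cand[i][c], prev);
            if (t < ncost[c]) { ncost[c] = t; bp[i][c] = p; }
          }
        }
        cost[0] = ncost[0];
        cost[1] = ncost[1];
      }

      /* Trace back from the cheaper terminal state. */
      int s = cost[1] < cost[0];
      for (int i = N - 1; i >= 0; i--) {
        out[i] = cand[i][s];
        s = bp[i][s];
      }
    }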
Change-Id: Ib1a043b665d75fbf00cb0257b7c18e90eebab95e
When the license headers were updated, they accidentally contained
trailing whitespace, so unfortunately we have to touch all the files
again.
Change-Id: I236c05fade06589e417179c0444cb39b09e4200d
Change-Id: Ieebea089095d9073b3a94932791099f614ce120c