Age | Commit message | Author |
|
|
The over-quant code was added to VP8 after the bitstream
freeze to allow compression to lower data rates.
In VP9 the real quantizer range has been greatly
extended anyway.
Change-Id: I5d384fa5e9a83ef75a3df34ee30627bd21901526
|
|
This patch includes the 4x4, 8x8, and 16x16 forward butterfly ADST/DCT
hybrid transforms. The kernel of the 4x4 ADST is sin((2k+1)*(n+1)/(2N+1));
the kernel of the 8x8/16x16 ADST is of the form sin((2k+1)*(2n+1)/4N).
Change-Id: I8f1ab3843ce32eb287ab766f92e0611e1c5cb4c1
|
|
Removing redundant 'extern' keywords. Moving VP9DX_BOOL_DECODER from .h
to .c file.
Change-Id: I5a3056cb3d33db7ed3c3f4629675aa8e21014e66
|
|
Allows the user to specify whether decode errors should be fatal or not.
Also makes mismatches optionally fatal.
Change-Id: I58cff4a82f3d42f5653b91cf348a7f669377e632
|
|
Removing redundant 'extern' keyword from function declarations and making
function arguments lower case.
Change-Id: Idae9a2183b067f2b6c85ad84738d275e8bbff9d9
|
|
The information is a duplicate of "eob" in BLOCKD.
Change-Id: Ia6416273bd004611da801e4bfa6e2d328d6f02a3
|
|
Refactors the switchable filter search in the rd loop to
improve encode speed.
Uses a piecewise approximation to a closed form expression to estimate
rd cost for a Laplacian source with a given variance and quantization
step-size.
About 40% encode time reduction is achieved.
Results (on a Feb 12 baseline) show a slight drop:
derf: -0.019%
yt: +0.010%
std-hd: -0.162%
hd: -0.050%
Change-Id: Ie861badf5bba1e3b1052e29a0ef1b7e256edbcd0
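The piecewise approximation idea can be illustrated with a generic piecewise-linear lookup; the knot values below are made up for illustration and are not libvpx's actual rd model:

```c
/* Sketch only: sample a cost curve at a few knots in
 * x = variance / qstep^2 and interpolate linearly between them,
 * avoiding a closed-form evaluation in the inner rd loop.
 * Knot values are hypothetical. */
static const double knot_x[] = { 0.0, 1.0, 2.0, 4.0, 8.0 };
static const double knot_y[] = { 0.0, 10.0, 16.0, 24.0, 30.0 };
#define NKNOTS 5

static double piecewise_rate(double x) {
  if (x <= knot_x[0]) return knot_y[0];
  if (x >= knot_x[NKNOTS - 1]) return knot_y[NKNOTS - 1];
  int i = 1;
  while (knot_x[i] < x) i++;  /* find the segment containing x */
  double t = (x - knot_x[i - 1]) / (knot_x[i] - knot_x[i - 1]);
  return knot_y[i - 1] + t * (knot_y[i] - knot_y[i - 1]);
}
```

Replacing per-candidate transform-and-quantize passes with a table lookup of this shape is what buys the encode-time reduction cited above.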
|
|
The issue that potentially broke the encoding process was that
the length of the token link is calculated from the total number of
tokens coded, while in high bit-rate settings this length can be
greater than the buffer length initially assigned to cpi->tok.
This patch increases the initially allocated buffer length assigned to
cpi->tok from
(mb_rows * mb_cols * 24 * 16) to (mb_rows * mb_cols * (1 + 24 * 16)).
It resolves the buffer overflow problem.
Change-Id: I8661a8d39ea0a3c24303e3f71a170787a1d5b1df
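A minimal sketch of the allocation arithmetic above; the function name is illustrative, not the actual libvpx code:

```c
#include <stddef.h>

/* Sketch only: the "+ 1" adds one extra token slot per macroblock on
 * top of the worst case of 24 coded blocks of 16 coefficients each. */
static size_t tok_buffer_len(int mb_rows, int mb_cols) {
  return (size_t)mb_rows * mb_cols * (1 + 24 * 16);
}
```

For a 1080p frame (68 x 120 macroblocks) this grows the buffer by one slot per macroblock relative to the old 24 * 16 sizing, which is enough headroom for the end-of-block bookkeeping that overflowed.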
|
|
Change-Id: I7c6e3bebd94856b24dbe2aded7f9e04ef8bb8c08
|
|
Change-Id: I7b7b8d4fda3a23699e0c920d727f8c15d37d43aa
|
|
Fixes to make Entropy stats code work again
Change-Id: I62e380481a4eb4c170076ac6ab36f0c2b203e914
|
|
- Using multiplication and shifting instead of division in
  intra prediction.
- Maximum absolute difference is 1 for division statements
  in the d45, d27, and d63 prediction modes. However, errors can
  accumulate for large block sizes when already-predicted
  values are reused.
- Maximum number of non-matching result values in loops using
  division:
  4x4: 0/16
  8x8: 0/64
  16x16: 10/256
  32x32: 13/1024
  64x64: 122/4096
Overall PSNR
derf: 0.005
yt: -0.022
std-hd: 0.021
hd: -0.006
Change-Id: I3979a02eb6351636442c1af1e23d6c4e6ec1d01d
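The multiply-and-shift trick can be demonstrated with an exhaustive check over the 8-bit range; the constant below is illustrative and is not one of the predictors' actual constants:

```c
#include <stdlib.h>

/* Sketch only: approximate x / 3 with (x * 86) >> 8 for 8-bit x and
 * measure the worst-case error. Since 86/256 slightly exceeds 1/3,
 * the approximation can overshoot by at most 1 over this range. */
static int max_div3_error(void) {
  int max_err = 0;
  for (int x = 0; x <= 255; x++) {
    int exact = x / 3;
    int approx = (x * 86) >> 8;
    int err = abs(exact - approx);
    if (err > max_err) max_err = err;
  }
  return max_err;
}
```

This is the same off-by-at-most-1 behaviour the commit reports for the prediction modes; the residual mismatches in the larger blocks come from such errors feeding back through already-predicted values.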
|
|
The issue was caused by an out-of-order merge, which led to the wrong
functions being called in lossless mode.
Change-Id: If157729abab62954c729e0377e7f53edb7db22ca
|
|
rebased.
This patch includes the 16x16 butterfly inverse ADST/DCT hybrid
transform. It uses the variant ADST kernel
sin((2k+1)*(2n+1)/4N),
which allows a butterfly implementation.
The coding gains compared to DCT 16x16 are about 0.1% for
both derf and std-hd. It is noteworthy that in the std-hd set
many sequences gain about 0.5%, some 0.2%. There are also a few
points that show -1% to -3% performance, hence the average
comes to about 0.1%.
Change-Id: Ie80ac84cf403390f6e5d282caa58723739e5ec17
|
|
experimental
|
|
The commit changes the coding mode to lossless whenever the lowest
quantizer is chosen.
As expected, test results showed no difference for the cif and std-hd
sets, where Q0 is rarely used. For the yt and yt-hd sets, Q0 is used
for a number of clips, where this commit helped a lot at the high end.
Average over all clips in the sets:
yt: 2.391% 1.017% 1.066%
hd: 1.937% .764% .787%
Change-Id: I9fa9df8646fd70cb09ffe9e4202b86b67da16765
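A hypothetical sketch of the rule; the struct and field names are illustrative, not libvpx's:

```c
/* Sketch only: lossless coding is enabled exactly when the lowest
 * quantizer index (0) is chosen, so Q0 switches from quantized
 * transform coding to a lossless path. */
struct coder_state {
  int base_qindex; /* 0 = lowest quantizer */
  int lossless;
};

static void update_lossless_mode(struct coder_state *s) {
  s->lossless = (s->base_qindex == 0);
}
```

Tying the mode to the quantizer index keeps the decision implicit in the bitstream, which is why only clips that actually reach Q0 change.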
|
|
Change-Id: I13d8ae754827368755575dd699a087b3b11f5b16
|
|
The 32x32 value in case of splitmv was uninitialized. This led to
all kinds of erratic behaviour down the line. Also fill in dummy values
for superblocks in keyframes (the values are currently unused, but we
run into integer overflows anyway, which makes detecting bad cases
harder). Lastly, in case we did not find any RD value at all, don't
set tx_diff to INT_MIN, but instead set it to zero (since if we couldn't
find a mode, it's unlikely that any particular transform would have made
that worse or better; rather, it's likely equally bad for all tx_sizes).
Change-Id: If236fd3aa2037e5b398d03f3b1978fbbc5ce740e
|
|
experimental
|
|
This issue breaks the encoding process of the codebase. The effect
emerges only for particular test sequences at certain bit-rates and
frame limits.
Change-Id: I02e080f2a49624eef9a21c424053dc2a1d902452
|
|
Change-Id: Ie309cb1f683a51c5dfac405fb32e8e2d6ee143ed
|
|
Change-Id: I7a5314daca993d46b8666ba1ec2ff3766c1e5042
|
|
Since there is no Y2, these values are always zero. This changes the
bitstream results slightly, hence a separate commit.
Change-Id: I2f838f184341868f35113ec77ca89da53c4644e0
|
|
Change-Id: I4f46d142c2a8d1e8a880cfac63702dcbfb999b78
|
|
|
Allowing the compiler to inline.
Change-Id: I66e5caf5e7fefa68a223ff0603aa3f9e11e35dbb
|
|
Used same algorithm as others.
Change-Id: Ifdac560762aec9735cb4bb6f1dbf549e415c38a0
|
|
experimental
|
|
These allow sending partial bitstream packets over the network before
encoding of a complete frame has finished, thus lowering end-to-end
latency. The tile rows are not independent.
Change-Id: I99986595cbcbff9153e2a14f49b4aa7dee4768e2
|
|
Since the addition of the larger-scale transforms (16x16, 32x32), these
don't give a benefit at macroblock sizes anymore. At superblock sizes,
the 2nd-order transform was never used over the larger transforms. Future
work should test whether there is a benefit for that use case.
Change-Id: I90cadfc42befaf201de3eb0c4f7330c56e33330a
|
|
This patch abstracts the selection of the coefficient band
context into a function as a precursor to further experiments
with the coefficient context.
It also removes the large per-TX-size coefficient band structures
and uses a single matrix for all block sizes within the test function.
This may have an impact on quality (results to follow) but is only an
intermediate step in the process of redefining the context. Also the
quality impact will be larger initially because the default tables will
be out of step with the new banding.
In particular the 4x4 will in this case only use 7 bands. If needed we
can add back block size dependency localized within the function, but
this can follow on after the other changes to the definition of the
context.
Change-Id: Id7009c2f4f9bb1d02b861af85fd8223d4285bde5
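A sketch of what such an abstraction might look like; the table values and names are hypothetical, not the actual patch:

```c
/* Sketch only: map a coefficient's scan position to one of 7 bands
 * through a single shared table, instead of per-transform-size band
 * arrays. Positions beyond the table share the last band, which is
 * how one matrix can serve all block sizes. */
static const int coef_band[16] = {
  0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6
};

static int get_coef_band(int scan_pos) {
  return scan_pos < 16 ? coef_band[scan_pos] : coef_band[15];
}
```

Centralizing the lookup in one function is what lets later experiments redefine the banding (or restore per-size tables) without touching the tokenization call sites.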
|
|
Reverted part of change
I19981d1ef0b33e4e5732739574f367fe82771a84,
which gave rise to an enc/dec mismatch.
As things stand the memsets are still needed.
Change-Id: I9fa076a703909aa0c4da0059ac6ae19aa530db30
|
|
This is an initial step to facilitate experimentation
with changes to the prior token context used to code
coefficients to take better account of the energy of
preceding tokens.
This patch merely abstracts the selection of context into
two functions and does not alter the output.
Change-Id: I117fff0b49c61da83aed641e36620442f86def86
|
|
|