[FFmpeg-devel] [PATCH] RV30/40 decoder
Michael Niedermayer
michaelni
Sat Nov 24 12:58:54 CET 2007
On Sun, Nov 18, 2007 at 11:11:24AM +0200, Kostya wrote:
> Well, it's roughly the same feature-wise as it was;
> I just don't think I will improve it soon, yet
> it is playable (and maybe it will attract samples
> and patches, I'm an optimist).
More review; also, your chances of seeing this applied would improve
if you split it into maybe 10+ patches!
The problem is that every time I look at it I find new issues, but it's too big
(400k uncompressed) to really review all at once; one inevitably becomes
tired, so the quality of the review degrades and many issues are missed,
and with the next iteration another subset of the issues is found, and
so on ...
Here's the rv34.c review:
> +/** Translation of RV40 macroblock types to lavc ones */
> +static const int rv34_mb_type_to_lavc[12] = {
> + MB_TYPE_INTRA, MB_TYPE_INTRA16x16, MB_TYPE_16x16, MB_TYPE_8x8,
> + MB_TYPE_16x16, MB_TYPE_16x16, MB_TYPE_SKIP, MB_TYPE_DIRECT2,
> + MB_TYPE_16x8, MB_TYPE_8x16, MB_TYPE_DIRECT2, MB_TYPE_16x16
> +};
The resulting types look invalid and incomplete; also, the comment doesn't match
the variable name (rv40 vs rv34).
> + /**
> + * Generate VLC from codeword lengths
> + */
> +static int rv34_gen_vlc(const uint8_t *bits2, int size, VLC *vlc)
You could explain what bits2 and size are in the doxygen comment.
> + size = realsize;
> + codes[0] = 0;
> + for(i = 0; i < 16; i++)
> + codes[i+1] = (codes[i] + counts[i]) << 1;
> + for(i = 0; i < realsize; i++)
> + cw[i] = codes[bits[i]]++;
> +
> + ret = init_vlc_sparse(vlc, FFMIN(maxbits, 9), size,
> + bits, 1, 1,
> + cw, 2, 2,
> + syms, 2, 2, INIT_VLC_USE_STATIC);
> + return ret;
The size = realsize; assignment and ret are unneeded,
and the return value is never used or checked in any of the calls ...
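For reference (not a requested change, just so other reviewers can follow the
quoted loops): the codes[]/cw[] code above is the standard canonical Huffman
code assignment. A simplified standalone illustration, assuming all lengths are
>= 1 (assign_codes and its names are hypothetical, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Canonical Huffman code assignment, as in the quoted rv34_gen_vlc loops:
 * bits[] holds the code length of each symbol, cw[] receives the codewords. */
static void assign_codes(const uint8_t *bits, int n, uint16_t *cw)
{
    int counts[17] = {0};
    uint16_t codes[18];
    int i;

    for (i = 0; i < n; i++)      /* histogram of code lengths (all >= 1 here) */
        counts[bits[i]]++;
    codes[0] = 0;
    for (i = 0; i < 17; i++)     /* first canonical code of each length */
        codes[i + 1] = (codes[i] + counts[i]) << 1;
    for (i = 0; i < n; i++)      /* assign codes in symbol order */
        cw[i] = codes[bits[i]]++;
}
```

With lengths {1, 2, 3, 3} this yields the prefix-free codes 0, 10, 110, 111.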
> + rv34_gen_vlc(rv34_intra_cbppatvlc_pointers[i][j], CBPPAT_VLC_SIZE, &intra_vlcs[i].cbppattern[j]);
> + rv34_gen_vlc(rv34_intra_secondpatvlc_pointers[i][j], OTHERBLK_VLC_SIZE, &intra_vlcs[i].second_pattern[j]);
> + rv34_gen_vlc(rv34_intra_thirdpatvlc_pointers[i][j], OTHERBLK_VLC_SIZE, &intra_vlcs[i].third_pattern[j]);
These calls can be vertically aligned.
> +/**
> + * Real Video 4.0 inverse transform
> + * Code is almost the same as in SVQ3, only scaling is different
> + */
> +static void rv34_intra_inv_transform(DCTELEM *block, const int offset){
> + int temp[16];
> + int i;
> +
> + for(i=0; i<4; i++){
> + const int z0= 13*(block[offset+i+8*0] + block[offset+i+8*2]);
> + const int z1= 13*(block[offset+i+8*0] - block[offset+i+8*2]);
> + const int z2= 7* block[offset+i+8*1] - 17*block[offset+i+8*3];
> + const int z3= 17* block[offset+i+8*1] + 7*block[offset+i+8*3];
> +
> + temp[4*i+0]= z0+z3;
> + temp[4*i+1]= z1+z2;
> + temp[4*i+2]= z1-z2;
> + temp[4*i+3]= z0-z3;
> + }
> +
> + for(i=0; i<4; i++){
> + const int z0= 13*(temp[4*0+i] + temp[4*2+i]) + 0x200;
> + const int z1= 13*(temp[4*0+i] - temp[4*2+i]) + 0x200;
> + const int z2= 7* temp[4*1+i] - 17*temp[4*3+i];
> + const int z3= 17* temp[4*1+i] + 7*temp[4*3+i];
> +
> + block[offset+i*8+0]= (z0 + z3)>>10;
> + block[offset+i*8+1]= (z1 + z2)>>10;
> + block[offset+i*8+2]= (z1 - z2)>>10;
> + block[offset+i*8+3]= (z0 - z3)>>10;
> + }
> +
> +}
> +
> +/**
> + * RealVideo 4.0 inverse transform - special version
> + *
> + * Code is almost the same but final coefficients are multiplied by 1.5
> + * and have no rounding
> + */
> +static void rv34_intra_inv_transform_noround(DCTELEM *block, const int offset){
> + int temp[16];
> + int i;
> +
> + for(i=0; i<4; i++){
> + const int z0= 13*(block[offset+i+8*0] + block[offset+i+8*2]);
> + const int z1= 13*(block[offset+i+8*0] - block[offset+i+8*2]);
> + const int z2= 7* block[offset+i+8*1] - 17*block[offset+i+8*3];
> + const int z3= 17* block[offset+i+8*1] + 7*block[offset+i+8*3];
> +
> + temp[4*i+0]= z0+z3;
> + temp[4*i+1]= z1+z2;
> + temp[4*i+2]= z1-z2;
> + temp[4*i+3]= z0-z3;
> + }
> +
> + for(i=0; i<4; i++){
> + const int z0= 13*(temp[4*0+i] + temp[4*2+i]);
> + const int z1= 13*(temp[4*0+i] - temp[4*2+i]);
> + const int z2= 7* temp[4*1+i] - 17*temp[4*3+i];
> + const int z3= 17* temp[4*1+i] + 7*temp[4*3+i];
> +
> + block[offset+i*8+0]= ((z0 + z3)*3)>>11;
> + block[offset+i*8+1]= ((z1 + z2)*3)>>11;
> + block[offset+i*8+2]= ((z1 - z2)*3)>>11;
> + block[offset+i*8+3]= ((z0 - z3)*3)>>11;
> + }
> +
> +}
Duplicated code; also, the comments don't match the function names.
> + table2 = rv34_count_ones[pattern];
s/table2/ones/ or something like that
> +
> + for(mask = 8; mask; mask >>= 1, curshift++){
> + if(!(pattern & mask)) continue;
> + t = get_vlc2(gb, vlc->cbp[table][table2].table, vlc->cbp[table][table2].bits, 1);
> + cbp |= rv34_cbp_code[t] << curshift[0];
> + }
The VLC can be changed so as to make rv34_cbp_code unneeded (e.g. by passing
permuted symbols to init_vlc_sparse so that get_vlc2() returns the final value
directly)!
> + if(mod == 2){
> + if(quant < 19) quant += 10;
> + else if(quant < 26) quant += 5;
> + }
> + if(mod == 1)
> + if(quant < 26) quant += 5;
if(mod == 2 && quant < 19) quant += 10;
else if(mod && quant < 26) quant += 5;
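The two forms are equivalent; a throwaway standalone check demonstrating it
(adjust_orig/adjust_new are hypothetical names, not for inclusion):

```c
#include <assert.h>

/* The quant adjustment as written in the patch. */
static int adjust_orig(int mod, int quant)
{
    if (mod == 2) {
        if      (quant < 19) quant += 10;
        else if (quant < 26) quant += 5;
    }
    if (mod == 1)
        if (quant < 26) quant += 5;
    return quant;
}

/* The suggested two-line form. */
static int adjust_new(int mod, int quant)
{
    if      (mod == 2 && quant < 19) quant += 10;
    else if (mod && quant < 26)      quant += 5;
    return quant;
}
```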
> + int no_A = 1, no_B = 1, no_C = 1;
> + int i, j;
> + int mx, my;
> +
> + memset(A, 0, sizeof(A));
> + memset(B, 0, sizeof(B));
> + memset(C, 0, sizeof(C));
> + no_A = !r->avail[0];
> + no_B = !r->avail[1];
> + no_C = !r->avail[2];
Double initialization (the variables are already initialized at declaration);
also, the negation is silly, no_X generally gets used as !no_X later.
That's a mess.
> + no_C |= (subblock_no == 3);
statement with no effect
> + if(subblock_no == 2) no_C = 0;
That's really >= 2, which makes a later '== 3' / '= 0' line redundant.
> + if(!r->avail[0])
> + no_A[0] = no_A[1] = 1;
> + else{
> + no_A[0] = no_A[1] = 0;
> + if(r->mb_type[mb_pos - 1] != RV34_MB_B_FORWARD && r->mb_type[mb_pos - 1] != RV34_MB_B_DIRECT)
> + no_A[0] = 1;
> + if(r->mb_type[mb_pos - 1] != RV34_MB_B_BACKWARD && r->mb_type[mb_pos - 1] != RV34_MB_B_DIRECT)
> + no_A[1] = 1;
> + if(!no_A[0]){
> + A[0][0] = s->current_picture_ptr->motion_val[0][mv_pos - 1][0];
> + A[0][1] = s->current_picture_ptr->motion_val[0][mv_pos - 1][1];
> + }
> + if(!no_A[1]){
> + A[1][0] = s->current_picture_ptr->motion_val[1][mv_pos - 1][0];
> + A[1][1] = s->current_picture_ptr->motion_val[1][mv_pos - 1][1];
> + }
> + }
> + if(!r->avail[1]){
> + no_B[0] = no_B[1] = 1;
> + }else{
> + no_B[0] = no_B[1] = 0;
> + if(r->mb_type[mb_pos - s->mb_stride] != RV34_MB_B_FORWARD && r->mb_type[mb_pos - s->mb_stride] != RV34_MB_B_DIRECT)
> + no_B[0] = 1;
> + if(r->mb_type[mb_pos - s->mb_stride] != RV34_MB_B_BACKWARD && r->mb_type[mb_pos - s->mb_stride] != RV34_MB_B_DIRECT)
> + no_B[1] = 1;
> + if(!no_B[0]){
> + B[0][0] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride][0];
> + B[0][1] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride][1];
> + }
> + if(!no_B[1]){
> + B[1][0] = s->current_picture_ptr->motion_val[1][mv_pos - s->b8_stride][0];
> + B[1][1] = s->current_picture_ptr->motion_val[1][mv_pos - s->b8_stride][1];
> + }
> + }
code duplication
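The only differences between the two copies are the neighbour offset and the
output arrays, so a helper parameterized on those would remove the duplication.
A rough standalone sketch with simplified stand-in types (DemoCtx,
fetch_neighbour_mvs and the enum are hypothetical, not the real decoder
context):

```c
#include <assert.h>

/* Simplified stand-in for the relevant decoder state. */
typedef struct {
    int mb_type[64];
    int motion_val[2][64][2];
} DemoCtx;

enum { MB_B_FORWARD, MB_B_BACKWARD, MB_B_DIRECT, MB_OTHER };

/* Fill no[]/mv[] for one neighbour (A or B); mb_off/mv_off select the
 * neighbour, so the same helper serves both the left and top cases. */
static void fetch_neighbour_mvs(const DemoCtx *c, int avail,
                                int mb_off, int mv_off,
                                int no[2], int mv[2][2])
{
    int dir;
    for (dir = 0; dir < 2; dir++) {
        const int wanted = dir ? MB_B_BACKWARD : MB_B_FORWARD;
        no[dir] = !avail || (c->mb_type[mb_off] != wanted &&
                             c->mb_type[mb_off] != MB_B_DIRECT);
        if (!no[dir]) {
            mv[dir][0] = c->motion_val[dir][mv_off][0];
            mv[dir][1] = c->motion_val[dir][mv_off][1];
        }
    }
}
```

The A case then becomes fetch_neighbour_mvs(..., mb_pos - 1, mv_pos - 1, no_A, A)
and the B case the same call with the stride offsets.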
> + switch(block_type){
> + case RV34_MB_B_FORWARD:
> + rv34_pred_b_vector(A[0], B[0], C[0], no_A[0], no_B[0], no_C[0], &mx[0], &my[0]);
> + r->dmv[1][0] = 0;
> + r->dmv[1][1] = 0;
> + break;
> + case RV34_MB_B_BACKWARD:
> + r->dmv[1][0] = r->dmv[0][0];
> + r->dmv[1][1] = r->dmv[0][1];
> + r->dmv[0][0] = 0;
> + r->dmv[0][1] = 0;
Why are the delta MVs not stored in the correct entry when they are read?
Also, why is the other dmv not 0? Changing the delta MV during prediction
seems very hackish.
> + rv34_pred_b_vector(A[1], B[1], C[1], no_A[1], no_B[1], no_C[1], &mx[1], &my[1]);
> + break;
> + case RV34_MB_B_DIRECT:
> + rv34_pred_b_vector(A[0], B[0], C[0], no_A[0], no_B[0], no_C[0], &mx[0], &my[0]);
> + rv34_pred_b_vector(A[1], B[1], C[1], no_A[1], no_B[1], no_C[1], &mx[1], &my[1]);
> + break;
> + default:
> + no_A[0] = no_A[1] = no_B[0] = no_B[1] = no_C[0] = no_C[1] = 1;
> + }
Please read the MPEG-4 and H.264 specs about what direct mode is;
this terminology is not correct.
Direct is NOT the same as bidirectional.
> +/**
> + * Generic motion compensation function - hopefully compiler will optimize it for each case
> + *
> + * @param r decoder context
> + * @param block_type type of the current block
> + * @param xoff horizontal offset from the start of the current block
> + * @param yoff vertical offset from the start of the current block
> + * @param mv_off offset to the motion vector information
> + * @param width width of the current partition in 8x8 blocks
> + * @param height height of the current partition in 8x8 blocks
> + */
> +static inline void rv34_mc(RV34DecContext *r, const int block_type,
> + const int xoff, const int yoff, int mv_off,
> + const int width, const int height)
> +{
> + MpegEncContext *s = &r->s;
> + uint8_t *Y, *U, *V, *srcY, *srcU, *srcV;
> + int dxy, mx, my, uvmx, uvmy, src_x, src_y, uvsrc_x, uvsrc_y;
> + int mv_pos = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride + mv_off;
> +
> + mx = s->current_picture_ptr->motion_val[0][mv_pos][0];
> + my = s->current_picture_ptr->motion_val[0][mv_pos][1];
> + srcY = s->last_picture_ptr->data[0];
> + srcU = s->last_picture_ptr->data[1];
> + srcV = s->last_picture_ptr->data[2];
> + src_x = s->mb_x * 16 + xoff + (mx >> 2);
> + src_y = s->mb_y * 16 + yoff + (my >> 2);
> + uvsrc_x = s->mb_x * 8 + (xoff >> 1) + (mx >> 3);
> + uvsrc_y = s->mb_y * 8 + (yoff >> 1) + (my >> 3);
> + srcY += src_y * s->linesize + src_x;
> + srcU += uvsrc_y * s->uvlinesize + uvsrc_x;
> + srcV += uvsrc_y * s->uvlinesize + uvsrc_x;
> + if( (unsigned)(src_x - !!(mx&3)*2) > s->h_edge_pos - !!(mx&3)*2 - (width <<3) - 3
> + || (unsigned)(src_y - !!(my&3)*2) > s->v_edge_pos - !!(my&3)*2 - (height<<3) - 3){
> + uint8_t *uvbuf= s->edge_emu_buffer + 20 * s->linesize;
> +
> + srcY -= 2 + 2*s->linesize;
> + ff_emulated_edge_mc(s->edge_emu_buffer, srcY, s->linesize, (width<<3)+4, (height<<3)+4,
> + src_x - 2, src_y - 2, s->h_edge_pos, s->v_edge_pos);
> + srcY = s->edge_emu_buffer + 2 + 2*s->linesize;
> + ff_emulated_edge_mc(uvbuf , srcU, s->uvlinesize, (width<<2)+1, (height<<2)+1,
> + uvsrc_x, uvsrc_y, s->h_edge_pos >> 1, s->v_edge_pos >> 1);
> + ff_emulated_edge_mc(uvbuf + 16, srcV, s->uvlinesize, (width<<2)+1, (height<<2)+1,
> + uvsrc_x, uvsrc_y, s->h_edge_pos >> 1, s->v_edge_pos >> 1);
> + srcU = uvbuf;
> + srcV = uvbuf + 16;
> + }
> + dxy = ((my & 3) << 2) | (mx & 3);
> + uvmx = mx & 6;
> + uvmy = my & 6;
> + Y = s->dest[0] + xoff + yoff*s->linesize;
> + U = s->dest[1] + (xoff>>1) + (yoff>>1)*s->uvlinesize;
> + V = s->dest[2] + (xoff>>1) + (yoff>>1)*s->uvlinesize;
> + if(block_type == RV34_MB_P_16x8){
> + s->dsp.put_h264_qpel_pixels_tab[1][dxy](Y, srcY, s->linesize);
> + Y += 8;
> + srcY += 8;
> + s->dsp.put_h264_qpel_pixels_tab[1][dxy](Y, srcY, s->linesize);
> + s->dsp.put_h264_chroma_pixels_tab[0] (U, srcU, s->uvlinesize, 4, uvmx, uvmy);
> + s->dsp.put_h264_chroma_pixels_tab[0] (V, srcV, s->uvlinesize, 4, uvmx, uvmy);
> + }else if(block_type == RV34_MB_P_8x16){
> + s->dsp.put_h264_qpel_pixels_tab[1][dxy](Y, srcY, s->linesize);
> + Y += 8 * s->linesize;
> + srcY += 8 * s->linesize;
> + s->dsp.put_h264_qpel_pixels_tab[1][dxy](Y, srcY, s->linesize);
> + s->dsp.put_h264_chroma_pixels_tab[1] (U, srcU, s->uvlinesize, 8, uvmx, uvmy);
> + s->dsp.put_h264_chroma_pixels_tab[1] (V, srcV, s->uvlinesize, 8, uvmx, uvmy);
> + }else if(block_type == RV34_MB_P_8x8){
> + s->dsp.put_h264_qpel_pixels_tab[1][dxy](Y, srcY, s->linesize);
> + s->dsp.put_h264_chroma_pixels_tab[1] (U, srcU, s->uvlinesize, 4, uvmx, uvmy);
> + s->dsp.put_h264_chroma_pixels_tab[1] (V, srcV, s->uvlinesize, 4, uvmx, uvmy);
> + }else{
> + s->dsp.put_h264_qpel_pixels_tab[0][dxy](Y, srcY, s->linesize);
> + s->dsp.put_h264_chroma_pixels_tab[0] (U, srcU, s->uvlinesize, 8, uvmx, uvmy);
> + s->dsp.put_h264_chroma_pixels_tab[0] (V, srcV, s->uvlinesize, 8, uvmx, uvmy);
> + }
> +}
> +
> +/**
> + * B-frame specific motion compensation function
> + *
> + * @param r decoder context
> + * @param block_type type of the current block
> + */
> +static inline void rv34_mc_b(RV34DecContext *r, const int block_type)
> +{
> + MpegEncContext *s = &r->s;
> + uint8_t *srcY, *srcU, *srcV;
> + int dxy, mx, my, uvmx, uvmy, src_x, src_y, uvsrc_x, uvsrc_y;
> + int mv_pos = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride;
> +
> + if(block_type != RV34_MB_B_BACKWARD){
> + mx = s->current_picture_ptr->motion_val[0][mv_pos][0];
> + my = s->current_picture_ptr->motion_val[0][mv_pos][1];
> + srcY = s->last_picture_ptr->data[0];
> + srcU = s->last_picture_ptr->data[1];
> + srcV = s->last_picture_ptr->data[2];
> + }else{
> + mx = s->current_picture_ptr->motion_val[1][mv_pos][0];
> + my = s->current_picture_ptr->motion_val[1][mv_pos][1];
> + srcY = s->next_picture_ptr->data[0];
> + srcU = s->next_picture_ptr->data[1];
> + srcV = s->next_picture_ptr->data[2];
> + }
> + if(block_type == RV34_MB_B_INTERP){
> + mx += (s->next_picture_ptr->motion_val[0][mv_pos][0] + 1) >> 1;
> + my += (s->next_picture_ptr->motion_val[0][mv_pos][1] + 1) >> 1;
> + }
> + src_x = s->mb_x * 16 + (mx >> 2);
> + src_y = s->mb_y * 16 + (my >> 2);
> + uvsrc_x = s->mb_x * 8 + (mx >> 3);
> + uvsrc_y = s->mb_y * 8 + (my >> 3);
> + srcY += src_y * s->linesize + src_x;
> + srcU += uvsrc_y * s->uvlinesize + uvsrc_x;
> + srcV += uvsrc_y * s->uvlinesize + uvsrc_x;
> + if( (unsigned)(src_x - !!(mx&3)*2) > s->h_edge_pos - !!(mx&3)*2 - 16 - 3
> + || (unsigned)(src_y - !!(my&3)*2) > s->v_edge_pos - !!(my&3)*2 - 16 - 3){
> + uint8_t *uvbuf= s->edge_emu_buffer + 20 * s->linesize;
> +
> + srcY -= 2 + 2*s->linesize;
> + ff_emulated_edge_mc(s->edge_emu_buffer, srcY, s->linesize, 16+4, 16+4,
> + src_x - 2, src_y - 2, s->h_edge_pos, s->v_edge_pos);
> + srcY = s->edge_emu_buffer + 2 + 2*s->linesize;
> + ff_emulated_edge_mc(uvbuf , srcU, s->uvlinesize, 8+1, 8+1,
> + uvsrc_x, uvsrc_y, s->h_edge_pos >> 1, s->v_edge_pos >> 1);
> + ff_emulated_edge_mc(uvbuf + 16, srcV, s->uvlinesize, 8+1, 8+1,
> + uvsrc_x, uvsrc_y, s->h_edge_pos >> 1, s->v_edge_pos >> 1);
> + srcU = uvbuf;
> + srcV = uvbuf + 16;
> + }
> + dxy = ((my & 3) << 2) | (mx & 3);
> + uvmx = mx & 6;
> + uvmy = my & 6;
> + s->dsp.put_h264_qpel_pixels_tab[0][dxy](s->dest[0], srcY, s->linesize);
> + s->dsp.put_h264_chroma_pixels_tab[0] (s->dest[1], srcU, s->uvlinesize, 8, uvmx, uvmy);
> + s->dsp.put_h264_chroma_pixels_tab[0] (s->dest[2], srcV, s->uvlinesize, 8, uvmx, uvmy);
> +}
> +
> +/**
> + * B-frame specific motion compensation function - for direct/interpolated blocks
> + *
> + * @param r decoder context
> + * @param block_type type of the current block
> + */
> +static inline void rv34_mc_b_interp(RV34DecContext *r, const int block_type)
> +{
> + MpegEncContext *s = &r->s;
> + uint8_t *srcY, *srcU, *srcV;
> + int dxy, mx, my, uvmx, uvmy, src_x, src_y, uvsrc_x, uvsrc_y;
> + int mv_pos = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride;
> +
> + mx = s->current_picture_ptr->motion_val[1][mv_pos][0];
> + my = s->current_picture_ptr->motion_val[1][mv_pos][1];
> + if(block_type == RV34_MB_B_INTERP){
> + mx -= s->next_picture_ptr->motion_val[0][mv_pos][0] >> 1;
> + my -= s->next_picture_ptr->motion_val[0][mv_pos][1] >> 1;
> + }
> + srcY = s->next_picture_ptr->data[0];
> + srcU = s->next_picture_ptr->data[1];
> + srcV = s->next_picture_ptr->data[2];
> +
> + src_x = s->mb_x * 16 + (mx >> 2);
> + src_y = s->mb_y * 16 + (my >> 2);
> + uvsrc_x = s->mb_x * 8 + (mx >> 3);
> + uvsrc_y = s->mb_y * 8 + (my >> 3);
> + srcY += src_y * s->linesize + src_x;
> + srcU += uvsrc_y * s->uvlinesize + uvsrc_x;
> + srcV += uvsrc_y * s->uvlinesize + uvsrc_x;
> + if( (unsigned)(src_x - !!(mx&3)*2) > s->h_edge_pos - !!(mx&3)*2 - 16 - 3
> + || (unsigned)(src_y - !!(my&3)*2) > s->v_edge_pos - !!(my&3)*2 - 16 - 3){
> + uint8_t *uvbuf= s->edge_emu_buffer + 20 * s->linesize;
> +
> + srcY -= 2 + 2*s->linesize;
> + ff_emulated_edge_mc(s->edge_emu_buffer, srcY, s->linesize, 16+4, 16+4,
> + src_x - 2, src_y - 2, s->h_edge_pos, s->v_edge_pos);
> + srcY = s->edge_emu_buffer + 2 + 2*s->linesize;
> + ff_emulated_edge_mc(uvbuf , srcU, s->uvlinesize, 8+1, 8+1,
> + uvsrc_x, uvsrc_y, s->h_edge_pos >> 1, s->v_edge_pos >> 1);
> + ff_emulated_edge_mc(uvbuf + 16, srcV, s->uvlinesize, 8+1, 8+1,
> + uvsrc_x, uvsrc_y, s->h_edge_pos >> 1, s->v_edge_pos >> 1);
> + srcU = uvbuf;
> + srcV = uvbuf + 16;
> + }
> + dxy = ((my & 3) << 2) | (mx & 3);
> + uvmx = mx & 6;
> + uvmy = my & 6;
> + s->dsp.avg_h264_qpel_pixels_tab[0][dxy](s->dest[0], srcY, s->linesize);
> + s->dsp.avg_h264_chroma_pixels_tab[0] (s->dest[1], srcU, s->uvlinesize, 8, uvmx, uvmy);
> + s->dsp.avg_h264_chroma_pixels_tab[0] (s->dest[2], srcV, s->uvlinesize, 8, uvmx, uvmy);
> +}
the B and P functions have plenty of similar code which could be factored out
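e.g. the reference picture and the put/avg operation could be passed in as
parameters; a toy standalone sketch of that pattern (all names hypothetical,
operating on flat arrays instead of real pictures):

```c
#include <assert.h>
#include <stdint.h>

typedef void (*mc_fn)(uint8_t *dst, const uint8_t *src, int n);

/* Stand-ins for the put/avg dsputil entries. */
static void put_pixels(uint8_t *dst, const uint8_t *src, int n)
{
    int i;
    for (i = 0; i < n; i++) dst[i] = src[i];
}

static void avg_pixels(uint8_t *dst, const uint8_t *src, int n)
{
    int i;
    for (i = 0; i < n; i++) dst[i] = (dst[i] + src[i] + 1) >> 1;
}

/* One MC routine covering both cases: the forward pass uses put_pixels,
 * the backward pass of an interpolated block uses avg_pixels; the caller
 * also chooses which reference (last/next picture) to read from. */
static void mc_generic(uint8_t *dst, const uint8_t *ref, int n, mc_fn op)
{
    op(dst, ref, n);
}
```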
> +/** Mapping of RV40 intra prediction types to standard H.264 types */
> +static const int ittrans[9] = {
> + DC_PRED, VERT_PRED, HOR_PRED, DIAG_DOWN_RIGHT_PRED, DIAG_DOWN_LEFT_PRED,
> + VERT_RIGHT_PRED, VERT_LEFT_PRED, HOR_UP_PRED, HOR_DOWN_PRED,
> +};
> +
> +/** Mapping of RV40 intra 16x16 prediction types to standard H.264 types */
> +static const int ittrans16[4] = {
> + DC_PRED8x8, VERT_PRED8x8, HOR_PRED8x8, PLANE_PRED8x8,
> +};
This is rv34.c, not rv40.
> + for(i = 0; i < 4; i++, cbp >>= 1, YY += 4){
> + no_topright = no_up || (i==3 && j) || (i==3 && !j && (s->mb_x-1) == s->mb_width);
> + rv34_pred_4x4_block(r, YY, s->linesize, ittrans[intra_types[i]], no_up, no_left, i || (j==3), no_topright);
> + no_left = 0;
> + if(!(cbp & 1)) continue;
> + rv34_add_4x4_block(YY, s->linesize, s->block[(i>>1)+(j&2)], (i&1)*4+(j&1)*32);
> + }
Could you deobfuscate this slightly?
> +/**
> + * Table for obtaining quantizer difference
> + */
> +static const int8_t rv34_dquant_tab[] = {
> + 0, 0, 2, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1,
> + -1, 1, -1, 1, -1, 1, -2, 2, -2, 2, -2, 2, -2, 2, -2, 2,
> + -2, 2, -2, 2, -2, 2, -2, 2, -2, 2, -3, 3, -3, 3, -3, 3,
> + -3, 3, -3, 3, -3, 3, -3, 3, -3, 3, -3, 2, -3, 1, -3, -5
> +};
Isn't this the same as modified_quant_tab?
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The greatest way to live with honor in this world is to be what we pretend
to be. -- Socrates