[MPlayer-dev-eng] [PATCH] mf png fix

Joey Parrish joey at nicewarrior.org
Sat Mar 22 22:10:12 CET 2003


Okay, I'm going to go through each part of this patch to explain.
There's a reason behind every change, it's not just random hacking.
Please read all the way through before complaining.

> --- etc/codecs.conf	5 Mar 2003 10:38:53 -0000	1.268
> +++ etc/codecs.conf	18 Mar 2003 22:10:22 -0000
> @@ -58,7 +58,7 @@
>    fourcc mpng,MPNG
>    dll "libpng"
>    driver mpng
> -  out BGR32,BGR24
> +  out BGR24
>  
>  videocodec mtga
>    info "TGA images decoder"

All my truecolor png's were being shown wrong by mplayer.  After some
debugging, I discovered that libpng was producing 24-bit output while
mplayer was expecting 32-bit color.  That's why I chose to change the
line in codecs.conf.  I don't believe there is a less invasive way to
fix this: vd_mpng cannot announce the color depth of its input stream
ahead of time, since it still needs to read each png to find out what
depth it is.  More on this later.

> --- libmpcodecs/vd_mpng.c	22 Dec 2002 16:31:04 -0000	1.5
> +++ libmpcodecs/vd_mpng.c	18 Mar 2003 22:10:23 -0000
> @@ -24,11 +24,8 @@
>  
>  LIBVD_EXTERN(mpng)
>  
> -static unsigned int out_fmt=0;
> -

out_fmt is removed here because it was not being used effectively
(AFAICT).  I might be mistaken about the reasons, so forgive me if I'm
wrong, but from what I can tell, the filter chain is set up between the
init call and the first decode call.  So changing the requested format
in each call to mpcodecs_get_image had no effect; at least, when I
tried changing it, nothing happened.  (I got a 32-bit mpi no matter
what, until I changed codecs.conf.)  So I'm making some assumptions
about the causes, but this fixed my problem.

>  static int last_w=-1;
>  static int last_h=-1;
> -static int last_c=-1;

Removing last_c here because as you will see below, I have libpng
produce 24-bit output no matter what the input png's format.  Thus, the
colorspace will never change, and there's no need to save the old one.
 
>  // to set/get/query special features/parameters
>  static int control(sh_video_t *sh,int cmd,void* arg,...){
> @@ -45,8 +42,6 @@
>  static void uninit(sh_video_t *sh){
>  }
 
This is cosmetics, sorry:
 
> -//mp_image_t* mpcodecs_get_image(sh_video_t *sh, int mp_imgtype, int mp_imgflag, int w, int h);
> -
>  static int    pngPointer;
>  static int    pngLength;
>  
> @@ -63,7 +58,6 @@
>      png_structp     png;
>      png_infop       info;
>      png_infop       endinfo;
> -//    png_bytep       data;
>      png_bytep     * row_p;
>      png_uint_32     png_width=0,png_height=0;
>      int             depth,color;
> @@ -82,56 +76,36 @@
>   png_set_sig_bytes( png,8 );
>   png_read_info( png,info );
>   png_get_IHDR( png,info,&png_width,&png_height,&depth,&color,NULL,NULL,NULL );
> - png_set_bgr( png );
> + png_set_bgr( png );

Now, since out_fmt wasn't doing anything as far as I can tell, I removed
this block of code that sets it based on the png input type.  Also, the
png input data will be expanded to 24-bit BGR by libpng anyway.

> - switch( info->color_type ) {
> -   case PNG_COLOR_TYPE_GRAY_ALPHA:
> -      mp_msg( MSGT_DECVIDEO,MSGL_INFO,"Sorry gray scaled png with alpha channel not supported at moment.\n" );
> -      break;
> -   case PNG_COLOR_TYPE_GRAY:
> -   case PNG_COLOR_TYPE_PALETTE:
> -      out_fmt=IMGFMT_BGR8;
> -      break;
> -   case PNG_COLOR_TYPE_RGB_ALPHA:
> -      out_fmt=IMGFMT_BGR32;
> -      break;
> -   case PNG_COLOR_TYPE_RGB:
> -      out_fmt=IMGFMT_BGR24;
> -      break;
> -   default:
> -      mp_msg( MSGT_DECVIDEO,MSGL_INFO,"Sorry, unsupported PNG colorspace: %d.\n" ,info->color_type);

This will check for paletted png input and expand it.  Then, the info
struct is updated based on the changes.
> + if ( info->color_type == PNG_COLOR_TYPE_PALETTE) {
> +	 png_set_expand(png);
> +	 png_read_update_info(png, info);
> + }

This will check for alpha png input and strip the alpha channel.  I do
this rather than expanding all 24-bit to 32-bit because for some reason
the libpng call that expands 24-bit to 32-bit segfaulted mplayer during
my tests.  I gave up investigating this.  Again, the info is updated.
> + if ( info->color_type & PNG_COLOR_MASK_ALPHA) {
> +	 png_set_strip_alpha(png);
> +	 png_read_update_info(png, info);
> + }

This will check for grayscale png and expand it into 24-bit color, then
update the info struct.
> + if ( info->color_type == PNG_COLOR_TYPE_GRAY) {
> +	 png_set_expand(png);
> +	 png_read_update_info(png, info);
>   }
 
Change the check here because last_c and out_fmt were removed.
>   // (re)init libvo if image parameters changed (width/height/colorspace)
> - if(last_w!=png_width || last_h!=png_height || last_c!=out_fmt){
> -    last_w=png_width; last_h=png_height; last_c=out_fmt;
> -    if(!out_fmt) return NULL;
> -    if(!mpcodecs_config_vo(sh,png_width,png_height,out_fmt)) return NULL;
> + if(last_w!=png_width || last_h!=png_height){
> +    last_w = png_width; last_h = png_height;
> +    if(!mpcodecs_config_vo(sh,png_width,png_height,IMGFMT_BGR24)) return NULL;
>   }
 
Removed this block because 1) it was commented out, 2) it did nothing,
3) the information it reported was wrong (no stripping/conversion/etc.
was actually being done before), and 4) by this point in the code the
data has been expanded and the info updated to say PNG_COLOR_TYPE_RGB
every time.
> -#if 0
> - switch( info->color_type )
> -  {
> -   case PNG_COLOR_TYPE_GRAY_ALPHA: printf( "[png] used GrayA -> stripping alpha channel\n" ); break;
> -   case PNG_COLOR_TYPE_GRAY:       printf( "[png] used Gray -> rgb\n" ); break;
> -   case PNG_COLOR_TYPE_PALETTE:    printf( "[png] used palette -> rgb\n" ); break;
> -   case PNG_COLOR_TYPE_RGB_ALPHA:  printf( "[png] used RGBA -> stripping alpha channel\n" ); break;
> -   case PNG_COLOR_TYPE_RGB:        printf( "[png] read rgb datas.\n" ); break;
> -  }
> -#endif
> -

Finally, more cosmetics.

>      mpi=mpcodecs_get_image(sh, MP_IMGTYPE_TEMP, MP_IMGFLAG_ACCEPT_STRIDE, 
>  	png_width,png_height);
>      if(!mpi) return NULL;
>  
>  // Let's DECODE!
>   row_p=(png_bytep*)malloc( sizeof( png_bytep ) * png_height );
> -//png_get_rowbytes( png,info ) 
>   for ( i=0; i < png_height; i++ ) row_p[i]=mpi->planes[0] + mpi->stride[0]*i;
>   png_read_image( png,row_p );
>   free( row_p );
> - 
> - //png_get_PLTE( png,info,(png_colorp*)&pal,&cols );
>   
>   png_read_end( png,endinfo );
>   png_destroy_read_struct( &png,&info,&endinfo );

Okay, so in conclusion, it's a big change.  Ideally we would be able to
handle the input stream of png's in its native format (displaying 8-bit
video for 8-bit input, etc.), but that raises some questions IMHO.  If
the colorspace changes, does the filter chain change to match?  From
what I can tell it does not, so we would have to convert everything to
one format.  That means either the format of the first png (which may
not carry enough information to be true to later pngs), or BGR24, or
BGR32.  BGR32 would preserve the most information, but as I said
before, the call to libpng crashed on me and I don't yet know why.

So, I hope this doesn't get too many flames.  If you don't like what
I've done, please explain why.  There's a lot I could learn about
mplayer internals, but there's a lot of confusing source to read.
I really appreciate everyone's help and support on this project.

Thanks,
--Joey

