[FFmpeg-devel] [RFC] special "broken DV" demuxer

Roman V Shaposhnik rvs
Wed Mar 11 16:47:42 CET 2009


On Wed, 2009-03-11 at 09:06 +0100, Reimar Döffinger wrote:
> On Tue, Mar 10, 2009 at 04:51:15PM -0700, Roman V Shaposhnik wrote:
> > On Tue, 2009-03-10 at 20:04 +0100, Reimar Döffinger wrote:
> > > On Tue, Mar 10, 2009 at 11:58:57AM -0700, Roman V Shaposhnik wrote:
> > > > > > > Since it seems all DV decoders and demuxers, including ours,
> > > > > > > have no error checking whatsoever, it still plays "fine".
> > > > > > > Unfortunately, the recently added autodetection, which also
> > > > > > > allows playing badly cut DV files, cannot handle it.
> > > > > > 
> > > > > > Right. So the problem would be a regression, as far as I can tell.
> > > > > > Thus the question: can the autodetection code be fixed?
> > > > > 
> > > > > What do you mean?
> > > > 
> > > > I meant that the file was playable before the change to the
> > > > autodetection.
> > > 
> > > My suggestion with the separate demuxer will restore exactly the same
> > > behaviour, except that it will also play badly cut files if they have
> > > a header section, and in addition will never require seeking.
> > 
> > So the only issue at hand is how to support autodetection, right? 
> > Would it be possible to read the first frame and feed it to the decoder?
> 
> How do you know where a frame starts and where it ends?

Well, raw DV only has a handful of sizes to choose from ;-)
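
For instance (an untested sketch, not a patch; the sizes are just
N DIF sequences * 150 DIF blocks * 80 bytes, and the divisibility
check obviously assumes the file is not badly cut):

#include <stdint.h>

/* Sketch only: candidate raw DV frame sizes, one per system.  Note
 * that the DV25 sizes divide the DV50/DVCPRO HD ones, so a real probe
 * would still have to peek at the header to tell them apart. */
static const int dv_frame_sizes[] = {
    120000, /* 525/60 DV25, 10 DIF sequences */
    144000, /* 625/50 DV25, 12 DIF sequences */
    240000, /* 525/60 DV50 */
    288000, /* 625/50 DV50 */
    480000, /* DVCPRO HD, 60 Hz systems */
    576000, /* DVCPRO HD, 50 Hz systems */
};

static int dv_guess_frame_size(int64_t file_size)
{
    unsigned i;

    for (i = 0; i < sizeof(dv_frame_sizes) / sizeof(dv_frame_sizes[0]); i++)
        if (file_size > 0 && file_size % dv_frame_sizes[i] == 0)
            return dv_frame_sizes[i];
    return 0; /* no known size divides the file: probably not raw DV */
}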

> Also you can't use the DV decoder to detect if something is a DV frame;
> the MPlayer DV demuxer does that, and it detects almost any random crap
> as valid DV. It even decodes "fine", except that it is random crap.
> I am not sure how many bits are actually checked, but I'd expect 16 bits
> to be a high estimate. Often 20 or more frames were played before an
> error (sometimes even before a resolution change), so for reliable
> autodetection my estimate is that you would have to read at least 50 DV
> frames.

Tough. Well, another alternative would be to cycle through all the
potential headers inside a single frame and pick the data that the
majority of them agrees upon.
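
Roughly like this (an untested sketch; it assumes buf already holds one
complete raw DV frame, relies on the header DIF block being repeated at
the start of every 12000-byte DIF sequence, and uses the DSF flag as an
example of a field worth voting on):

#include <stdint.h>

/* Sketch of the majority vote, with the DSF flag (525/60 vs. 625/50
 * system) as the example field.  Every 12000-byte DIF sequence starts
 * with its own copy of the header DIF block, so damaged copies simply
 * lose the vote instead of poisoning the result. */
static int dv_vote_on_dsf(const uint8_t *buf, int frame_size)
{
    int seq, votes_625 = 0, votes_525 = 0;

    for (seq = 0; (seq + 1) * 12000 <= frame_size; seq++) {
        const uint8_t *hdr = buf + seq * 12000;

        if ((hdr[0] >> 5) != 0) /* SCT != 0: not a header block, skip */
            continue;
        if (hdr[3] & 0x80)      /* DSF bit: 1 means a 625/50 system */
            votes_625++;
        else
            votes_525++;
    }
    return votes_625 > votes_525; /* nonzero: 625/50 wins the vote */
}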

> > > Then someone should simplify our DV encoder because it is full of stuff
> > > like
> > >                    0xc;       /* reserved -- always b1100 */
> > > If that is not necessary, that is an extreme amount of bloat.
> > 
> > Well, sticking to the spec does add complexity, which is, I suspect,
> > the reason quite a few hardware DV encoders don't give a rip about
> > some of these. Michael had a good summary of how the specs in general
> > are supposed to be implemented.
> 
> There is a difference between "reserved stuff may be whatever it wants"
> and "we should try to support wrong values for reserved stuff". What you
> said sounded like the former.
> And yes, it makes a difference, because in the second case

Mmmm. Perhaps I have my modality wrong.

> 1) it's ok (and IMO advisable) to call those files broken
> 2) it is unreasonable to support those broken files too much at the expense of
> the correct ones.

All true. Now, go convince the owners of Sony/Panasonic/etc. gear to
petition the vendors ;-)

Besides, the redundancy of metadata in DV was specifically designed to
combat tape volatility. *Some* of the identical data structures in
a single DV frame could be damaged. That doesn't make the entire
frame broken.

> I can justify different demuxers differently: for "correct" DV files

Let's not talk about DV files, but rather DV frames. A "correct" DV
frame, in my book, is one that has enough redundant data left to
fully reconstruct it.

> proper autodetection is simple and reliable, needs reading only a few
> bytes and does not need seeking nor a relevant amount of buffering.
> When you allow anything non-critical to be missing, allow reserved
> values to have any value, etc., autodetection will require reading and
> analyzing huge amounts of data.

I don't see how reading one frame (even a DVCPRO HD one) and cycling
through all the known locations of the redundant copies of the
dv_sect_header would be "analyzing huge amounts of data".
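
Even for a 576000-byte DVCPRO HD frame that is at most 48 copies of an
80-byte block. Something along these lines (untested; the 0x1f / 0x00 /
0x3f values are the ones the current probe matches against, as far as I
can tell, and the rest is purely illustrative):

#include <stdint.h>

/* Sketch: count how many of the redundant dv_sect_header copies in one
 * frame still look sane.  Byte 1 of the DIF block ID carries the DIF
 * sequence number and channel flags, so it varies between copies and
 * is deliberately not checked here. */
static int dv_count_sane_headers(const uint8_t *frame, int frame_size)
{
    int seq, sane = 0;

    for (seq = 0; (seq + 1) * 12000 <= frame_size; seq++) {
        const uint8_t *h = frame + seq * 12000;

        if (h[0] == 0x1f &&        /* section type: header */
            h[2] == 0x00 &&        /* DIF block number 0 */
            (h[3] & 0x7f) == 0x3f) /* header pack id, DSF bit masked off */
            sane++;
    }
    return sane; /* the caller decides what fraction is "good enough" */
}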

Although, as always, it is up to Michael to make the final call on
this one.

Thanks,
Roman.
