[NUT-devel] CVS: main/DOCS/tech mpcf.txt,1.117,1.118

Rich Felker dalias at aerifal.cx
Thu Mar 2 23:16:23 CET 2006


On Thu, Mar 02, 2006 at 08:00:42PM +0100, Michael Niedermayer wrote:
> > > > As for what we gain - more freedom for the muxer, and the ability to
> > > > limit e.g. audio to a very small size without increasing overhead,
> > > > and video to a slightly higher one to avoid additional overhead...
> > > 
> > > agree but only if we store all this max_size/distance stuff in u(16) so
> > > they cannot be arbitrarily large
> > 
> > NO! They really need to be arbitrarily large! If you have a file where
> > each frame is at least 5 megs, max_distance is useless unless it's
> > well over 5 megs!
> 
> are you drunk? if all frames are 5megs then max_distance==5mb will give
> you exactly the same file as any smaller max_distance, now if max_distance is

It's not about size or overhead, but usefulness. The distance between
syncpoints exceeding max_distance should be a special case, not the
general case, since in that case the demuxer in principle has to do
extra work to check validity. Maybe all of that can be eliminated and
it's not such a big deal, but I'm still against hard-coding particular
physical sizes into NUT. Today 64k is large; 10-15 years from now it
may be trivial.
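
To make the special case concrete, here is a rough sketch of the
demuxer logic I have in mind (illustrative only; nut_ctx_t,
parse_frame() and scan_for_syncpoint() are made-up names, not actual
demuxer code):

/* Fast path while within max_distance of a syncpoint, slow path
 * (resync + extra validity checks) once we exceed it. */
#include <stdint.h>

typedef struct {
    uint64_t last_sync_pos;  /* offset of the last syncpoint seen */
    uint64_t pos;            /* current read offset */
    uint64_t max_distance;   /* from the main header */
} nut_ctx_t;

int parse_frame(nut_ctx_t *c);        /* trust the frame headers */
int scan_for_syncpoint(nut_ctx_t *c); /* search for the next startcode */

int read_next(nut_ctx_t *c)
{
    /* General case: close enough to a syncpoint that frame headers
     * can be taken at face value. */
    if (c->pos - c->last_sync_pos <= c->max_distance)
        return parse_frame(c);

    /* Special case: more than max_distance without a syncpoint, so
     * either the file is damaged or the muxer exceeded the limit;
     * fall back to scanning and extra validation. */
    return scan_for_syncpoint(c);
}

The point is that the slow path should stay rare no matter what
physical size max_distance happens to be.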

> now i hope that the average windows kid will be smarter than rich and not
> come to the same conclusion that larger average frames somehow need a larger
> max_distance, or otherwise we can forget the error recovery capabilities of
> nut in practice

:)

> sorry but i won't continue these discussions, expect me to fork if this
> continues, proposals should be judged on their effects (like overhead,
> complexity, error robustness, amount of memory or computation or delay,
> ...) but not on philosophical grounds, nonexistent codecs, nonexistent
> demuxer architectures, nonexistent kernels and so on

Michael, please have some patience. When you spring a bunch of new
things on us all of a sudden, there will be resistance, just as there
was when our roles were reversed over the per-stream back_ptr/pts
stuff. Forking and ignoring the concerns of the other people involved
is possible, but it does much less to improve the overall design. At
that time I was adamantly against calling a vote (even though I
probably could have 'won' the vote, as you said) or doing other things
to polarize the situation. This has always been about making NUT as
good as possible, not about people's personal egos, and where my ideas
have turned out to be bad I've abandoned them. I hope we can
reasonably discuss the remaining issues you've raised and reach a
consensus on the best design rather than flaming and forking.

> my proposed header compression, which has negligible complexity, would reduce
> the overhead by ~1% and was rejected based on nonexistent kernel and demuxer
> architectures

Scratch the kernel part; the kernel support for it already exists.
It's in POSIX and called posix_madvise. There is no demuxer yet that
does zerocopy demuxing, but in the case where decoded frames fit
easily in L2 cache while the compressed frames are very large (i.e.
high-quality, high-bitrate files -- the very ones where performance is
a problem), zerocopy will significantly improve performance.
Sacrificing this to remove ~1% of codec overhead in crappy codecs is
not a good tradeoff IMO. It would be easier to just make an "MN custom
MPEG4" codec that doesn't have the wasted bytes to begin with...
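
For reference, a minimal sketch of what I mean by zerocopy demuxing
(demux_zerocopy() and the parsing step are hypothetical; mmap() and
posix_madvise() are the real POSIX calls):

/* Map the file once and hand the decoder pointers into the mapping;
 * the compressed frame is never copied into a userspace buffer. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int demux_zerocopy(const char *path)
{
    int fd = open(path, O_RDONLY);
    struct stat st;

    if (fd < 0)
        return -1;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }

    unsigned char *base = mmap(NULL, st.st_size, PROT_READ,
                               MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        close(fd);
        return -1;
    }

    /* Hint that reads are mostly forward so the kernel can read ahead
     * aggressively and drop the pages behind us. */
    posix_madvise(base, st.st_size, POSIX_MADV_SEQUENTIAL);

    /* ... parse headers at base[...], then pass the decoder a pointer
     * (base + frame_offset, frame_size) straight into the page cache,
     * with no memcpy of the compressed frame ... */

    munmap(base, st.st_size);
    close(fd);
    return 0;
}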

> so either 0.0003815% is significant enough that my header compression goes
> into the spec, or arbitrary-sized max_distance leaves it; choose, but stop
> changing the rules for each thing depending on whether you like it or not

My objection to an upper bound on max_distance had nothing to do with
size. I'm sorry I wasn't clear.

Rich



