[NUT-devel] CVS: main/DOCS/tech mpcf.txt,1.117,1.118
Michael Niedermayer
michaelni at gmx.at
Fri Mar 3 00:30:22 CET 2006
Hi
On Thu, Mar 02, 2006 at 05:16:23PM -0500, Rich Felker wrote:
[...]
> > sorry, but I won't continue these discussions; expect me to fork if this
> > continues. Proposals should be judged on their effects (like overhead,
> > complexity, error robustness, amount of memory, computation or delay,
> > ...), not on philosophical grounds, nonexistent codecs, nonexistent demuxer
> > architectures, nonexistent kernels and so on
>
> Michael, please have some patience. When you spring a bunch of new
> things on us all of a sudden, there will be resistance, just like when
> our roles were reversed with the per-stream back_ptr/pts stuff.
> Forking and ignoring the concerns of the other people involved is
> possible but it does much less to improve the overall design. At that
> time I was adamantly against calling a vote (even though I probably
> could have 'won' the vote, as you said) or doing other things to
> polarize the situation. This has always been about making NUT as good
> as possible, not about people's personal egos, and where my ideas have
> turned out to be bad I've abandoned them. I hope we can reasonably
> discuss the remaining issues you've raised and reach a consensus on
> what the best design is rather than flaming and forking and such.
You ask someone else to stop flaming? ;)
>
> > my proposed header compression, which has negligible complexity, would reduce
> > the overhead by ~1%, and was rejected based on nonexistent kernel and demuxer
> > architectures
>
> Scratch kernel; the kernel architecture for it already exists. It's in
> POSIX and called posix_madvise. There is no demuxer that does zerocopy
> demuxing yet, but in the case where decoded frames fit easily in L2 cache
> while the compressed frame is very large (i.e. high-quality, high-bitrate
> files -- the very ones where performance is a problem), zerocopy will make
> a significant improvement to performance. Sacrificing this to remove 1%
> codec overhead in crappy codecs is not a good tradeoff IMO. It would be
> easier to just make an "MN custom MPEG4" codec that doesn't have the
> wasted bytes to begin with...
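
For reference, the read path described above would look roughly like the
sketch below; this is a sketch only, and parse_frame()/decode_frame() are
hypothetical stand-ins for whatever the demuxer and decoder actually expose:

/* zerocopy_sketch.c -- a sketch only: map the file, advise the kernel,
 * and hand frame payloads to the decoder without an intermediate memcpy.
 * parse_frame() and decode_frame() are hypothetical stand-ins. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* stand-in: pretend every "frame" is up to 64k of payload, no header */
static size_t parse_frame(const uint8_t *buf, size_t left, size_t *payload)
{
    (void)buf;
    *payload = left < 65536 ? left : 65536;
    return 0;                              /* header size */
}

static void decode_frame(const uint8_t *payload, size_t size)
{
    /* a real decoder would read straight out of the mapping here */
    (void)payload; (void)size;
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    uint8_t *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
        return 1;

    /* the kernel side referred to above: tell it we read front to back,
     * so it can read ahead and drop pages behind us */
    posix_madvise(map, st.st_size, POSIX_MADV_SEQUENTIAL);

    for (off_t pos = 0; pos < st.st_size; ) {
        size_t payload;
        pos += parse_frame(map + pos, st.st_size - pos, &payload);
        decode_frame(map + pos, payload);  /* compressed frame never copied */
        pos += payload;
    }

    munmap(map, st.st_size);
    close(fd);
    return 0;
}
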
This reminds me of my little MPEG-1 experiment: replacing the entropy
coder with an arithmetic coder gave ~10% lower bitrate ...
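
And since the header compression keeps being weighed against that: if one
reads it as simply storing the constant per-stream frame prefix once and
stripping it from every frame (the names below are illustrative, not spec
fields), the whole demuxer side of it is this much:

/* sketch only: reattach a per-stream prefix ("elision header") that the
 * muxer stored once and stripped from each frame payload */
#include <stdint.h>
#include <string.h>

typedef struct {
    const uint8_t *elision;      /* prefix stored once in the stream header */
    size_t         elision_len;
} stream_ctx;

/* build the packet the decoder expects: prefix + stripped payload;
 * out must have room for elision_len + payload_len bytes */
static size_t reattach_header(const stream_ctx *s,
                              const uint8_t *payload, size_t payload_len,
                              uint8_t *out)
{
    memcpy(out, s->elision, s->elision_len);
    memcpy(out + s->elision_len, payload, payload_len);
    return s->elision_len + payload_len;
}

That memcpy is of course the copy the zerocopy argument above objects to;
complexity-wise it is one memcpy and one stored prefix per stream.
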
>
> > so either 0.0003815% is significant, in which case my header compression goes
> > into the spec, or arbitrarily sized max_distance leaves it; choose, but stop
> > changing the rules for each thing depending on whether you like it or not
>
> My objection to an upper bound on max_distance had nothing to do with
> size. I'm sorry I wasn't clear.
And what is the terrible thing which will/could happen if there were
an upper limit? I mean from the user or developer perspective (complexity,
speed, overhead, ...), not the idealist/philosopher perspective (it's wrong,
bad design, this is like <infamous container> does it, it will make
<extremely rare use case> slightly slower, ...).
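
To make the developer-perspective part concrete, here is a sketch of the one
place I can think of where such a limit would show up in a demuxer: resyncing
after an error, where a hard upper bound lets the search window be a
fixed-size buffer. The bound and the startcode bytes below are placeholders,
not values from the spec:

/* sketch: resynchronise after an error when max_distance has a hard upper
 * bound, so the search window can be a fixed-size buffer */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_DISTANCE_LIMIT 65536                        /* hypothetical bound */
static const uint8_t SYNCPOINT[4] = { 'N', 'K', 0x01, 0x02 };  /* placeholder */

/* scan at most MAX_DISTANCE_LIMIT bytes for the next syncpoint startcode;
 * returns its offset from the current file position, or -1 if the stream
 * is damaged beyond what the bound allows */
static long resync(FILE *f)
{
    static uint8_t win[MAX_DISTANCE_LIMIT];  /* fixed size thanks to the bound */
    long start = ftell(f);
    size_t n = fread(win, 1, sizeof(win), f);

    for (size_t i = 0; i + sizeof(SYNCPOINT) <= n; i++) {
        if (!memcmp(win + i, SYNCPOINT, sizeof(SYNCPOINT))) {
            fseek(f, start + (long)i, SEEK_SET);  /* reposition on the syncpoint */
            return (long)i;
        }
    }
    return -1;
}
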
[...]
--
Michael