[Ffmpeg-devel] Increasing Audio Buffer Size
Cyril Zorin
cyril.zorin
Tue May 9 21:10:46 CEST 2006
On 9-May-06, at 2:56 PM, Michael Niedermayer wrote:
> Hi
>
> On Tue, May 09, 2006 at 01:05:20PM -0400, Cyril Zorin wrote:
> [...]
>>> here's my thought on how it could be done (comments welcome...)
>>>
>>> -int avcodec_decode_audio(AVCodecContext *avctx, int16_t *samples,
>>> +int avcodec_decode_audio(AVCodecContext *avctx, AVFrame *avframe,
>>>
>>> optionally the above can be done in a way which doesn't break
>>> compatibility, by adding a new function and keeping the old ...
>>>
>>>
>>> the audio decoder's decode():
>>>     calls avctx.release_buffer(avctx, avframe) if a previous
>>>     buffer isn't needed anymore
>>>
>>>     calls avctx.get_buffer(avctx, avframe)
>>>
>>>     audio sample i of channel c is stored in
>>>     avframe.data[c][ i*avframe.linesize[c] ] cast to the format
>>>     (always int16_t currently)
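(To make the proposed flow concrete, here's a rough sketch of a
decoder under that API; prev_frame, nb_samples and decoded[][] are
placeholders I made up from the description, not existing lavc code:)

    static int decode(AVCodecContext *avctx, AVFrame *frame)
    {
        int c, i;

        /* release the previous frame's buffer once it's no longer needed */
        if (prev_frame)
            avctx->release_buffer(avctx, prev_frame);

        /* ask the user (or the default allocator) for output space */
        if (avctx->get_buffer(avctx, frame) < 0)
            return -1;

        /* store sample i of channel c per the addressing rule above */
        for (c = 0; c < avctx->channels; c++)
            for (i = 0; i < nb_samples; i++)
                *(int16_t *)(frame->data[c] + i * frame->linesize[c]) =
                    decoded[c][i];

        return 0;
    }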
>>
>>
>> Would it be correct to say that currently (samples) is an array of
>> interleaved channel data?
>
> yes
>
>
>> If many audio decoders already output interleaved
>> channel audio data, then they'd have to be modified to support the
>> proposed
>
> nonsense, nothing needs to be modified, the new system supports
> interleaved as well as non-interleaved output, the latter makes some
> sense as it might be closer to the internal format and it might be
> easier to filter / encode
Right now then, without this glorious modification that you're about
to make, how would I specify
that my audio output data is not interleaved?
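(If I'm reading the proposal right, presumably the layout would be
implied by how data[] and linesize[] are filled in; a sketch, assuming
int16_t samples and a byte-valued linesize:)

    /* planar: one separate plane per channel, samples packed tightly */
    frame->data[c]     = plane[c];
    frame->linesize[c] = sizeof(int16_t);

    /* interleaved: all channels share one buffer; each channel starts
     * one sample further in and strides over a whole sample group */
    frame->data[c]     = buf + c * sizeof(int16_t);
    frame->linesize[c] = avctx->channels * sizeof(int16_t);

(Either way, sample i of channel c sits at data[c] + i*linesize[c].)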
>
>
>> avframe.data[channel][sample_index] storage.
>
> avframe.data[c][ i*avframe.linesize[c] ] not avframe.data[c][i]
Yeah yeah.
>
>
>> In that case, who interleaves
>> the audio data later on?
>
> if the user needs a format different from what a decoder outputs ...
> well, of course the user will need to convert it, lavc might provide
> some code to help but it's really just a 3-line for loop ...
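(For reference, the planar-to-interleaved conversion being alluded to
really is about three lines, assuming int16_t samples; out and
nb_samples are illustrative names:)

    for (c = 0; c < avctx->channels; c++)
        for (i = 0; i < nb_samples; i++)
            out[i * avctx->channels + c] =
                *(int16_t *)(frame->data[c] + i * frame->linesize[c]);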
>
>
>>
>> I think it'd be better to take the analogous approach that video
>> decoding
>
> I do, you just don't seem to understand it
No, it's __you__ who's misunderstanding my point. You can't say
"comments welcome" and
then mouth off to people -- are comments welcome, or not? If you
think you're a genius and
have the liberty to treat people like idiots, then go ahead and
implement it yourself, but don't
pretend to be "open minded" and say shit like "comments welcome."
>
>
>> takes, insofar as at a certain point an "Audio Frame" is just a
>> free-form area of crap that the decoder can fill in, without
>> organizing it by "channel" or whatever.
>
> for video it's organized by color components and lines, so your
> free-form crap is not analogous
I meant "free form" in that I specify how I'll organize my color
data. Take the mental leap there before
you start calling things "crap", in order to impress... who?
>
>
>>
>> Also, if it were up to me, I'd leave the AVFrame struct well alone
>> and make
>> an "AFrame" or otherwise something for "audio frame". I wouldn't
>> want to
>> clutter AVFrame any more.
>
> that's certainly a possibility, whether it's better I don't know; what
> I do know is it will be more work and possibly more complicated code ...
> we would need AVFrame, AFrame, VFrame, the latter 2 would be "subclasses"
> of AVFrame (= their first fields match the ones from AVFrame)
Subclasses? What for? Shit. If you think that breaking things down
and thereby separating concerns makes code "complicated", then I
think I lost the debate before I got into it.
Go ahead, stick it in AVFrame.
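(For what it's worth, the "first fields match" trick is the usual C
struct-prefix pattern; a sketch, not actual lavc code:)

    /* the shared prefix: code that only touches these fields can take
     * an AVFrame * regardless of the concrete type behind it */
    typedef struct AVFrame {
        uint8_t *data[4];
        int      linesize[4];
        /* ... other common fields ... */
    } AVFrame;

    /* audio "subclass": its first fields match AVFrame exactly, so an
     * AFrame * may safely be cast to AVFrame * */
    typedef struct AFrame {
        uint8_t *data[4];
        int      linesize[4];
        /* ... the same common fields, in the same order ... */
        int      sample_rate;   /* audio-only additions follow */
        int      channels;
    } AFrame;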
>
> [...]
>
> --
> Michael
>
> In the past you could go to a library and read, borrow or copy any
> book
> Today you'd get arrested for mere telling someone where the library is
>