[Libav-user] Creating Panned MP3 Clips

Paul B Mahol onemda at gmail.com
Sun Jan 1 19:20:48 EET 2023


On Sun, Jan 1, 2023 at 5:21 PM Andrew Randrianasulu <randrianasulu at gmail.com>
wrote:

>
>
> On Sun, Jan 1, 2023 at 12:57 PM Paul B Mahol <onemda at gmail.com> wrote:
>
>>
>>
>> On Sun, Jan 1, 2023 at 7:36 AM Andrew Randrianasulu <
>> randrianasulu at gmail.com> wrote:
>>
>>>
>>>
>>> вс, 1 янв. 2023 г., 09:10 Terry Corbet <tcorbet at ix.netcom.com>:
>>>
>>>> I have recently discovered how to use the Audacity Envelope Tool to
>>>> turn
>>>> a standard stereo MP3 file into a modified one in which throughout the
>>>> entire duration of the clip the apparent source of the sounds will
>>>> traverse from left to right.
>>>
>>>
>>> Maybe the pan filter can do something by altering the volumes of individual
>>> channels, but as far as I can see you can't change its parameters at runtime?
>>>
>>>
>>> https://ffmpeg.org/ffmpeg-filters.html#Changing-options-at-runtime-with-a-command
>>>
>>> ====
>>> Filter pan
>>>   Remix channels with coefficients (panning).
>>>     Inputs:
>>>        #0: default (audio)
>>>     Outputs:
>>>        #0: default (audio)
>>> pan AVOptions:
>>>    args              <string>     ..F.A......
>>> ===
>>>
>>> no T flag, as you can see (ffmpeg 5.1)
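
For reference, pan can only apply a fixed remix matrix, and, as the missing T
flag above suggests, that matrix cannot be changed mid-stream with
sendcmd/asendcmd. An untested sketch of a static left-heavy mix (file names
are placeholders):

====
# Untested sketch: a fixed remix that pushes most of the program toward the
# left output channel. The coefficients are illustrative only, not tuned.
ffmpeg -i in.mp3 -af "pan=stereo|FL=0.5*FL+0.5*FR|FR=0.1*FL+0.1*FR" out_left.mp3
====
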
>>>
>>> I wonder if our software (cinelerra-gg, a video editor, so a bit
>>> heavyweight) can do this via built-in keyframing. I'll ask on our
>>> mailing list.
>>>
>>>
>> Nope, your software can't do it.
>>
>> Use ffmpeg's stereotools filter with asendcmd. Supports runtime changing
>> of parameters.
>>
>
>
> Thanks for the suggestion! Yes, fully automatic panning on variable-length
> clips is probably not easy to automate in CinGG
> (even in batch mode). But I opened said filter (stereotools) and apparently
> I can set CinGG plugin keyframes for its internal parameters ...
>
> I do not think we have timeline support for ffmpeg filters, but does this
> system offer any advantage in our case?
>


Timeline support and runtime-changeable parameters are different things; they
are not the same.

Timeline support just disables/bypasses processing during certain time
intervals, while runtime parameters can be changed at any point in time.
Parameters can also be interpolated slowly so that no artifacts appear when
they change.
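
To make that concrete, here is a rough, untested sketch of the asendcmd +
stereotools idea for a 30-second clip. The 30.0 and the file names are
placeholders. Two assumptions worth checking on a given build: that
stereotools' balance_out option accepts commands (ffmpeg -h filter=stereotools
shows a T flag on command-capable options), and, from my reading of the
sendcmd/asendcmd documentation, that a command marked with the [expr] flag is
re-sent for every frame inside its interval, with TI running from 0 to 1
across that interval. If expr-flagged commands only fire on entering the
interval, the command list would have to be split into many short stepwise
intervals instead.

====
# Untested sketch: sweep the output balance from full left (-1) to full right
# (+1) over the 0-30 s interval; 2*TI-1 goes from -1 to +1 as TI goes 0 to 1,
# which gives the gradual, interpolated change described above.
# Quoting is POSIX-shell style.
ffmpeg -i in.mp3 -af "asendcmd=c='0.0-30.0 [expr] stereotools balance_out 2*TI-1',stereotools=balance_out=-1" out.mp3
====
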


>
>>
>>>
>>>
>>>> While I could use that workflow to
>>>> manually perform the same transformation on multiple files, for my own
>>>> use as well as to help other family members [who generally have limited
>>>> computer skills] I want to automate that workflow.
>>>>
>>>> Over the past four days I have played as much catch-up as I could on the
>>>> many topics and toolkits which appear as though they might permit me to
>>>> engineer a software solution to this requirement.  As a newbie, I probably
>>>> will not correctly summarize what I believe to be the possible tools and
>>>> approaches, so please forgive any misuse of the correct terminology.  I
>>>> hope/believe that I might be able to state my concepts/questions in a
>>>> manner which will be most considerate of the time of those who
>>>> participate in this mailing list and most quickly help me move closer
>>>> to
>>>> a good approach to the challenge.
>>>>
>>>> 01.  I have managed to download the libraries which are used for the
>>>> maintenance of the ffmpeg, ffprobe and ffplay triumvirate of tools.
>>>>
>>>> 02.  I have managed to successfully build some sample C programs [taken
>>>> from the doc\examples sub-directory and other miscellaneous snippets
>>>> found by following the wonderful links from your Wiki] using the
>>>> CodeBlocks IDE framework.
>>>>
>>>> 03.  I have squirreled my way through the parts of the Doxygen
>>>> documentation which seem like they would be most apropos.
>>>>
>>>> What I did not discover was any functions or examples of what I assumed
>>>> I would need to do, which essentially would be to process the audio
>>>> frames of the FrontLeft [FL] and FrontRight [FR] channels coming out
>>>> of a stream of packets.  That caused me to think that perhaps I would
>>>> find examples of that processing by searching the Audacity sources to
>>>> learn when and how they use the ffmpeg libraries.  And somewhere between
>>>> the Audacity and FFmpeg sites I stumbled upon some sources and some
>>>> documentation concerning what I suppose are two reasonable libraries
>>>> devoted to "resampling" -- soxr and swr.
>>>>
>>>> It was at about that point that I concluded that my modification of the
>>>> sampled frames probably does not fall within the ambit of what is meant
>>>> by resampling at all and that led to an investigation of what Nyquist
>>>> was all about.  Wow, what a guy Mr. Dannenberg must be.  The 2007
>>>> Nyquist Reference Manual is a jaw-dropping read.
>>>>
>>>> I think that is enough background/context.  Here's where I would
>>>> appreciate any suggestions:
>>>>
>>>> A.  Would it be possible to accomplish the steps necessary to achieve
>>>> the desired result just using ffmpeg.exe?  I imagine that, using the
>>>> command line tool and an appropriate shell scripting language, it might
>>>> be necessary to make multiple passes of the original .mp3 file and/or
>>>> the two separate channels.  I am not concerned about that loss of
>>>> throughput; it will always be far faster than any manual procedure.  [A
>>>> command-line sketch for this question appears after these three questions.]
>>>>
>>>> B.  Nonetheless, there are some advantages that would accrue from
>>>> accomplishing the work entirely in an application .exe with a little
>>>> GUI
>>>> glitter to help the user be able to attempt some trial-and-error
>>>> [preview] with slight changes in some of the parameters of the task
>>>> depending upon the nature of the audio content and the manner in which
>>>> the user will eventually play the output on different devices in
>>>> different environments.  Since I will not have the capabilities for
>>>> building an Envelope in the manner that Nyquist [Lisp] accomplishes
>>>> that, can anyone point me to any sample code doing that in C with the
>>>> eight ffmpeg .dll libraries?
>>>>
>>>> C.  Or -- and I appreciate that it is not fair to ask this of this mail
>>>> group -- I would appreciate any experience/advice as to whether the
>>>> solution really ought to be accomplished by some scripting and/or macro
>>>> facilities wrapped around Audacity?
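
Regarding question A above: a single pass of ffmpeg.exe should be enough, with
no intermediate files or separate per-channel passes. Besides the
stereotools/asendcmd route suggested earlier in the thread, a rough, untested
sketch using only stock filters is to downmix to mono and drive the left/right
gains from the time variable t. The 30 is a placeholder for the clip duration
in seconds (ffprobe can report it, so a small wrapper script could fill it in
per file); file names are placeholders, and the quoting/line continuations are
POSIX-shell style, so they would need adapting for cmd.exe.

====
# Untested sketch for question A: downmix to mono, then rebuild stereo with
# time-dependent gains so the apparent source traverses from left to right.
# The cos/sin pair gives a roughly constant-power sweep over 30 seconds.
ffmpeg -i in.mp3 -filter_complex \
"[0:a]pan=mono|c0=0.5*FL+0.5*FR,asplit=2[l][r];\
[l]volume='cos(t/30*PI/2)':eval=frame[lv];\
[r]volume='sin(t/30*PI/2)':eval=frame[rv];\
[lv][rv]join=inputs=2:channel_layout=stereo[out]" \
-map "[out]" out.mp3
====
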
>>>>
>>>> Thank you so much for the fantastic capabilities you have provided with
>>>> the entire FFmpeg effort and for your patience in reading through my
>>>> questions as the bell is about to strike on the New Year.
>>>>