[Ffmpeg-devel-irc] ffmpeg.log.20120827
burek
burek021 at gmail.com
Tue Aug 28 02:05:01 CEST 2012
[00:02] <cbsrobot> !command
[00:07] <t4nk918> ffmpeg -i jellies.mp4 -vf ass=jellies.ass jellies_sub.mp4
[00:08] <t4nk918> In the output: [Parsed_ass_0 @ 00000000048f8940] Neither PlayResX nor PlayResY defined. Assuming 384x288
[00:09] <DelphiWorld> healthy nighty everyone
[00:12] <cbsrobot> t4nk918: something is wrong with your ass file
[00:12] <cbsrobot> can you pastebin the first few lines ?
[00:32] <t4nk918> but my ass file is created from ffmpeg
[00:32] <t4nk918> with command: ffmpeg -i jellies.srt jellies.ass
[00:33] <ubitux> can you share the .srt?
[00:33] <t4nk918> yes...
[00:34] <t4nk918> I download from: http://www.storiesinflight.com/js_videosub/
[00:34] <t4nk918> http://www.storiesinflight.com/js_videosub/jellies.srt
[00:35] <ubitux> ./ffmpeg -i jellies.srt jellies.ass
[00:35] <ubitux> ./ffplay -f lavfi testsrc,ass=jellies.ass
[00:35] <ubitux> this works for me
[00:35] <ubitux> (i can see the subtitles)
[00:36] <ubitux> you can replace "testsrc" with "color", to see them better
[00:37] <t4nk918> I dont see it
[00:37] <t4nk918> [Parsed_ass_1 @ 044fe5c0] Added subtitle file: 'jellies.ass' (2 styles, 6 events) [Parsed_ass_1 @ 044fe5c0] Fontconfig disabled, only default font will be used. [Parsed_ass_1 @ 044fe5c0] Neither PlayResX nor PlayResY defined. Assuming 384x288 [lavfi @ 044fcb80] Estimating duration from bitrate, this may be inaccurate Input #0, lavfi, from 'testsrc,ass=jellies.ass':
[00:38] <ubitux> i have this as well, but the subtitles appear
[00:38] <ubitux> t4nk918: is your ffmpeg built with libfreetype?
[00:38] <t4nk918> I dont know
[00:39] <ubitux> it's displayed on top of the output
[00:39] <ubitux> configuration line
[00:39] <t4nk918> I use windows and try with:
[00:39] <t4nk918> ffmpeg version N-43804-g780bf75 Copyright (c) 2000-2012 the FFmpeg developers built on Aug 21 2012 21:16:09 with gcc 4.7.1 (GCC) configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --
[00:39] <ubitux> your output is truncated but ok
[00:40] <ubitux> i don't have windows, so i can't help you reproduce this issue
[00:40] <t4nk918> ok
[00:43] <t4nk918> Thanks
[01:39] <Bombo> hi
[01:40] <Bombo> i got 0000.png to 0250.png, i encode it with 'ffmpeg -y -i %%04d.png -b 5000k -r 25 0000.m2v' but is it possible to reverse the order? so that 0250.png is the first frame in the video?
[02:01] <t4nk918> solved: http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=10&t=318&start=20
[02:02] <t4nk918> I created a fonts dir in the ffmpeg installation and copied fonts.conf into it
[02:03] <t4nk918> Thanks a lot
[02:03] <t4nk918> bye
[02:19] <grepper> Bombo: on linux I would probably create an array in bash and use that, or create symlinks. Not sure what you would do on windows.
[02:20] <Bombo> grepper: hmyes, i got bash.exe ;)
[02:21] <grepper> for i in {250..1}; do array+=( $(printf %04d%s $i .png ) ); done
[02:21] <grepper> or somesuch
[02:21] <tdr> yeah, use POSIX tools, forget the windows scripts
[02:21] <grepper> assuming no spaces in files, quote it otherwise
[02:21] <tdr> well bash isn't technically POSIX, but closer than windows is
[02:22] <Bombo> this worked: $ newcount=0;for i in $(seq -w 250 -1 1); do newcount=$((newcount+1)); newcountzero=$(printf "%04d" $newcount);echo mv 0$i.png rev/$newcountzero.png;done
[02:23] <Bombo> i just wondered if it can be done in ffmpeg directly
[02:28] <grepper> well, the way I mentioned you wouldn't have to rename them, just use ffmpeg -i "${array[@]}" ...
[02:29] <Bombo> oh thats a nice trick then, thx
[02:30] <grepper> np
[04:16] <Nanobot> I'm new to video editing. I have an mkv using h.264/flac/ass subtitles, and I want to trim just a few frames from the beginning of the video (adjusting audio and subtitle timing accordingly). The cutting point is between two keyframes. I'd like to only reencode the frames up to the next keyframe, and have the rest of the video copied losslessly. Any advice on how to do this?
[04:20] <FelipeS> hey all, I'm kind of lost trying to use the ffmpeg api. For example I may create an AVInputFormat with av_find_input_format() but then, am I supposed to free it? docs: http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/avformat_8h.html#7d2f532c6653c2419b17956712fdf3da
[04:20] <FelipeS> the docs merely describe each function, but is there a 'synopsis' of the overall API anywhere?
[04:29] <FelipeS> also, so far I'm having this same problem as the OP in the following link, except I'm trying with h264 encoded files http://stackoverflow.com/questions/11144954/cannot-open-file-using-avformat-open-input-returns-cannot-open-invalid-data
[04:31] <FelipeS> erm
[04:31] <FelipeS> maybe I should ask, if I want to build ffmpeg for decoding h264 files in, say, MPEG-4 Part 14 (mp4) containers, which options should I enable
[04:32] <FelipeS> I'm still not very familiar with the terminology & what muxers/demuxers/parsers are when it comes to processing videos
[04:58] <FelipeS> anyone?
[06:09] <FelipeS> So I finally managed to build an ffmpeg static lib that opened an h264 mpegts file, but here's the output when I open it: Could not find codec parameters for stream 1 (Audio: aac ([15][0][0][0] / 0x000F), 0 channels): unspecified sample rate
[06:09] <FelipeS> Consider increasing the value for the 'analyzeduration' and 'probesize' options
[06:10] <FelipeS> ffmpeg -i does show the sampling rate (22.05 KHz)
[06:22] <NonaSuomy> Hi guys, anyone know of a way to make a sort of walkie-talkie for a website? so you could have a microphone with a button that grabs live audio only on demand, then somehow stream it on a site so people watching a live video feed could hear that audio, and then have a button to send audio back to that user on demand
[06:24] <NonaSuomy> so the people in the room where the video is can choose if they want to be live audio streamed
[06:25] <NonaSuomy> we have 7 cameras running through ffserver right now, so I was hoping there would be a way to make the audio available on demand per user instead of always streaming
[08:20] <fling> do I need PAL or NTSC?
[08:21] <cbreak-work> both suck
[08:21] <fling> right
[08:22] <fling> I can choose fps with my camera: 29,97; 25; 23,976
[08:22] <fling> do I need 25?
[08:24] <cbreak-work> what's your target?
[08:24] <fling> I want to capture fine video, idk which framerate is better to use
[08:24] <cbreak-work> if possible, choose a non-interlaced mode.
[08:25] <cbreak-work> 25 is pal framerate (half the pal field rate), 30000/1001 is NTSC Framerate (half of the NTSC Field Rate)
[08:25] <cbreak-work> and 24000/1001 is NTSC FILM framerate
[08:39] <fling> still need to choose proper fps
[11:22] <buhman> I'm attempting to reencode some video created by a crappy video camera
[11:27] <buhman> ffmpeg manages to desynchronize the audio and video completely
[11:28] <buhman> mplayer claims the length of the video is 1:21 (after seeking to the end; I guess the metadata for its actual length is missing)
[11:28] <buhman> ffmpeg then creates a 3:10 video (with the length in the container)
[11:29] <buhman> the audio plays at normal speed, and the video is roughly 2x slower than it should be?
[11:29] <buhman> would anyone like the source and output?
[11:29] <buhman> ffmpeg -i CAPTURE-HD-RM164_2012-08-20_13_21_28.ts -vcodec libx264 -quality best -preset veryslow -crf 28 -filter:v yadif -acodec aac -b:a 96k -strict experimental -threads 0 crf28-fast.mp4
[11:30] <buhman> source: http://buhman.org/CAPTURE-HD-RM164_2012-08-20_13_21_28.ts (1.5G)
[11:31] <buhman> output: http://buhman.org/crf28-fast.mp4 (255M)
[11:31] <buhman> what erm happened?
[11:41] <fling> buhman: I have the same problem since 0.6.90
[11:50] <cbsrobot> buhman: I get a 403 for the source file
[11:56] <buhman> cbsrobot: O.o
[11:57] <cbsrobot> HTTP request sent, awaiting response... 403 Forbidden
[11:57] <buhman> I see it
[11:57] <buhman> try now
[11:58] <buhman> fling: oh?
[11:58] <fling> buhman: uh?
[12:05] <buhman_> apologies
[12:08] <buhman> last I heard was 04:58 < fling> buhman: uh?
[12:08] <fling> 16:58 < buhman> fling: oh?
[12:08] <fALSO> Hi there!
[12:09] <buhman> fling: what an awful timezone that is
[12:09] <fALSO> anyone knows some up-to-date instructions to convert videos to PSP format ?
[12:09] <fALSO> all the pages i find are from 2006 and stuff like that
[12:09] <fALSO> most of the ffmpeg options have been changed since then
[12:09] <buhman> fALSO: well, what is PSP format?
[12:10] <fALSO> i know its mp4... so its h264 surely
[12:10] <fALSO> but i dont know anything about resolutions and bitrates and stuff like that
[12:12] <buhman> fALSO: do you have any videos that currently work?
[12:12] <fALSO> yap... but not here , just at home
[12:12] <fALSO> im trying to find more info
[12:13] <buhman> fALSO: if you find a video that works, it would be trivial to spit out the ffmpeg arguments you'd want to do that
[12:13] <fALSO> ok!
[12:13] <fALSO> theres some problems i think, because of the resolutions and stuff
[12:13] <fALSO> to keep aspect ratio and stuff like that
[12:13] <buhman> -aspect foo:bar
[12:15] <fALSO> found some info in japanese
[12:15] <buhman> lovely
[12:15] <fALSO> http://d.hatena.ne.jp/knaka20blue/20120720/1342754824
[12:15] <fALSO> let me now see if i can get a windows build of ffmpeg that supports all those codecs, etc
[12:17] <buhman> fALSO: -vpre doesn't exist anymore fwiw
[12:18] <buhman> erm, at least that implies one of the older versions of ffmpeg that didn't use the libx264 presets
[12:18] <buhman> you'll want something like "-preset slow -quality best"
[12:19] <fALSO> ok
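For reference, a heavily hedged sketch of a PSP-style encode with a recent ffmpeg; the 480-wide frame size, baseline H.264 profile, and AAC-LC audio are assumptions about what the PSP accepts, and some firmwares are also picky about file names and folder layout, so the details may need adjusting:

    ffmpeg -i input.avi \
        -c:v libx264 -preset slow -profile:v baseline -level 3.0 \
        -vf "scale=480:-2" -b:v 700k \
        -c:a aac -b:a 128k -ac 2 -ar 44100 \
        output.mp4

The scale=480:-2 filter keeps the aspect ratio by choosing a matching even height; older builds may need -strict experimental for the built-in aac encoder.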
[12:21] <fALSO> buhman, do you recommend any "special" build of ffmpeg for windows?
[12:22] <buhman> fALSO: I recommend against windows
[12:22] <fALSO> heheh
[12:22] <fALSO> :-P
[12:26] <fALSO> looks like the one i used doesnt have libfaac support
[12:26] <fALSO> going to try to find another
[12:28] <fALSO> hehe crashed ffmpeg
[12:47] <fALSO> well it seems that it also doesnt support libvo-aacenc
[12:47] <fALSO> weird...
[12:47] <fALSO> also ffmpeg.org is down
[14:01] <Spamoi> hi, i couldn't find how to fix rpath for ffmpeg in a "proper way", any suggestions?
[15:38] <FelipeS> maybe I should ask, If I want to build ffmpeg for decoding h264 files in say mpeg4 containers, which options should I enable at compile time?
[15:40] <JEEB> unless you want a minimalistic build, you only need the default ./configure set-up and that's it
[15:40] <JEEB> (everything LGPL enabled)
[15:41] <FelipeS> JEEB yeah I'd prefer the minimal build. I'm compiling for iOS
[15:41] <FelipeS> not for putting up on app store, just a research project
[15:42] <JEEB> run the ./configure once first to get a listing of video/audio codecs and containers (formats) etc.
[15:42] <JEEB> then check --help output of the configure script
[15:42] <JEEB> to check if it was --enable-decoder-X or whatever
[15:42] <JEEB> and then --disable-everything --enable-shit-you-need
[15:43] <FelipeS> JEEB, right. I suppose for reading an mp4 h264 file I would need protocol=file, demuxer=mp4, parser=h264, decoder=h264 right?
[15:44] <JEEB> something like that
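The minimal configure line being sketched here would look roughly like the following; note that the mp4/mov demuxer is named "mov" in FFmpeg, the exact component names can be checked against ./configure --help, and the iOS cross-compilation flags are left out:

    ./configure --disable-everything \
        --enable-protocol=file \
        --enable-demuxer=mov \
        --enable-parser=h264 \
        --enable-decoder=h264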
[15:48] <FelipeS> JEEB, ok well I managed to open up instead an mpegts file (h264 & aac), using the API and it reports Could not find codec parameters for stream 1 (Audio: aac ([15][0][0][0] / 0x000F), 0 channels): unspecified sample rate
[15:48] <FelipeS> Consider increasing the value for the 'analyzeduration' and 'probesize' options
[15:51] <FelipeS> JEEB, are you familiar with the API? I'm guessing it's taking too long to guess the video/audio encoding? I saw you can just specify the input format using av_find_input_format but that seems to be just for the container? Can't seem to find a way to specify stream format too
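Those two options belong to the format context, so on the command line they look like the sketch below (the values are only illustrative: probesize is in bytes, analyzeduration in microseconds); when using the API directly, the same keys can be passed in the AVDictionary handed to avformat_open_input before calling avformat_find_stream_info:

    ffmpeg -probesize 5000000 -analyzeduration 10000000 -i input.ts output.mp4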
[17:03] <Bombo> grepper: i tried this now: for i in {250..1}; do array+=( $(printf %04d%s $i .png ) ); done; ffmpeg -i "${array[@]}" -b 5000k -r 25 0000.m2v unfortunately there were too few quotes i guess, so i got all the .png overwritten with 0250.png
[17:05] <relaxed> {250..001}
[17:07] <Bombo> relaxed: %04d
[17:07] <relaxed> okay, {0250..0001}
[17:07] <Bombo> doesn't matter ;)
[17:08] <Bombo> the %04d fixes it
[17:12] <Bombo> i really should have backed up the ~5h rendered pngs before trying that out
[17:22] -:#ffmpeg- [freenode-info] if you're at a conference and other people are having trouble connecting, please mention it to staff: http://freenode.net/faq.shtml#gettinghelp
[17:24] <Bombo> yes, but i'm too tired to think or to learn lessons ;)
[17:25] <grepper> {0250..001} is bash 4, you're okay if that is what your cygwin uses
[17:27] <Bombo> oh right, i tested that in linux with bash 4
[17:27] <Bombo> in mingw/msys i got bash 3
[17:27] <Bombo> so zero zeroes
[17:27] <grepper> use the other one then
[17:28] <Bombo> right ;)
[17:36] <Bombo> how about this: for i in {250..1}; do echo $i; array+=( $(printf -- "-i %04d.png" $i ) ); done
[17:37] <Bombo> and then ffmpeg "${array[@]}" -b 5000k -r 25 0000.m2v
[17:37] <Bombo> loads the pngs, but the m2v is empty... hmmm
[17:39] <relaxed> Explain what you're trying to do. What is your input?
[17:39] <Bombo> input are frames from 0000.png to 0250.png
[17:40] <Bombo> i want 0250.png to be frame 1 in the video
[17:40] <Bombo> to have the animation backwards
[17:46] <Bombo> [buffersink @ 0284cbc0] No opaque field provided
[17:46] <Bombo> does that matter?
[17:46] <Bombo> ffmpeg seemed to just take the first frame
[17:52] <grepper> guess you'll have to create symlinks or rename them
[17:52] Action: grepper hides
[17:53] <Bombo> hehe, yep, did that (renaming)
[17:59] <grepper> guess I've been using mjpeg tools too much where giving a list is just fine, sorry
[18:01] <Bombo> its ok, was worth a try
[18:12] <grepper> on a better day I would have given you a command to symlink them to %06d or such instead.
[18:13] Action: grepper off for haircut and laundry
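A sketch of that symlink idea, assuming Linux/bash and the 0000.png-0250.png range from the discussion above (on msys, cp would have to stand in for ln -s):

    mkdir -p rev
    n=0
    for i in $(seq 250 -1 0); do
        ln -s "../$(printf %04d.png "$i")" "rev/$(printf %04d.png "$n")"   # rev/0000.png -> 0250.png, etc.
        n=$((n+1))
    done
    ffmpeg -i rev/%04d.png -b:v 5000k -r 25 reversed.m2v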
[18:14] <zmuser3> I want to take 5 video files and 3 songs and combine them. during two of the videos the audio from the video should be muted. during the others, the song volume should go down to about 25% and then come back up during the next video. the song should fade out at the end of the last video. can I automate this with ffmpeg? the audio part I have no idea how to handle
[18:29] <relaxed> I'm not sure if ffmpeg can handle the audio the way you want but SoX probably can.
[18:31] <zmuser3> ok I will check it out
[18:33] <relaxed> sounds like you need video editing software
[18:35] <zmuser3> when I land I have just a few minutes to pack my parachute and edit a video and burn a dvd. while it is possible, it is a mad rush. I am trying to automate it so I can just relax and pack
[18:36] <zmuser3> then maybe I can package it up and sell it to other videographers
[18:37] <relaxed> Anything is possible with scripting+ffmpeg. Though writing it may take a while.
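A rough sketch of what such a script could do for the audio, using SoX for the gain and fade and ffmpeg to mux the processed track back under a clip; the file names and the 180-second song length are made up:

    sox song.wav song_quiet.wav vol 0.25             # duck the song to about 25%
    sox song3.wav song3_faded.wav fade t 0 180 5     # fade out over the last 5 seconds
    ffmpeg -i clip1.mp4 -i song_quiet.wav -map 0:v -map 1:a -c:v copy -shortest out1.mp4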
[18:42] <tiborfabian> hi, i'm new here. i'm using ffmpeg to compress a bunch of video files, can i normalize the audio track using ffmpeg itself?
[20:20] <samon_nerd> is it possible to create my own shared lib and then compile it into ffmpeg? What would need to happen for me to do this? Do I just make reference to it in the .config file that is used at compile time? The use case here is that I want to make a new lib that just adds the possibility of passing a new argument to the ffmpeg CLI (named '-howManyGoats') that causes the ffmpeg library to call out to my library to add an image, of my c
[20:20] <samon_nerd> before it gets compressed by some compression codec
[20:21] <samon_nerd> (in this case, the image would be added to the raw frames just before being passed to something like the x264 codec for compression)
[20:23] <samon_nerd> please excuse the newb-ness of my query. :)
[20:23] <guest1234> So, I have a few videos with one video stream and three (or so) audio streams, any way to specify that I want to extract the video and only one audio (specified by name)?
[20:24] <samon_nerd> you could use -map
[20:24] <guest1234> I'm not sure which stream # it will be for each video
[20:24] <samon_nerd> ya, you'd have to write logic in your code to figure that out first
[20:25] <samon_nerd> perhaps parse some mediainfo output
[20:25] <samon_nerd> I think media info can output .csv
[20:25] <guest1234> Hmm. So there is no way to specify name directly?
[20:27] <konfoo> does anyone know where in the code (or in ffmpeg.c) the cmdline output statistics like size= time= etc. are? i need to adjust them for a wrapper app
[20:27] <guest1234> konfoo: grep it?
[20:27] <konfoo> already did
[20:29] <konfoo> ah nm its in print_report
[20:29] <samon_nerd> @ guest1234 if there was a way to specify a name, how would you know what that name is if you don't know what tracks you are looking for?
[20:30] <samon_nerd> you would have to preprocess your files to have the names
[20:30] <guest1234> I know the title of the track, just no idea which # it will be (hence the initial question about name)
[20:30] <samon_nerd> regardless, mediainfo may also give you the names
[20:30] <samon_nerd> if they exist
[20:30] <guest1234> ffprobe shows them too
[20:31] <samon_nerd> perhaps try to parse that output and find the name you are looking for and then get the track number that is associated with that name
[20:32] <samon_nerd> I believe it should be something like #0:5 NAME bitrate:1000k ....
[20:32] <samon_nerd> just off the top of my head
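A sketch of that two-step approach with ffprobe and -map (file names are made up; -map also accepts the absolute stream index that ffprobe prints):

    # list each stream's index, type and title tag
    ffprobe -show_streams input.mkv | grep -E '^(index|codec_type|TAG:title)='
    # if the wanted track turned out to be absolute stream index 2:
    ffmpeg -i input.mkv -map 0:v:0 -map 0:2 -c copy output.mkv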
[21:18] <samon_nerd> anyone with an idea about my "shared libs" question
[21:35] <Sashmo_at_work> can anyone tell me the best settings for encoding live sports at about 2Mbps in h264?
[22:45] <hypnocat> does anyone know how to get ffplay output to jack?
[22:56] <saste> hypnocat: no way, we don't have a jack output device
[22:56] <hypnocat> ah, too bad
[22:56] <hypnocat> thanks anyway
[23:22] <saste> hypnocat: patches/feature requests are welcome
[23:22] <hypnocat> actually, i just figured out how to do it
[23:22] <hypnocat> through changes in my .asoundrc
[23:22] <hypnocat> will give a detailed description of the process in a bit..
[23:23] <saste> hypnocat: well ffplay uses SDL, which may in turn support jack i suppose
[23:23] <hypnocat> i didn't use any direct jack capability of ffplay
[23:23] <hypnocat> these are plain alsa routing procedures
[23:52] <Iszak> When/will ffmpeg get intel quicksync support?
[23:52] <JEEB> interested in coding it?
[23:53] <Iszak> I would, but i wouldn't know where to start!
[23:54] <JEEB> go check up on whatever API you'd be using for it, and then go check out libavcodec stuff
[23:54] <JEEB> that'd be the beginning of it
[23:55] <Iszak> but that's to say no support currently exists and none is planned?
[23:56] <JEEB> people usually work on what they're interested in. the quicksync encoder is just a black box with some switches that you stuff raw video into, and that doesn't even match libx264 in most cases if you have a CPU that can use it (sandy/ivy)
[23:56] <JEEB> the decoder might or might not already have patches via va-api or whatever it is
[23:56] <JEEB> once again, I don't remember seeing anyone having interest in it, and you can check if anything got in by checking the libavcodec folder's contents
[23:58] <Iszak> I'd just be better off getting a better CPU.
[00:00] --- Tue Aug 28 2012