[Ffmpeg-devel-irc] ffmpeg.log.20121228
burek
burek021 at gmail.com
Sat Dec 29 02:05:01 CET 2012
[00:19] <RSD> which encoder is best for aac
[00:19] <JEEBsv> usable via ffmpeg? fdk-aac
[00:19] <RSD> yes
[00:19] <RSD> the libvo_aacenc is not good?
[00:19] <JEEBsv> nope
[00:20] <RSD> and the experimental one?
[00:20] <JEEBsv> it's on par or even worse than the internal ffaac encoder in libavcodec (codec name 'aac')
[00:20] <JEEBsv> the experimental one is pretty bad, but probably better than vo_aacenc
[00:21] <RSD> fdk-aac is the best?
[00:21] <JEEBsv> at the moment, yes
[00:21] <JEEBsv> fraunhofer's encoder, after all
[00:24] <RSD> and if I want librtmp on centos 6.3
[00:24] <RSD> if I do ./configure --enable-gpl --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-libfdk-aac --enable-librtmp it says it can't find librtmp
[00:25] <JEEBsv> what do you want to do with librtmp?
[00:25] <RSD> read rtmp streams
[00:25] <RSD> and output
[00:25] <JEEBsv> ok, reading
[00:25] <JEEBsv> hmm
[00:25] <JEEBsv> I thought enabling the librtmp would disable the internal rtmp(e) implementation?
[00:25] <RSD> I have installed librtmp in /usr/local/lib
[00:26] <JEEBsv> which would disable the rtmp(e) output features
[00:26] <RSD> so I better remove librtmp
[00:26] <JEEBsv> well, you just don't enable it and try like that
[00:27] <RSD> I also got that message while trying to get opus working
[00:27] <RSD> ERROR: librtmp not found
[00:27] <JEEBsv> you probably had --enable-librtmp there too then :P
[00:27] <JEEBsv> because opus doesn't need rtmp
[00:30] <RSD> no not at that time
[00:30] <RSD> it says it can't find the lib while it is there
[00:30] <JEEBsv> it will not tell you librtmp not found if you didn't enable librtmp
[00:30] <JEEBsv> as far as I know
[00:31] <JEEBsv> it should tell libopus not found if it's not found
[00:32] <JEEBsv> and in that case you should either use PKG_CONFIG_PATH to add your prefix's lib/pkgconfig into the pkg-config path, or use --extra-cflags and --extra-ldflags to add -I/your/prefix/include and -L/your/prefix/lib to compiler's and linker's flags
[00:36] <RSD> also if i configure with disable-shared?
[00:40] <RSD> opus.pc is in the pkgconfig dir
[00:57] <an3k> RSD: I'm using Nero AAC Encoder. It's free and also used by MeGUI
[01:57] <brx_> can ffmpeg handle overlaying text in a black box over a video across the bottom of a vid or will i have to break it into jpgs and do this programmatically within my app code, and then reencode the images back to a video?
[02:13] <impy> brx_: watermark or subtitles? (in either case the answer is yes though)
[02:13] <brx_> watermark impy
[02:14] <brx_> well, a black box at the bottom with a date in it
[02:15] <brx_> yes as in ffmpeg can do it alone?
[02:16] <impy> brx_: yeah, you have to look at the -vf (video filter) option
[02:16] <impy> then you can add as arguments the picture of the blackbox with text and the overlay offset
[02:17] <brx_> ok great
[02:17] <brx_> there's 2 ways i could do this. 1) break into jpgs, draw onto each frame in java and then merge the jpgs again with the audio
[02:17] <brx_> 2)video-filter
[02:17] <brx_> which should i choose impy ?
[02:18] <impy> video filter
[02:18] <brx_> ok thanks
[02:27] <ubitux> vf drawtext or ass
[03:04] <brx_> ubitux impy, ffmpeg -i 2012-12-27.mp4 -vf "movie=bb.png [movie]; [in] [movie] overlay=0:0 [out]" -vcodec copy -acodec copy out.mp4
[03:04] <brx_> bb.png must be in the current working directory or ffmpeg directory?
[03:04] <brx_> i never see any examples referencing a full path with that parameter
[03:04] <brx_> and the image never overlays
[03:09] <ubitux> you can't use -vcodec copy with -vf
[03:10] <brx_> ok i just realized that, it works with ffmpeg -y -i 2012-12-27.mp4 -vf "movie=bb.png [watermark]; [in][watermark] overlay=main_w-overlay_w-50:main_h-overlay_h-10 [out]" -vcodec libx264 -crf 20.0 -acodec copy out.mp4
[03:10] <brx_> thanks ubitux
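ubitux's earlier drawtext suggestion would draw the black box and the date itself, with no PNG needed. A sketch follows; the font path and date text are placeholders, and only the filter string is built and printed here rather than running ffmpeg:

```shell
# Sketch of a drawtext-based date box (ubitux's suggestion); font path
# and text are placeholders. box=1 draws a filled background behind
# the text, boxcolor=black@0.8 makes it mostly-opaque black.
vf="drawtext=fontfile=/usr/share/fonts/TTF/DejaVuSans.ttf:text='2012-12-27'"
vf="$vf:x=(w-text_w)/2:y=h-text_h-10:fontcolor=white:box=1:boxcolor=black@0.8"
echo "$vf"
# usage: ffmpeg -i in.mp4 -vf "$vf" -vcodec libx264 -crf 20 -acodec copy out.mp4
```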
[03:13] <roboman2444> you guys might know about this....
[03:13] <roboman2444> i need a slim audio tool that can output to stdout
[03:14] <roboman2444> something that can manipulate microphone and audio files on the fly, kindof like a radio operator's mixing board
[03:14] <roboman2444> oh, and a console or gtk or whatever ui is fine
[03:17] <ubitux> ffmpeg can push to stdout
[03:17] <ubitux> if you want a slim ffmpeg, play with the configure (hint: --disable-everything)
[03:22] <roboman2444> no no no
[03:22] <roboman2444> i mean some sort of mixing software
[03:23] <roboman2444> that allows me to mix multiple inputs, and manage them on the fly
[03:23] <sisco-19> hi all
[03:23] <roboman2444> and pipe the output (in any form that that ffmpeg can handle... so any) to stdout
[03:24] <sisco-19> i have a question for you: is there any script or program that gets the stream status from ffmpeg if it stops, where the input is a live source ????
[03:25] <sisco-19> :)
[03:26] <p4plus2> Does ffmpeg have any sort of noise filtering for audio -- it would be sufficient if it only ran during silence to make it true silence
[03:26] <p4plus2> more ideally would be the ability to control which channel the filter was applied to but this is optional
[03:26] <p4plus2> I did find the silence detector but nothing for denoising (if that is even a word)
[03:27] <sisco-19> r u trying to put another audio on the video ?
[03:27] <p4plus2> no
[03:27] <sisco-19> to make silent?
[03:27] <sisco-19> it*
[03:28] <p4plus2> I am only attempting to silence areas of dead air during the video
[03:29] <sisco-19> correct me if im wrong, you have video and lets say a part of this video there's some noises and u wanna make it silent?
[03:30] <p4plus2> no
[03:31] <p4plus2> the video itself has full audio -- speech, to be more precise
[03:31] <p4plus2> but in sections of dead air I would like a denoise filter to reduce background hum
[03:33] <roboman2444> college radio
[03:33] <roboman2444> "dead air, um, dead air"
[03:34] <sisco-19> i think you need third party tool like Audacity
[03:35] <p4plus2> sisco-19: perhaps -- thats how I do it right now
[03:35] <p4plus2> but I was asking the question in hopes of potentially streamlining the process a bit more
[03:36] <sisco-19> did you try -nr ?
[03:56] <p4plus2> isn't -nr a video option?
[03:59] <tats> hello
[04:01] <tats> I'm having a strange issue with a format context: I get nb_streams = 0 even though the av_dump_format() outputs many streams
[04:01] <tats> I looked that up on the internet but only found questions without answers
[04:01] <sisco-19_> yes it is, and it's for noise reduction
[04:02] <tats> I'm on ubuntu 12.04. I compiled my own libraries following these instructions: https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide
[04:03] <tats> (I mean: I compiled ffmpeg / x264 / fdk-aac / libvpx latest version with extra compilation flags for codec supports)
[05:50] <Abhijit> hi
[05:51] <Abhijit> i am getting this error while trying to compile libavg http://paste.opensuse.org/70004890
[05:51] <Abhijit> ffmpeg has no member age
[05:51] <Abhijit> help please
[06:31] <Abhijit> hi
[06:31] <Abhijit> i am getting error AVFrame has no member named age
[06:31] <Abhijit> help please
[08:50] <kdns> Hi all
[08:52] <kdns> I hope someone can give me some advice... i have two webcams and jack audio, which I'd like to combine into a picture-in-picture image with the jack audio, as a video which i can save to disk and stream to the web in realtime.
[09:10] <kcm1700> is it legal to put new sps pps information packet into the h264 decoder in order to decode other h264 video?
[10:18] <RSD> I am trying to install libopus on centos 6.3
[10:18] <RSD> but it can't find the libs
[10:18] <RSD> it says
[10:18] <RSD> ERROR: opus not found
[10:18] <RSD> I tried to install libopus with --prefix=/usr
[10:18] <RSD> I tried it without prefix
[10:19] <RSD> I added /usr/local/lib to PATH
[10:19] <RSD> In ld.so.conf is /usr/local/lib
[10:19] <RSD> please help
[10:20] <sacarasc> Did you compile it yourself or install using your package manager?
[11:03] <RSD> compiled it myself
[11:16] <RSD> installed libopus from repo but still not working?
[11:16] <RSD> why?
[11:29] <SirDarius> hmm any reason ffmpeg would generate FLV (AVC/AAC) files with non-monotonic PTS ? it seems the very last video tag in some files is presented before the previous audio tags, that's troublesome...
[11:29] <SirDarius> and it's only five bytes long... 17 02 00 00 00
[11:34] <SirDarius> hmmm that's an AVC end-of-sequence, i should probably omit it altogether from the stream
[11:44] <RSD> no one on libopus in centos, or isn't it necessary?
[11:49] <SirDarius> is the .so correctly installed ?
[11:50] <SirDarius> (or .a)
[11:50] <RSD> yes they are in the dir
[11:51] <SirDarius> /usr/lib, or /usr/local/lib ?
[12:04] <RSD> but yes
[14:23] <norbert_> is there a mjpeg variant that accepts png images?
[14:23] <norbert_> for -vcodec input, I mean
[14:40] <norbert_> http://ffmpeg.org/trac/ffmpeg/wiki/Create%20a%20video%20slideshow%20from%20images
[14:41] <norbert_> I'm doing that "or for png images" thing there
[14:41] <norbert_> but all ffmpeg gives me is: Error while decoding stream #0.0
[14:43] <SirDarius> @Mavrik in the end, skipping the AVC Sequence-start / End-of-sequence packets makes streaming work fine
[14:43] <norbert_> what I'm doing is straight from the wiki (URL above): cat *.png | ffmpeg -f image2pipe -r 1 -vcodec png -i - -vcodec libx264 out.mp4
[14:43] <Mavrik> yeah
[14:43] <Mavrik> give us the full output.
[14:44] <SirDarius> thanks for the useful pointer ;)
[14:44] <SirDarius> (even though it's been a few days since i've asked)
[14:44] <norbert_> and the result is: http://pastebin.com/aJQfTNF9
[14:47] <norbert_> Mavrik: done pasting :) any ideas?
[14:47] <klaxa> update your version first dude
[14:49] <Mavrik> yeah
[14:49] <klaxa> either compile latest git from source or grab a static build
[14:49] <Mavrik> norbert_: and why aren't you just using image2 demuxer?
[14:49] <Mavrik> this one: http://ffmpeg.org/ffmpeg-formats.html#image2-2
[14:53] <norbert_> here is something I cannot understand, and I'm pretty sure it's because ffmpeg made a terrible choice
[14:53] <norbert_> why does the order of parameters matter
[14:53] <norbert_> why can't we have -vincodec to indicate a video in codec or something
[14:54] <norbert_> why does everything change when I move a -r to the front or the back of an ffmpeg command
[14:54] <norbert_> maybe that makes sense to developers, who prefer to use only one -r, but it would be so much easier for users if you could just say -rout and -rin
[14:54] <norbert_> or whatever
[14:54] <Mavrik> preference.
[14:55] <klaxa> pretty much
[14:55] <klaxa> generally it's like ffmpeg -f <format/filter> <video input settings> -i <input> <video output settings> outputfile.<container>
[14:56] <Mavrik> [input 1 parameters][input 1][input 2 parameters][input 2] ... [output1 parameters][output1][output2 parameters][output2]
[14:56] <Mavrik> it's still rather logical ;)
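klaxa's and Mavrik's point about option placement can be illustrated with two command lines; the filenames are placeholders, and only the strings are printed here rather than running ffmpeg:

```shell
# Per-file options apply to whichever file follows them, so the same
# flag means different things in different positions (Mavrik's point).
input_side='ffmpeg -r 1 -i img-%04d.png out.mp4'   # -r before -i: read the images at 1 fps
output_side='ffmpeg -i in.mp4 -r 25 out.mp4'       # -r after the input: 25 fps output
printf '%s\n%s\n' "$input_side" "$output_side"
```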
[14:57] <norbert_> I hope one of you can help me out with the following
[14:57] <norbert_> I currently have -i 00%04d.png which works fine
[14:57] <norbert_> but I would like to pass "cat $(ls -r *.png) | ffmpeg" to ffmpeg
[14:58] <norbert_> I'm not sure how to replace the -i part with what I'm piping to it with cat
[14:58] <norbert_> I know I could do -vcodec mjpeg, but that would be for .jpg files
[14:58] <norbert_> and I tried .png, but get the error I pasted at pastebin.com
[14:59] <norbert_> I tried backquoting the result but ffmpeg said there were too many input files
[14:59] <Mavrik> yes, because there's no video format that would be built from sequential png files
[14:59] <norbert_> ah, ok
[14:59] <norbert_> hm...
[14:59] <Mavrik> that's what confuses ffmpeg
[14:59] <norbert_> but jpeg is not lossless
[14:59] <Mavrik> also when you're piping
[14:59] <Mavrik> probably for you the easiest would be to rename files to something you can pass to image2 demuxer
[14:59] <norbert_> I'm trying to use png images because I did it with jpg before and even at the highest quality the end result didn't look as good as the source
[15:00] <norbert_> but the example for image2 demuxer also mentions ffmpeg -start_number 100 -i 'img-%03d.jpeg' -r 10 out.mkv
[15:00] <norbert_> the first example there is ffmpeg -i 'img-%03d.jpeg' -r 10 out.mkv
[15:01] <norbert_> I would still need to pipe the png's I have to that command somehow
[15:02] <Mavrik> huh?
[15:02] <Mavrik> you're not really reading the docs.
[15:03] <norbert_> well, I'm pretty much past that, since I've been looking at docs for 2 days now
[15:03] <klaxa> when i execute what's written in the wiki it works
[15:03] <klaxa> cat shot* | ffmpeg -f image2pipe -r 1 -c:v png -i - -c:v libx264 shots.mp4
[15:03] <klaxa> that produces a playable video for me
[15:04] <klaxa> the files are mplayer2 screenshots, all png
[15:04] <klaxa> i strongly recommend you update your ffmpeg
[15:04] <norbert_> updating my ffmpeg would be a pain
[15:05] <norbert_> it breaks my whole system
[15:05] <klaxa> i highly doubt it, what system are you on? (distro, architecture?)
[15:05] <klaxa> for linux x64 http://dl.dropbox.com/u/24633983/ffmpeg/index.html
[15:07] <norbert_> klaxa: isn't there a lot on my system that relies on the ffmpeg version I have?
[15:07] <norbert_> like, would kdenlive from my distro still work
[15:08] <klaxa> you can use a static build totally independently from anything on your system
[15:08] <klaxa> you put it in some directory, literally /any/ directory
[15:08] <norbert_> ok
[15:08] <klaxa> and call the binary
[15:09] <SirDarius> you can even call it zzmpeg ;)
[15:09] <norbert_> do you have a 32 bit version of that static thing?
[15:09] <norbert_> because I tried that command, but c:v is an unrecognized option for my ffmpeg
[15:09] <sacarasc> That just means your version is old, norbert_.
[15:10] <norbert_> yes, maybe a static ffmpeg that's new could help
[15:11] <norbert_> wait, I think I found it http://ffmpeg.gusari.org/static/32bit/
[15:11] <klaxa> you could always compile from source
[15:11] <knoch> hello, can ffmpeg copy all streams except those of unknown type ?
[15:13] <knoch> from a live stream
[15:15] <klaxa> what streams would you need?
[15:15] <klaxa> more like what streams except for video and audio are available?
[15:15] <knoch> all video, all audio and all subtitles
[15:15] <klaxa> have a look at -map
[15:17] <knoch> ok
[15:18] <knoch> I tried ffmpeg -i udp://239.100.0.0:1234 -t 10 -map 0:v -map 0:a -map 0:s -c copy test.ts
[15:18] <knoch> and it worked
[15:18] <knoch> thank you!
[15:21] <norbert_> wow, you guys, thanks a lot
[15:22] <norbert_> works exactly as I wanted it for 2 days now
[15:22] <norbert_> I can now rest in peace (after living the rest of my life)
[15:25] <knoch> klaxa: from the live stream I get Subtitle: dvb_teletext ([6][0][0][0] / 0x0006) and after running the command above, ffmpeg -i test.ts gives me this: Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
[15:25] <norbert_> what klaxa wrote did the trick (with the latest static ffmpeg) and I also added -crf 0 to get perfect video quality
[16:02] <norbert_> thanks again all
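Mavrik's rename-into-a-sequence suggestion for the reversed png list can be sketched as follows; the throwaway sample files stand in for norbert_'s frames, and the final ffmpeg call is left commented since it needs real images:

```shell
# Sketch of Mavrik's suggestion: rename the pngs (in reverse name
# order, mirroring `ls -r`) into a numbered sequence that the image2
# demuxer understands. Sample files stand in for real frames; the
# `ls` in the loop breaks on spaces in names, fine for a sketch.
cd "$(mktemp -d)"
printf 1 > a.png; printf 2 > b.png; printf 3 > c.png
i=1
for f in $(ls -r *.png); do
    mv "$f" "$(printf 'img-%04d.png' "$i")"
    i=$((i + 1))
done
ls img-*.png
# then e.g.: ffmpeg -r 1 -i img-%04d.png -vcodec libx264 -crf 0 out.mp4
```

This sidesteps the pipe entirely, which is what made the ordering awkward in the first place.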
[16:47] <naquad> hi
[17:09] <naquad> what's the video/audio format that can be played by both html5 <video> and flash?
[17:20] <sacarasc> H264 and AAC in MP4? But it might not be available on all browsers...
[17:24] <Plorkyeran> there's nothing with universal support atm
[17:29] <D4rkSilver> naquad: imho just drop flash, who needs flash these days ;)
[17:29] <SirDarius> alas a few people do, like those stuck with RTMP :P
[17:29] <naquad> D4rkSilver, atm that's the only thing working everywhere except mobile devices (android / ios)
[17:30] <naquad> with html5 player badly configurable, lagging all the time (video is ok, checked with vlc) and absolutely ugly controls flash still seems better :\
[17:30] <Plorkyeran> dropping flash doesn't gain you anything compatibility-wise
[17:31] <Plorkyeran> and yeah, none of the html5 players I've used have actually been very good
[17:31] <JEEB> html5 live streaming stuff isn't exactly that far either :/
[17:31] <JEEB> unfortunately
[17:32] <JEEB> I wonder when we'll finally get protocols that let the encoder specify maxrate/bufsize tho :V
[17:32] <JEEB> and transfer the information to the client(s)
[17:33] <SirDarius> i suppose the mpeg-dash thingie does not address this point ?
[17:33] <JEEB> haven't looked at that yet
[17:33] <JEEB> but not much uses that :s
[17:34] <SirDarius> i attended a webinar with people working with the bbc and other big names
[17:34] <SirDarius> they seemed to be rather excited
[17:35] <SirDarius> by the DRM :)
[17:35] <JEEB> haha
[17:35] <SirDarius> this seems to be their primary concern
[17:42] <D4rkSilver> heh
[17:50] <Diogo> hi anyone know if youtube use 2twopass or crf?
[17:50] <Diogo> thanks?
[17:50] <Diogo> where i can find more information about youtube encoding details...
[17:51] <zap0> ask poo tube
[17:51] <klaxa> not sure, but i think encoding details are embedded as metadata in videostreams, maybe look there?
[17:53] <JEEB> nah, they cleaned up the SEI in around 2008
[17:53] <JEEB> but to be honest
[17:53] <JEEB> you DON'T want to copy youtube
[17:53] <JEEB> I repeat, you DON'T want to copy youtube
[17:53] <klaxa> i support that statement
[17:54] <JEEB> you should decide what kind of service you want to do, and set the preset as well as possible vbv limits depending what you are wishing to do
[17:55] <starPause> anyone have an idea on this stack overflow question about creating multiple output videos by splitting on scenes? http://stackoverflow.com/questions/13967437/ffmpeg-creating-multiple-output-videos-splitting-on-gtscene-x
[17:58] <Diogo> why.. i'm using an old version of ffmpeg
[17:58] <Diogo> (5 years old)
[17:59] <Diogo> and the encoding is working but i need to upgrade to the latest version of ffmpeg and i need to investigate what exists on the web..
[17:59] <Diogo> JEEB: i don't want to copy, i only want some information..
[17:59] <JEEB> let's just say that using their settings won't give you a fuck of good, to be honest. They've set their settings to something for whatever reason, and create X various clips for various needs. Also I think by now they might be handling a single clip with multiple instances too, but the hell I know because nothing is released
[17:59] <saste> starPause, two step process, use the select+scenedetect to select cut points, then use segment muxer with -segment_times
[18:00] <JEEB> Diogo, let's just say that finding out anything about their settings will not be useful to you
[18:00] <JEEB> you should decide by yourself whether you want strict 2pass with vbv, crf with vbv or 1st pass as crf and vbv, and second pass with a bit rate and vbv if the 1st pass file was too big for you
[18:01] <JEEB> they're all valid alternatives
[18:01] <Diogo> ok :)
[18:01] <Diogo> thanks JEEB..
[18:04] <starPause> saste: thanks, reading a bit more now!
[18:06] <starPause> saste: is it possible in one line or will i need to run ffmpeg twice and pipe the output from select+scenedetect job to the segment_times job?
[18:18] <saste> starPause, no right now you need to process the file two times, this can be improved but it is not currently possible to do it in a single pass
[18:20] <starPause> saste: ok thanks, just got roped into some work work so i'll give this a try in a bit, appreciate the direction!
[18:37] <asturel> hi, i know this isn't rly related to ffmpeg, but somebody will know: does the mp4 container support subtitles?
[18:38] <JEEB> 3GPP Timed Text should work in "MP4"
[18:38] <Mavrik> asturel: yes
[18:39] <asturel> and simply just -i .srt ?
[18:40] <JEEB> no idea if ffmpeg can convert to or from 3gpp timed text (I think it's called mov text or whatever in libavformat/-codec)
[18:42] <Mavrik> I usually used Handbrake on Windows for that
[18:43] <asturel> d:\movies\New.Girl.S01.720p.WEB-DL.DD5.1.H.264-NFHD>c:\ffmpeg\bin\ffmpeg.exe -i "New Girl S01E01 720p WEB-DL DD5.1 H.264-NFHD.mkv" -i "New Girl S01E01.720p WEB-DL DD5.1.H.264-NFHD.srt" -vcodec copy -acodec mp3 -ac 2 -ab 160000 asd.mp4
[18:43] <asturel> didn't work
[18:44] <asturel> ah i had to copy the srt to the dir too
[18:44] <asturel> but cant i make it to 'builtin' ?
[18:44] <asturel> bah, it was just my mplayer :D
[18:45] <asturel> forgot that i changed subtitle fuzziness
[18:45] <Mavrik> ^^
[18:47] <JEEB> if ffmpeg can convert srt internally to mov text / 3gpp timed text you could do it like that :P Otherwise you'll just have to do it otherwise
[18:47] <JEEB> and yes, if it didn't work then it wasn't able to convert
[18:47] <JEEB> I hate GPAC/mp4box but I think that could at least convert srt to 3gpp timed text
[18:47] <JEEB> methinks
[18:48] <asturel> what if mov text / 3ggp timed text?
[18:48] <asturel> is*
[18:48] <JEEB> the format of subtitles that is "supported" in "MP4"
[18:48] <asturel> scodec copy -map 0:0 -map 0:1 -map 1:0 -y
[18:48] <asturel> found this
[18:48] <JEEB> you can't really do scodec copy or c:s copy with mp4/mov
[18:49] <JEEB> because just muxing srt into "mp4" won't help you
[18:49] <asturel> yeah noticed :D
[18:49] <asturel> what about subtitle filter?
[18:50] <asturel> http://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20burn%20subtitles%20into%20the%20video
[18:50] <JEEB> that would make you re-encode
[18:50] <JEEB> the video
[18:51] <asturel> doesn't seem like it
[18:51] <JEEB> because that would "burn" the subtitles onto the video surface
[18:51] <asturel> 900fps
[18:51] <JEEB> then you're not re-encoding
[18:51] <asturel> yeah
[18:51] <asturel> didn't work :D
[18:51] <JEEB> yeah, no effing wonder
[18:51] <JEEB> just encode with ffmpeg and then add 3gpp timed text with mp4box/GPAC
[18:51] <JEEB> it should be able to read srt
[18:52] <asturel> but i already use a batch script to encode :D
[18:52] <asturel> for %i in (*.mkv) do ( c:\ffmpeg\bin\ffmpeg -i "%i" -i "%~ni.srt" -threads 3 -vcodec copy -acodec mp3 -ac 2 -ab 160000 "%~ni".mp4 )
[18:52] <JEEB> then you just add the final step to it to add the 3gpp timed text to it :P
[18:53] <JEEB> well, now that you know that you can't convert srt to 3gpp timed text with ffmpeg
[18:55] <asturel> bah, i think i just copy the srt and load manually..:D
[19:00] <asturel> thanks anyway:d
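For the record, newer ffmpeg builds ship a mov_text (3GPP Timed Text) encoder, so the srt could be converted during the same mkv-to-mp4 run JEEB and asturel were discussing. This is untested against asturel's build; filenames are placeholders, and only the command line is printed here:

```shell
# Hedged sketch: convert the srt to mov_text while remuxing, assuming
# the ffmpeg build includes the mov_text subtitle encoder.
cmd='ffmpeg -i in.mkv -i in.srt -map 0:v -map 0:a -map 1:0 -c:v copy -c:a mp3 -ac 2 -b:a 160k -c:s mov_text out.mp4'
echo "$cmd"
```

If the build lacks the encoder, the mp4box/GPAC route JEEB suggested remains the fallback.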
[19:03] <Mavrik> hmm, is there a tool that would inspect a stream and output if frames are encoded interlaced?
[19:25] <CoveGeek> I am looking for advice on encoding file into mp4 to support http adaptive streaming; so far I have been unable to get the file to properly fragment internally.
[19:40] <edgy> Hi, I am looking at this adobe page http://www.adobe.com/devnet/adobe-media-server/articles/dynstream_live/popup.html
[19:41] <edgy> and it mentions a relation between 352×288 and 288×216 320×240
[19:41] <edgy> I couldn't understand this, if 352×288 is the video size what's 320×240
[20:04] <DL> hey guys
[20:05] <DL> any developers/experts? i have an ffmpeg windows question
[20:05] <sacarasc> If you ask the question, it is more likely to get answered.
[20:06] <DL> :) ok here it goes
[20:07] <DL> i'm using ffmpeg to grab the internet video stream and creating the m3u8 file and the ts segments
[20:07] <DL> running from command prompt just fine. but when i copy the command in to the batch file
[20:07] <DL> it gives me an error
[20:08] <DL> let me copy and paste the command that i'm using
[20:08] <DL> and the error message
[20:09] <DL> ffmpeg -loglevel quiet -i url_streamsource -codec copy -f segment -segment_list playlist.m3u8 -map 0 -segment_format mpegts -segment_list_size 20 -segment_wrap 30 -segment_list_flags +live out%03d.ts
[20:09] <DL> this works fine on the command prompt
[20:11] <DL> batch file name : copystream.bat
[20:11] <DL> error message is: Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
[20:13] <DL> and it creates the output file with the batch file's name in it.
[20:13] <DL> first and only ts file is called outcopystream.ts
[20:14] <DL> any thoughts
[20:14] <sacarasc> Maybe you have to escape the %03d for the script?
[20:15] <DL> not sure what you mean by that
[20:15] <DL> as far as i know that part is for the sequential segment numbers and i don't think i can change that
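sacarasc's escaping hint is very likely the fix: inside a .bat file cmd.exe treats % specially (%0 expands to the script name, which is exactly why out%03d.ts turned into outcopystream.ts), so %03d must be written %%03d there. A sketch that writes out the corrected script so the escaping is visible (the stream URL stays a placeholder):

```shell
# Write the corrected copystream.bat: the only change from DL's working
# command-prompt line is %03d -> %%03d, the batch-file percent escape.
cd "$(mktemp -d)"
cat > copystream.bat <<'EOF'
ffmpeg -loglevel quiet -i url_streamsource -codec copy -f segment ^
  -segment_list playlist.m3u8 -map 0 -segment_format mpegts ^
  -segment_list_size 20 -segment_wrap 30 -segment_list_flags +live out%%03d.ts
EOF
grep -c '%%03d' copystream.bat
```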
[20:23] <CoveGeek> Has anyone here worked with encoding a video to a "segmented" mp4 file for http streaming?
[20:41] <CoveGeek> sacarasc <- What are the hours of the experts in this room?
[20:41] <sacarasc> When they're at their computers.
[20:41] <CoveGeek> Do you have experience fragmenting a mp4 with ffmpeg?
[20:41] <sacarasc> Nope.
[20:51] <DL> covegeek
[20:52] <DL> i've been experimenting with that for the last 3-4 days
[21:53] <roboman2444> http://www.ustream.tv/channel/gamezgalaxy
[22:02] <loblik_> hi guys. i made a time-lapse video from webcam pictures. it's not bad but a little bit choppy. i would like two neighbouring pictures to be interpolated and the result inserted in between. is this possible?
[22:06] <roboman2444> loblik_, yes it is possible
[22:06] <roboman2444> but i dont know the commandline for it
[22:08] <mpfundstein_home> you will have to use avisynth i think. but that's just a guess, maybe ffmpeg can do that as well
[22:52] <diverdude> Hello, i have a rawdata pointer which is updated in realtime from an external lib. the data pointed to is an image. I need to build a streaming server which can take this stream and serve it as an mpeg stream for a webapplication. Would ffmpeg be a good candidate for that?
[00:00] --- Sat Dec 29 2012