[Ffmpeg-devel-irc] ffmpeg-devel.log.20161108

burek burek021 at gmail.com
Wed Nov 9 03:05:04 EET 2016


[00:02:43 CET] <llogan> do we have a coverity access policy? I tell random users "no" (in a nice way) but i'm not sure about what others do. I assumed we let only developers have access but the list shows many users i've never heard of.
[00:07:32 CET] <michaelni> some people with access are gsoc/outreachy students, some people i don't know and i don't know why they have access either
[00:10:02 CET] <DHE> Zeranoe: holy crap, 2 TB a day average? (roughly)
[00:11:23 CET] <Zeranoe> DHE: something was hitting a specific build and racking up traffic.
[00:13:03 CET] <rcombs> bintray has a free host for FOSS packages, but it's limited to 1TB/mo
[00:13:40 CET] <michaelni> Zeranoe, ask Raz- or try something like SF for hosting large binaries
[00:13:50 CET] <Zeranoe> If you remove the traffic for that file it was roughly 25TB
[00:14:09 CET] <Zeranoe> I'm going to avoid SF
[00:15:06 CET] <rcombs> SF's under new management and they're less evil now
[00:15:22 CET] <rcombs> Zeranoe: wait, 12TB with that file and 25TB without?
[00:16:12 CET] <Zeranoe> rcombs: Sorry, that's 12TB for Nov (including that file), and 25TB for Oct excluding it
[00:16:26 CET] <rcombs> ah
[00:17:25 CET] <rcombs> might want to poke bintray and see if they'd be willing to do a higher limit; I think homebrew has an arrangement like that
[00:19:12 CET] <Zeranoe> I poked my current host telling them I removed that file, but if they don't play ball I'll need to move. I'm not terribly concerned about losing old builds, unless someone else is.
[00:19:34 CET] <Zeranoe> The forum is backed up
[00:21:57 CET] <rcombs> CloudFlare isn't technically supposed to be a binary download host, but they're generally pretty cool about people using them that way
[00:23:03 CET] <llogan> michaelni: maybe ill purge the list of unknowns.
[00:24:30 CET] <michaelni> llogan, send me the list you want to purge privatly so i can double check 
[00:24:52 CET] <Zeranoe> Fosshub has reached out to me in the past. They could probably hook me up too
[00:25:04 CET] <llogan> michaelni: sure. may not get to it until later in week.
[00:25:13 CET] <michaelni> no hurry
[00:27:35 CET] <llogan> Zeranoe: i was always curious what your numbers would be but didn't think it would be that high
[00:28:20 CET] <Zeranoe> llogan: I doubt that's all organic. I bet people have programs downloading from it automatically
[00:30:46 CET] <llogan> i wonder how it compares with builds from relaxed
[00:31:16 CET] <Zeranoe> relaxed builds?
[00:31:32 CET] <llogan> yes, what the traffic is
[00:31:50 CET] <Zeranoe> I'm not familiar with his builds?
[00:32:15 CET] <llogan> http://johnvansickle.com/ffmpeg/
[00:32:49 CET] <Zeranoe> Ah
[01:06:53 CET] <Compn> Zeranoe : i was curious how much people were downloading your builds and how you were hosting it monetarily wise
[01:06:54 CET] <Compn> ehe
[01:07:11 CET] <Compn> 10tb+ month, thats crazy...
[01:07:32 CET] <Compn> i remember when we were a smaller project :P
[01:13:48 CET] <Zeranoe> Like I said though, a lot of that is clearly automated downloads
[01:14:09 CET] <Compn> 3rd party apps directly downloading your builds ?
[01:14:11 CET] <Compn> you think?
[01:14:15 CET] <Compn> probably yes
[01:14:49 CET] <Compn> you could probably dissuade downloads like that by enforcing referrers
[01:14:54 CET] <Compn> at least temporarily
[01:14:54 CET] <Zeranoe> Well I doubt 130k visitors a month are accounting for all that :)
[01:15:42 CET] <nevcairiel> i wouldnt be surprised if some third party sites deep link directly, or even worse, put links into their apps for client-local downloads or stuff like that
[01:15:59 CET] <Zeranoe> Maybe, but there are always ways around that stuff unless you enforce IP session blocking or other complex countermeasures. Even a simple curl command can fool a server into thinking a browser is making the request.
[01:16:19 CET] <Compn> right
[01:16:53 CET] <Compn> are there any web based free mirrors still left? like nyud.net ? :P
[01:16:56 CET] <nevcairiel> well sure, but maybe they are not aware its a bad thing :p
[01:17:28 CET] <Compn> .nyud.net:8090
[01:18:02 CET] <Zeranoe> Using something like reCAPTCHA could be a solution, but I don't really want to block people downloading them directly from a console app.
[01:20:01 CET] <Zeranoe> All of this depends on my hosting providers response. I can do nothing but wait right now.
[01:20:05 CET] <Compn> i hate recaptcha
[01:21:48 CET] <Zeranoe> Everyone does
[01:24:28 CET] <Zeranoe> OVH has some interesting plans....
[01:26:25 CET] <Zeranoe> Unlimited bandwidth but only 40GB of storage.. I'd have to remove older builds
[01:32:26 CET] <haasn> Huh. I've added a breakpoint on matroska_read_header and webm_dash_manifest_read_header but neither of them get hit when playing a webm file in mpv
[01:33:50 CET] <haasn> oh nvm, I think mpv uses its own demuxer here
[01:44:22 CET] <rcombs> haasn: if you're trying to test the demuxer, you can force it with an option (I forget the name, though)
[01:52:22 CET] <haasn> rcombs: more importantly, where the heck is the matroska EBML tree documented?
[01:52:34 CET] <haasn> I can see lots of hard-coded magic numbers in the libavformat matroska demuxer
[01:52:42 CET] <haasn> but I can't find the specification that defines any of these
[01:52:54 CET] <haasn> for example that the video color track has ID 0x55B0
[01:52:57 CET] <haasn> seems to just come out of nowhere
[01:53:55 CET] <rcombs> that's from https://matroska.org/technical/specs/index.html
[01:54:06 CET] <haasn> `55B0` no hits :(
[01:54:15 CET] <haasn> oh
[01:54:15 CET] <rcombs> [55][B0]
[01:54:18 CET] <haasn> their notation is extremely pedantic
[01:54:33 CET] <haasn> thanks
[01:54:34 CET] <TD-Linux> or more recently the matroskaorg github / ietf draft
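For context: an ID like 0x55B0 is just the spec's [55][B0] written as one hex constant. EBML element IDs are variable-length - the number of leading zero bits in the first byte gives the length, and unlike EBML sizes the length-marker bits are kept in the value - which is why demuxer tables carry these raw constants. A minimal sketch of reading such an ID, untested and not the libavformat implementation:

    #include <stdint.h>
    #include <stddef.h>

    /* Read a Matroska/EBML element ID. The count of leading zero bits in
     * the first byte gives the ID length (1-4 bytes); the marker bits stay
     * in the value, so {0x55, 0xB0} reads back as the constant 0x55B0 used
     * for the Colour element. */
    static int ebml_read_id(const uint8_t *buf, size_t len, uint32_t *id)
    {
        int n = 1;
        uint8_t mask = 0x80;

        if (!len || !buf[0])
            return -1;                 /* a first byte of 0 is invalid */
        while (!(buf[0] & mask)) {     /* count leading zero bits */
            mask >>= 1;
            n++;
        }
        if (n > 4 || (size_t)n > len)
            return -1;

        *id = 0;
        for (int i = 0; i < n; i++)
            *id = (*id << 8) | buf[i];
        return n;                      /* bytes consumed */
    }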
[01:56:14 CET] <cone-023> ffmpeg 03Rostislav Pehlivanov 07master:0cf685380467: aacenc: quit when the audio queue reaches 0 rather than keeping track of empty frames
[01:56:56 CET] <cone-023> ffmpeg 03Steven Liu 07master:acd87dfc05b5: cmdutils: add show_demuxers and show_muxers
[02:59:58 CET] <llogan> Zeranoe: https://lists.ffmpeg.org/pipermail/ffmpeg-user/2016-November/034253.html
[03:00:21 CET] <llogan> (you can click on the sender's email address link to reply if you want to)
[03:18:10 CET] <CFS-MP3> Is there a non-hackish way for a filter to find out the name of the output file?
[03:18:25 CET] <CFS-MP3> Suppose I want to add the output filename to showinfo, specifically
[03:19:12 CET] <Zeranoe> llogan: I think I might just change the 403 page to read: "zeranoe.com is down due to HostGator being terrible. Please contact me with new hosting offers."
[03:19:45 CET] <Zeranoe> Until they resolve it there's nothing I can do...
[03:36:59 CET] <llogan> Anything with the name "gator" in it is usually terrible
[03:38:06 CET] <CFS-MP3> Zeranoe why not just use a leaseweb or OVH server? Those really have unlimited traffic and are cheap
[03:38:19 CET] <Zeranoe> llogan: Haven't had an issue since the beginning until now.
[03:38:53 CET] <Zeranoe> CFS-MP3: I talked about that here before. I think I might move to an OVH VPS when this settles down, or doesn't resolve.
[03:41:28 CET] <llogan> maybe you can talk hostgator into sponsoring the site.
[03:44:44 CET] <Zeranoe> I'm already taking steps to get away from them. This issue is enough for me to run the other direction.
[03:53:11 CET] <CFS-MP3> if it's just static file storage you can also use amazon S3, which solves lots of problems too
[03:54:25 CET] <Zeranoe> CFS-MP3: Isn't that pretty pricy? I need bandwidth in the magnitude of ~30TB
[03:59:08 CET] <llogan> its actually not that bad.
[04:01:59 CET] <Zeranoe> Wow, apparently you get faster support via Twitter... https://twitter.com/Zeranoe/status/795820067359432704
[04:07:39 CET] <c_14> 30TB of traffic on amazon will cost you around US$2,000-3,000 (according to the price lists and the amazon easy usage calculator)
[04:10:10 CET] <rcombs> yeah, companies tend to react quickly to public shaming
[04:10:35 CET] <Zeranoe> c_14: Oh is that all? Pass
[04:12:39 CET] <Zeranoe> rcombs: The reply was quick, but we shall see how fast the resolution is.
[06:10:40 CET] <Zeranoe> Looks like we're back, for now.
[11:42:49 CET] <kierank> Gramner: ping
[11:44:27 CET] <Compn> Zeranoe : its annoying when one form of support works quickly (twitter) while email takes 48 hours lol
[11:44:51 CET] <BtbN> public Twitter is also the best way to get support from my ISP
[11:45:13 CET] <BtbN> Tweet at them that stuff is slow, and as they already know who I am, stuff gets fixed
[11:46:12 CET] <nevcairiel> presumably different support people, the interactive nature of twitter messaging might make it more real-time
[11:46:51 CET] <BtbN> than a phone call?
[11:46:52 CET] <funman> Gramner: http://pastebin.com/H2bKxj8G how can we make that fast ?
[11:47:18 CET] <BtbN> If you call them and tell them stuff is broken they get defensive and blame shit on you
[11:47:34 CET] <nevcairiel> like every support person, they think you are an idiot
[11:47:42 CET] <funman> presumably since you can tweet, they assume you're not just angry because you have no internet at all
[11:47:43 CET] <nevcairiel> which is probably true for 99.9% of their cases anyway
[11:47:50 CET] <BtbN> I also have an ongoing issue with IPv6 not working 75% of the time
[11:48:18 CET] <BtbN> They even sent a technician, who connected his iPhone to the Wifi, did a speedtest, and told me if I call again they will bill me for his time.
[11:49:00 CET] <BtbN> And by not working I mean their router does hand out addresses, but no data goes through. So it messed stuff up quite badly
[11:49:38 CET] <nevcairiel> when I enable v6 my phone randomly loses connectivity over wifi, i'm still not sure if the router/ap or the phone is to blame, because there is some shit going on on the phone where the wifi firmware drops broadcast announces to "save power"
[11:50:19 CET] <BtbN> Still upset with that technician. He didn't even bother trying to reproduce the issue. Just blamed it on our equipment.
[11:55:00 CET] <Compn> an iphone isnt the best tool to test problems with :P
[11:55:01 CET] <Compn> ehe
[11:57:07 CET] <BtbN> dude didn't even bring a laptop, and had no idea what I was talking about when I said I suspect a routing misconfiguration on their end.
[12:38:20 CET] <cone-199> ffmpeg 03Reynaldo H. Verdejo Pinochet 07master:311107a65d01: ffserver: check for codec match using AVStream.codecpar
[12:38:20 CET] <cone-199> ffmpeg 03Reynaldo H. Verdejo Pinochet 07master:689f648a9596: ffserver: use .codecpar instead of .codec in print_stream_params()
[12:38:21 CET] <cone-199> ffmpeg 03Reynaldo H. Verdejo Pinochet 07master:1323349befd3: ffserver: get time_base from AVStream in print_stream_params()
[12:38:22 CET] <cone-199> ffmpeg 03Reynaldo H. Verdejo Pinochet 07master:afcbadf0eda3: ffserver: use AVStream.codecpar in find_stream_in_feed()
[12:38:23 CET] <cone-199> ffmpeg 03Reynaldo H. Verdejo Pinochet 07master:822e3e2ddb8a: ffserver: user AVStream.codecpar in compute_status()
[12:38:25 CET] <cone-199> ffmpeg 03Reynaldo H. Verdejo Pinochet 07master:6f0a1710d77d: ffserver: use AVStream.codecpar in open_input_stream()
[12:40:08 CET] <wm4> wait what does this mean
[12:45:35 CET] <BtbN> where do these even come from?
[14:07:59 CET] <cone-199> ffmpeg 03Vittorio Giovara 07master:a765ba647d3d: avformat/mov: Read multiple stsd from DV
[15:18:56 CET] <Gramner> kierank: what about the "divide and conquer" (e.g. libavutil/crc or whatever redis does) method? with some modifications. or does that cause issues with non-power of 2? y+c can be SIMD:ed with avx2 gathers, but I'm not sure if that even gains anything if L1 latency is the bottleneck due to dependencies between iterations
[15:19:10 CET] <kierank> Gramner: doesn't work I believe because it's a 10-bit crc
[15:19:17 CET] <cone-199> ffmpeg 03Rostislav Pehlivanov 07master:0660a09dd1d1: opus: move all tables to a separate file
[15:19:18 CET] <cone-199> ffmpeg 03Rostislav Pehlivanov 07master:317be31eaf4f: opus: move the entropy decoding functions to opus_rc.c
[15:20:38 CET] <kierank> Gramner: hmm not sure actually
[15:20:39 CET] <kierank> might work
[15:21:31 CET] <Gramner> I'd try to get that working first
[15:21:39 CET] <Gramner> probably the most potential gain
[15:22:04 CET] <Gramner> that method should be SIMD:able as well in the same fashion with gathers
[15:22:22 CET] <kierank> i thought avx2 gather was not a simd operation
[15:22:25 CET] <kierank> until future processors
[15:22:35 CET] <Gramner> gather is avx2, scatter is avx-512
[15:22:56 CET] <Gramner> it's not faster than doing individual loads, but you can do the address calculation in SIMD
[15:23:02 CET] <kierank> oh
[15:23:03 CET] <Gramner> which is the entire point of gathers, really
[15:24:11 CET] <Gramner> but as I said, if the load latency is the bottleneck then cutting the number of arithmetic ops in half won't really make any difference
[15:29:17 CET] <kierank> the load of the pixel data or from the LUT?
[15:30:14 CET] <Gramner> the non-divide-and-conquer method could probably be SIMD:ed with something like this (completely untested, even for syntax errors) http://pastebin.com/x5t1DMta and the same concept would apply to a potentially better algorithm as well.
[15:30:52 CET] <Gramner> the lut. the index is dependent on the previous iteration so it can't be OOE:ed
[15:36:25 CET] <Gramner> the "dec" should obviously be "inc" btw
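The "divide and conquer" method mentioned above (what libavutil/crc-style slice-by-N tables do) folds several input bytes per iteration using extra lookup tables instead of one table hit per byte. A minimal, untested sketch for a reflected 32-bit CRC; as the discussion notes, the 10-bit CRC in question would need a different table construction, so this only illustrates the shape of the loop:

    #include <stdint.h>
    #include <stddef.h>

    #define POLY 0xEDB88320u  /* reflected CRC-32 polynomial, for illustration */

    static uint32_t tab[4][256];

    static void crc_init(void)
    {
        for (int i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c >> 1) ^ (c & 1 ? POLY : 0);
            tab[0][i] = c;
        }
        /* tab[k][i] = CRC of byte i followed by k zero bytes */
        for (int i = 0; i < 256; i++)
            for (int k = 1; k < 4; k++)
                tab[k][i] = (tab[k - 1][i] >> 8) ^ tab[0][tab[k - 1][i] & 0xFF];
    }

    static uint32_t crc_slice4(uint32_t crc, const uint8_t *buf, size_t len)
    {
        while (len >= 4) {              /* fold 4 bytes per iteration */
            crc ^= buf[0] | buf[1] << 8 | (uint32_t)buf[2] << 16 |
                   (uint32_t)buf[3] << 24;
            crc  = tab[3][ crc        & 0xFF] ^
                   tab[2][(crc >>  8) & 0xFF] ^
                   tab[1][(crc >> 16) & 0xFF] ^
                   tab[0][ crc >> 24        ];
            buf += 4;
            len -= 4;
        }
        while (len--)                   /* tail: classic byte-at-a-time loop */
            crc = (crc >> 8) ^ tab[0][(crc ^ *buf++) & 0xFF];
        return crc;
    }

Note that the per-iteration dependency on crc is still there, which is Gramner's point above: halving the arithmetic doesn't help if the table-load latency dominates.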
[17:45:22 CET] <cone-199> ffmpeg 03James Almer 07master:70c6a1bcf021: avformat/matroskadec: fix DiscardPadding element parsing
[17:49:14 CET] <atomnuker> woot, more vp9 avx2
[17:49:25 CET] <atomnuker> BBB: what are the performance gains like?
[18:00:36 CET] <wm4> BBB: does vp9 support colorspace and mastering bitstream flags?
[18:00:50 CET] <wm4> asking because youtube is apparently using dumb webm container tags
[18:01:06 CET] <wm4> not even sure whether they're standardized in matroska and I sure wouldn't expect them in webm
[18:01:12 CET] <BBB> wm4: yes, but it merges trc+space+matrix together in one item instead of specifying all 3 separately
[18:01:30 CET] <BBB> so I guess they decided to also put it in the container in case people want them separated?
[18:01:37 CET] <BBB> atomnuker: not much b/c adst is intra-only
[18:01:59 CET] <nevcairiel> vp9 only has the "normal" flags for the 3 variables, not mastering data
[18:02:04 CET] <nevcairiel> hence why it goes in the container
[18:02:07 CET] <wm4> lol
[18:02:09 CET] <wm4> thanks
[18:02:55 CET] <nevcairiel> someone asked me to support that, I should poke that some day
[18:03:42 CET] <wm4> or you could just use libavformat (same goes to me)
[18:04:28 CET] <nevcairiel> i kinda do! my custom demuxer is inside there =p
[18:04:41 CET] <nevcairiel> just need to copy-paste the metadata handling over there
[18:04:42 CET] <nevcairiel> :D
[18:05:08 CET] <nevcairiel> but my real problem is communicating that up the chain from the demuxer to the renderer, usually there is no direct metadata line there for me
[18:05:42 CET] <BBB> atomnuker: checkasm numbers:
[18:05:42 CET] <BBB> vp9_inv_adst_adst_16x16_add_8_avx: 726.2
[18:05:43 CET] <BBB> vp9_inv_adst_adst_16x16_add_8_avx2: 393.8
[18:05:48 CET] <nevcairiel> neat
[18:05:50 CET] <BBB> compared to idct:
[18:05:50 CET] <BBB> vp9_inv_dct_dct_16x16_add_8_avx: 466.8
[18:05:51 CET] <BBB> vp9_inv_dct_dct_16x16_add_8_avx2: 271.2
[18:05:57 CET] <BBB> adst is slower, which is expected
[18:06:07 CET] <BBB> overall impact is low b/c intra-only, like I just said
[18:09:22 CET] <wm4> nevcairiel: you can't transport packet side data?
[18:10:18 CET] <cone-199> ffmpeg 03Tom Butterworth 07master:0a2458758874: avcodec/hap: pass texture-compression destination as argument, not in context
[18:28:31 CET] <jamrial> wm4: they are going to be standardized in webm. see https://www.webmproject.org/docs/container/
[18:53:18 CET] <wm4> jamrial: do you know how they are going to handle conflicts between container and decoder metadata?
[18:56:25 CET] <jamrial> wm4: no, sorry
[19:13:39 CET] <nevcairiel> wm4: isnt that stream global data, i would probably need to hook something up for that
[19:13:47 CET] <nevcairiel> also the side data probably doesnt cover the matrix, or does it?
[19:14:40 CET] <wm4> you can inject global stream side data into packets
[19:14:53 CET] <wm4> maybe there's even API for that
[19:16:41 CET] <nevcairiel> probably
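The API wm4 is presumably thinking of is av_format_inject_global_side_data(), which asks lavf to copy stream-global side data into the first packet of each stream. A minimal, untested sketch of pulling mastering-display metadata out that way; it assumes the demuxer exports it as AV_PKT_DATA_MASTERING_DISPLAY_METADATA stream side data:

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mastering_display_metadata.h>

    static int dump_mastering(const char *file)
    {
        AVFormatContext *fmt = NULL;
        AVPacket pkt;
        int ret;

        av_register_all();
        if ((ret = avformat_open_input(&fmt, file, NULL, NULL)) < 0)
            return ret;
        avformat_find_stream_info(fmt, NULL);

        /* copy stream-global side data into the first packet of each stream */
        av_format_inject_global_side_data(fmt);

        while (av_read_frame(fmt, &pkt) >= 0) {
            int size;
            /* assumed side-data type for the mkv/webm mastering elements */
            uint8_t *sd = av_packet_get_side_data(&pkt,
                              AV_PKT_DATA_MASTERING_DISPLAY_METADATA, &size);
            if (sd && size >= (int)sizeof(AVMasteringDisplayMetadata)) {
                const AVMasteringDisplayMetadata *m = (const void *)sd;
                if (m->has_luminance)
                    printf("max luminance: %d/%d nits\n",
                           m->max_luminance.num, m->max_luminance.den);
            }
            av_packet_unref(&pkt);
        }
        avformat_close_input(&fmt);
        return 0;
    }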
[19:16:48 CET] <nevcairiel> i downloaded one of the youtube streams for testing
[19:17:03 CET] <nevcairiel> probably my first 10-bit stream of vp9 ever =p
[19:18:57 CET] <BBB> I encoded one at some point
[19:35:08 CET] <jamrial> youtube is using 10bit vp9 now?
[19:35:15 CET] <nevcairiel> for HDR content, yes
[19:35:42 CET] <nevcairiel> https://www.youtube.com/playlist?list=PLyqf6gJt7KuGArjMwHmgprtDeY8WDa8YX format 337 with youtube-dl
[19:37:09 CET] <nevcairiel> (although their sdr conversions are a tad bit over-saturated)
[19:38:03 CET] <wm4> mpv already supports it lol
[19:38:30 CET] <nevcairiel> chromecast ultra presumably does as well if you have a hdr display
[19:47:11 CET] <gnafu> nevcairiel: Neat!
[19:47:37 CET] <gnafu> I don't even have hardware capable of playing back UHD right now, much less 10-bit HDR, but it's fun to look at such a file :-).
[19:48:04 CET] <gnafu> (Actually, my phone might be able to do 8-bit UHD VP9 in hardware, but I'm not sure whether it's quite new enough.)
[19:48:21 CET] <gnafu> (I know it does hardware VP9 decoding, but it might only be up to 1080p.)
[19:50:19 CET] Action: gnafu downloads 315, the high-bitrate 8-bit version, to test.
[19:52:36 CET] <gnafu> Wait, looks like mine can do UHD in some format, but not VP9.
[19:52:46 CET] <gnafu> https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_devices#Snapdragon_800_and_801
[19:53:53 CET] <BBB> so I suppose you need a new type of video card etc. to display 10bit as-is?
[19:54:01 CET] <BBB> like, for example, if I have a nice sweet macbook pro
[19:54:10 CET] <BBB> (and Im displaying stuff using opengl, in floats)
[19:54:15 CET] <BBB> does it support 10bit? or not?
[19:54:40 CET] <nevcairiel> for nvidia you need a maxwell or pascal card; i don't know about opengl - opengl 10-bit has in the past required "pro" cards, i.e. quadro or firepro
[19:54:59 CET] <BBB> oh you need a video display mode I guess, right?
[19:54:59 CET] <nevcairiel> for direct3d you use fp16 surfaces with 1.0 at 80 nits
[19:55:22 CET] <nevcairiel> and the driver converts that to the proper hdr hdmi format
[20:02:20 CET] <wm4> yeah what you actually need is signaling the HDR metadata
[20:02:57 CET] <nevcairiel> nvidia defined some in their NvAPI package
[20:03:20 CET] <nevcairiel> https://developer.nvidia.com/displaying-hdr-nuts-and-bolts
[20:04:03 CET] <nevcairiel> (actual metadata setting not shown, but its part of NV_HDR_COLOR_DATA)
[20:21:27 CET] <cone-199> ffmpeg 03Andreas Cadhalpun 07master:ff100c9dd97d: matroskadec: fix NULL pointer dereference in webm_dash_manifest_read_header
[20:31:38 CET] <gnafu> Confirmed: My LG G3 cannot play the 8-bit UHD version of that video, neither in the stock video player nor VLC.
[20:32:02 CET] <gnafu> I believe I have successfully played 1080p VP9 on my phone, but that was presumably software-decoded (which is pretty neat).
[22:28:57 CET] <cone-199> ffmpeg 03Andreas Cadhalpun 07master:1bbb18fe82fc: mpegts: prevent division by zero
[22:42:29 CET] <nevcairiel> the last commit in the avconv merge batch is annoying .. the ffmpeg filtering code is way different, and has an architectural conflict - in short, the commit is trying to delay initializing filters until input has been decoded (which makes sense, since only then do we have the real info) - but our code tries to probe the filters to find out which files need to be read further, so delaying their creation clearly stands in direct conflict with that
[22:43:31 CET] <JEEB> ugh
[22:43:36 CET] <JEEB> yeah, that's problematic
[22:47:48 CET] <nevcairiel> will probably need proper re-implementation instead of just merging. maybe we should just skip that one for now and work on that in parallel instead of blocking on it, it shouldn't block any further merges except a few "fixups" to that code
[22:49:51 CET] <nevcairiel> anyway all the commits in the batch up to that one are here: https://github.com/Nevcairiel/FFmpeg/commits/merge-avconv .. fate passes with one tiny change (timebase precision changes)
[22:50:06 CET] <nevcairiel> (top 3)
[23:02:54 CET] <jkqxz> Does that mean that ffmpeg would still have the lavfi reinit on hardware transcode?
[23:03:19 CET] <nevcairiel> i guess?
[23:03:21 CET] <jkqxz> Making cuvid or qsv work in the intermediate state would probably be a pain.
[23:03:29 CET] <nevcairiel> feel free to fix it quicker =p
[23:03:42 CET] <jkqxz> Maybe just ignore that...
[23:04:12 CET] <BtbN> cuvid hwtranscode is already broken on master
[23:07:46 CET] <nevcairiel> hm maybe i can hack around the limitation
[23:13:20 CET] <nevcairiel> or maybe not
[23:16:38 CET] <nevcairiel> well it works for some tests, many other still broken
[23:16:41 CET] <nevcairiel> but i guess its progress
[23:18:25 CET] <nevcairiel> put the WIP also on the branch, will try more tomorrow
[23:20:56 CET] <wm4> "which files need to be read further" wat?
[23:21:29 CET] <nevcairiel> its for multiple-input things, it tries to figure out from the filter graph which files need to be read next, by checking which inputs are starving
[23:21:53 CET] <nevcairiel> it really makes sense, just conflicts a bit with this idea of lazy graph init
[23:23:22 CET] <nevcairiel> but i worked around that by just assuming that an input which wasn't initialized yet is by definition starving
[23:25:19 CET] <dkc> in AVPixelFormat, there are some definitions for planar YUV with various bpp, but those variants don't exist for packed YUV. In my case I'd like to support UYVY with 20bpp, how should I handle that?
[23:25:36 CET] <wm4> nevcairiel: that sounds like a proper solution
[23:29:31 CET] <nevcairiel> dkc: support in what? usually pixel formats are created as there is a need for them, and apparently no one needed this one
[23:30:31 CET] <dkc> the use case is to receive raw video through RTP (RFC 4175: https://tools.ietf.org/html/rfc4175)
[23:32:23 CET] <dkc> there are different combinations of sampling and depth
[23:33:37 CET] <wm4> treat it as raw codec and repack to sane formats manually
[23:34:28 CET] <nevcairiel> those formats don't really represent typical pixel formats; that the 422 case happens to match UYVY layout is probably more of an accident than design - none of the others match any common formats whatsoever
[23:40:30 CET] <dkc> what do you mean? The point of this RFC is more or less to send sensor outputs over IP, so I think it's by design
[23:41:34 CET] <nevcairiel> the 420 format just looks very bizarre, never seen it in a YYYYCbCr layout before
[23:41:46 CET] <nevcairiel> and the 444 version also doesnt look like a component order i have seen before
[23:42:20 CET] <TD-Linux> you are going to need lots of copies anyway because there are going to be many packets per frame
[23:42:33 CET] <JEEB> is that in blocks of 2x2? > YYYYCbCr
[23:42:52 CET] <JEEB> weird stuff
[23:43:05 CET] <nevcairiel> JEEB: they want all parts of one pixel always together so that missing packets only corrupt that particular part of the image
[23:43:16 CET] <JEEB> right, so yeah :D
[23:43:18 CET] <nevcairiel> so yeah, 4 Y samples and their chroma
[23:43:21 CET] <nevcairiel> in 2x2
[23:44:10 CET] <nevcairiel> but TD-Linux is probably right, while you are re-assembling it into a continuous image, you might as well just convert it into a format we have
[23:44:24 CET] <JEEB> yup
[23:47:53 CET] <dkc> hmm okay, I'll have to discuss that with my team, we might have overlooked quite a big issue
[23:49:06 CET] <dkc> Just for info, this RFC is used in TR-03: http://www.videoservicesforum.org/download/technical_recommendations/VSF_TR-03_DRAFT_2015-10-19.pdf
[23:49:29 CET] <dkc> thanks for your help!
[23:54:53 CET] <kierank> JEEB: it's bitpacked uyvy
[23:56:06 CET] <kierank> looks quite complex but it's quite easy to simd
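For reference, the 10-bit 4:2:2 pgroup kierank describes packs Cb, Y0, Cr, Y1 (10 bits each) into 5 bytes, MSB first - i.e. bit-packed UYVY. A minimal, untested sketch of unpacking pgroups into 16-bit samples in the same U Y V Y order, assuming that layout:

    #include <stdint.h>
    #include <stddef.h>

    /* Unpack RFC 4175 10-bit 4:2:2 pgroups: 5 bytes -> Cb, Y0, Cr, Y1,
     * each 10 bits, packed MSB-first across the byte stream. */
    static void unpack_pgroups_10(const uint8_t *src, uint16_t *dst,
                                  size_t pgroups)
    {
        while (pgroups--) {
            dst[0] =  (src[0]         << 2) | (src[1] >> 6); /* Cb */
            dst[1] = ((src[1] & 0x3F) << 4) | (src[2] >> 4); /* Y0 */
            dst[2] = ((src[2] & 0x0F) << 6) | (src[3] >> 2); /* Cr */
            dst[3] = ((src[3] & 0x03) << 8) |  src[4];       /* Y1 */
            src += 5;
            dst += 4;
        }
    }

Writing the samples into separate planes instead would land close to a yuv422p10-style layout, which the existing scaling and filtering code already understands.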
[00:00:00 CET] --- Wed Nov  9 2016

