[FFmpeg-user] h264_videotoolbox for YUV decoding?
sebastian.roth at gmail.com
Sun Mar 13 16:59:27 CET 2016
For an IP camera viewer (mobile app), I'm trying to hook ffmpeg up to the
RTSP stream like so:
RTSP (h264) -> ffmpeg decoded YUV frames -> OpenGL ES (YUV shader)
That works, but the video has some lag (not sure whether that is actually
related to the decoding). CPU usage on an iPad 2 is at ~60%.
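For reference, here is the software path I'm describing, sketched with the current send/receive decode API (error handling trimmed, URL is a placeholder; this is how I understand the flow, not necessarily how it should be done):

```c
// Sketch of: RTSP (h264) -> ffmpeg decoded YUV frames -> (OpenGL upload)
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int decode_rtsp(const char *url)
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(fmt, NULL);

    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *dec =
        avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
    avcodec_open2(ctx, dec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame  *frm = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx &&
            avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frm) >= 0) {
                // frm->data[0..2] hold the Y, U, V planes
                // (pix_fmt typically AV_PIX_FMT_YUV420P);
                // this is where the YUV shader upload happens.
            }
        }
        av_packet_unref(pkt);
    }
    av_frame_free(&frm);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```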
- Does it make sense to turn on the h264_videotoolbox hwaccel module, or is
the overhead already too high with the above flow?
- Will that change the above flow (e.g. will it output RGB frames instead of
YUV)? I noticed a different pix_fmt.
- I'm still looking for a hint on how to assign the hwaccel context to
the AVCodecContext. There is a property, and I could just set it - but I'm
not sure whether there are other steps to be taken first?
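For what it's worth, my current understanding of that setup (unverified - names taken from libavcodec/videotoolbox.h, please correct me if this is wrong) is that the hwaccel context is installed from the get_format callback rather than set directly:

```c
// Hedged sketch: enabling the VideoToolbox hwaccel on an AVCodecContext.
#include <libavcodec/avcodec.h>
#include <libavcodec/videotoolbox.h>

static enum AVPixelFormat negotiate_format(AVCodecContext *avctx,
                                           const enum AVPixelFormat *fmts)
{
    // Prefer the hardware format if the decoder offers it.
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_VIDEOTOOLBOX) {
            // Fills in avctx->hwaccel_context for VideoToolbox.
            if (av_videotoolbox_default_init(avctx) >= 0)
                return *p;
        }
    }
    return fmts[0]; // fall back to a software pixel format
}

void enable_videotoolbox(AVCodecContext *avctx)
{
    avctx->get_format = negotiate_format;
}
```

If that's right, decoded frames would then carry a CVPixelBufferRef in frame->data[3] instead of planar YUV, so the OpenGL upload path would need to change accordingly.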