[MPlayer-dev-eng] gl & distorted fisheye for dome projection
Reimar Döffinger
Reimar.Doeffinger at stud.uni-karlsruhe.de
Sat Jul 1 13:03:43 CEST 2006
Hello,
On Fri, Jun 30, 2006 at 10:54:18PM +0200, Johannes Gajdosik wrote:
> I would assume that the texture-color of a given
> fragment would be obtained by
>
> TEX input_color, fragment.texcoord, texture, 2D;
>
> input_color would be a vector containing {r,g,b,a}
Yes.
> or in the mplayer case {y,u,v,?}.
No. Due to the nature of the data the video decoders deliver, y, u and v
are in separate memory areas, and reordering them would take about the same
time as converting to RGB, so y, u and v each end up in a different
texture.
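So instead of one TEX you do three, one per plane. A rough sketch, assuming
the unit assignment 0 = Y, 1 = U, 2 = V that is also used further below:
TEMP yuv;
TEX yuv.r, fragment.texcoord[0], texture[0], 2D; # luma plane
TEX yuv.g, fragment.texcoord[1], texture[1], 2D; # U plane
TEX yuv.b, fragment.texcoord[2], texture[2], 2D; # V plane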
> What is the reason for that?
> Is fragment.texcoord[0] different from
> fragment.texcoord[1] and fragment.texcoord[2] ?
Yes, if you use -vo gl:rectangle=1; otherwise they shouldn't differ. To
start with, I'd assume they are the same and fix problems as they come up.
It is much easier to work onwards from a working solution.
> # lookup trans_coor in index_texture at position fragment.texcoord
> TEX trans_coor, fragment.texcoord, index_texture, 2D;
Yes, but use trans_coor.rgb; there is no need to write the a component when
it is not used anyway (this is called a write mask).
More precisely, index_texture would be texture[3].
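So the lookup itself would be something like (assuming the index texture is
loaded via customtex, which ends up on texture unit 3):
TEX trans_coor.rgb, fragment.texcoord[0], texture[3], 2D;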
> # scale into range [0..1]:
> MUL trans_coor, trans_coor, {0.0039215686,0.0039215686,0.0039215686,0}
Values returned by TEX are always in [0..1] range, so no need for that
(IOW, this step is done automatically in hardware).
> # brightness is the blue-coordinate:
> MOV brightness, trans_coor.zzzz
No point in doing that; just use trans_coor.zzzz wherever you would use
"brightness". Though for readability I personally tend to use the x,y,z,w
specifiers only when really using coordinates, and r, g, b, a otherwise.
> # get the color from the transformed location
> TEX input_color, trans_coor, texture, 2D;
Similar, yes. Though I usually use trans_coor.xyxy to make clear that
the zw components are unused.
Something like this should be correct:
TEX yuv.r, trans_coor.xyxy, texture[0], 2D;
TEX yuv.g, trans_coor.xyxy, texture[1], 2D;
TEX yuv.b, trans_coor.xyxy, texture[2], 2D;
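(Both yuv and trans_coor have to be declared as temporaries at the top of
the program, i.e. TEMP yuv, trans_coor; otherwise the assembler will refuse
the program.)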
> # convert yuv->rgb
> ..copy-paste from edgedetect.fp..
Not exactly; you must remove the gamma-correction step, since the texture
unit it uses is already occupied by our translation texture.
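If it helps, the conversion itself boils down to something like the
following (a sketch using the standard BT.601 constants; the exact values
and instruction order in edgedetect.fp may differ, and res is another TEMP):
PARAM yuv_offset = { -0.0625, -0.5, -0.5, 0.0 };
PARAM r_coeff = { 1.164,  0.0,    1.596, 0.0 };
PARAM g_coeff = { 1.164, -0.391, -0.813, 0.0 };
PARAM b_coeff = { 1.164,  2.018,  0.0,   0.0 };
ADD yuv, yuv, yuv_offset;  # remove the offsets from Y, U and V
DP3 res.r, yuv, r_coeff;   # the 3x3 matrix as three dot products
DP3 res.g, yuv, g_coeff;
DP3 res.b, yuv, b_coeff;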
> # move brightness-adjusted res into result
> MUL result.color, brightness, res
You can do that, though it needs 3 multiplications. Doing it before the
conversion,
MUL yuv.r, yuv, trans_coor.bbbb;
should do the same at the cost of only one multiplication.
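Putting all of the above together, the whole program might look roughly
like this (untested sketch; the texture unit assignment and the BT.601
constants are assumptions, adjust as needed):
!!ARBfp1.0
# look up the distorted coordinates and the brightness in the index texture,
# sample the Y/U/V planes there, convert to RGB and apply the brightness
TEMP trans_coor, yuv, res;
PARAM yuv_offset = { -0.0625, -0.5, -0.5, 0.0 };
PARAM r_coeff = { 1.164,  0.0,    1.596, 0.0 };
PARAM g_coeff = { 1.164, -0.391, -0.813, 0.0 };
PARAM b_coeff = { 1.164,  2.018,  0.0,   0.0 };
# red/green of the index texture are the new coordinates, blue the brightness
TEX trans_coor.rgb, fragment.texcoord[0], texture[3], 2D;
TEX yuv.r, trans_coor.xyxy, texture[0], 2D;
TEX yuv.g, trans_coor.xyxy, texture[1], 2D;
TEX yuv.b, trans_coor.xyxy, texture[2], 2D;
# apply brightness to the luma only (one multiplication instead of three)
MUL yuv.r, yuv, trans_coor.bbbb;
# yuv -> rgb as sketched above
ADD yuv, yuv, yuv_offset;
DP3 res.r, yuv, r_coeff;
DP3 res.g, yuv, g_coeff;
DP3 res.b, yuv, b_coeff;
MOV res.a, 1.0;
MOV result.color, res;
END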
> I can easily supply such a ppm file. But how must mplayer be invoked,
> supposing that the fragment program is called "distort.fp" and
> the index texture "distortion_index.ppm" ?
-vo gl:yuv=2:customprog=distort.fp:customtex=distortion_index.ppm
> By the way, I have problems with edgedetect.fp: it shows no effect on my computer.
> I write
> ./mplayer -vo gl:yuv=4:customprog=edgedetect.fp some_file.avi
> But I see the same as when I use just ./mplayer some_file.avi.
> There is no suspicious error output, just:
> ==========================================================================
> [gl] using extended formats. Use -vo gl:nomanyfmts if playback fails.
> ==========================================================================
That message is normal. Please give the full output (maybe also use -v).
All this stuff will only work with videos that use the yv12 colorspace
(the vast majority, though not all); you can force this via -vf
scale,format=yv12 (I think, untested).
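So for a test the full invocation would be something along these lines
(untested):
./mplayer -vf scale,format=yv12 -vo gl:yuv=2:customprog=distort.fp:customtex=distortion_index.ppm some_file.avi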
Also try to update; I fixed a bug in SVN, esp. with yuv=4.
Greetings,
Reimar Döffinger