# [MPlayer-dev-eng] gl & distorted fisheye for dome projection

Johannes Gajdosik johannes.gajdosik at gmx.at
Thu Jun 29 00:27:58 CEST 2006

```
> What is the formula to calculate this? There are two approaches:
> directly in a vertex program, or store in a texture and use a lookup.
> Of course, maybe you should first state what kind of graphics card you
> would target, my approach would require something fairly new, like
> Geforce FX series or comparable.

I see that you are an expert while I am a beginner in OpenGL.
What I am using is so simple that it works with any graphics card
capable of OpenGL. I give you my central piece of code
(from viewport_distorter.cpp of stellarium):

for (int j=0;j<trans_height;j++) {
    const VertexData *v0 = trans_array + j*(trans_width+1);
    const VertexData *v1 = v0 + (trans_width+1);
    glBegin(GL_QUAD_STRIP); /* one strip per grid row; matches the glEnd() below */
    for (int i=0;i<=trans_width;i++) {
        glColor4fv(v0[i].color);
        glTexCoord2fv(v0[i].xy);
        glVertex3f(i*16, j*16, 0.0);
        glColor4fv(v1[i].color);
        glTexCoord2fv(v1[i].xy);
        glVertex3f(i*16, (j+1)*16, 0.0);
    }
    glEnd();
}

Perhaps I could get better results by arranging the grid not rectangularly,
but radially around the center of the image.
As you can see, no GPU programming is involved.
I assume a "vertex program" is a GPU program, correct?
Can you also please explain how "store in a texture and use a lookup"
would work in this case? My humble apology for disturbing.

Now for the formula of the texture coordinates. I have never cared
to write it down in one single piece, because it would get too long.
But the idea is very simple:
for each given point in the resulting picture, ray tracing is performed.
The ray starts from the projector (which is thought of as a single point),
then it goes to the specific point of the image - which can be imagined as
a photo slide in an old slide projector -
then to the spherical mirror, which is usually located near the border of the dome.
There the ray is reflected, until it reaches the inside of the dome
- again a sphere.
The coordinates of the point on the dome directly correspond
to the coordinates in the fisheye image.
For the color (brightness) I need the partial derivatives of the
formula described above. I need the brightness correction because the
entire dome surface shall be illuminated evenly;
otherwise the parts of the dome near the spherical mirror
would be too bright.
The inverse projection - which would be necessary for a grid that
is aligned radially around the center of the fisheye image - is more
elaborate and involves the numerical solution of higher-order equations.

> > Could you please explain more about what you call the "vertex program code"
> > and how this can help passing the parameters.
>
> It cannot help passing the parameters; the idea is to store the
> parameters and the algorithm to calculate the actual coordinates in a
> program that is executed on the GPU to transform the picture.
> See the -vo gl customprog and customtex suboptions and TOOLS/*.fp for
> examples of fragment programs. Vertex programs are similar, but
> fragment programs operate on pixels (color values) whereas vertex
> programs operate on vertex coordinates.

You seem to mean some kind of lookup-table texture: the value of a pixel
in the texture would not just be RGB, but instead it would be a pointer
to a position inside the source image. Is such a thing possible?
And if so, how? My vertex-coordinate array (trans_array) is similar,
but it resides not on the graphics card, but in ordinary memory.
And it is accessed by the CPU, not the GPU.
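```

If I understand the idea, such a lookup might look roughly like the
following fragment program (only a guess, modeled on the TOOLS/*.fp
examples mentioned above; texture[1] would be a second texture holding
the precomputed source coordinates):

```
!!ARBfp1.0
TEMP coord;
# fetch the stored source-image coordinate from the lookup texture
TEX coord, fragment.texcoord[0], texture[1], 2D;
# use it to sample the actual video frame (a dependent texture read)
TEX result.color, coord, texture[0], 2D;
END
```

```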

You see that I am quite new to OpenGL. I would be thankful for any hints
on how my code can be improved. A speed-up would be most welcome.

And please also shed some light on how the geometry parameters needed
for the mapping can get from user input into glDrawTex - or into
another place where they can be used for the distortion.

GPU programming would be interesting, of course, but I would also like
to keep compatibility with as many graphics cards as possible.

Yours,
Johannes

```