[FFmpeg-devel] [PATCH] Make RTP work with IPv6 enabled v.2
Nicolas George
nicolas.george
Mon Oct 29 12:56:06 CET 2007
Hi.
First of all, thanks to Ronald for taking the time to push these patches.
On octidi, 8 Brumaire, year CCXVI, Luca Abeni wrote:
> I cannot comment on the code (I do not know IPv6), but this patch fixes
> the problem I am seeing when I configure without --disable-ipv6
It seems that not many people among the ffmpeg core developers know IPv6.
I can at least try to explain what my patch does.
It is not really IPv6, in fact. A more correct term would be "protocol
independent".
With IPv4, the way to create and connect a socket is to (1) create the
socket in the correct family, (2) allocate and fill a sockaddr_in structure,
(3) call connect. The allocation and filling of the sockaddr_in structure is
mostly static: the structure is declared as an automatic variable, and its
members are accessed directly.
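For reference, here is a minimal sketch of that classic IPv4-only pattern (illustrative code, not taken from the ffmpeg tree; the helper name and the numeric-address-only handling are just for the example):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int connect_ipv4(const char *ip, int port)
{
    struct sockaddr_in sin;
    int fd = socket(AF_INET, SOCK_STREAM, 0);     /* (1) socket in a fixed family */
    if (fd < 0)
        return -1;
    memset(&sin, 0, sizeof(sin));                 /* (2) fill the structure by hand */
    sin.sin_family      = AF_INET;
    sin.sin_port        = htons(port);
    sin.sin_addr.s_addr = inet_addr(ip);          /* numeric IPv4 address only */
    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {  /* (3) connect */
        close(fd);
        return -1;
    }
    return fd;
}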
The preferred method now is different: the program must call getaddrinfo
with text parameters (host name or numeric IP address, service name or
number) and some flags (for instance to disable the DNS lookup). getaddrinfo
returns a list of possible parameter sets: a socket family-type-protocol
triple for the socket system call, and a sockaddr structure (dynamically
allocated, with its size). Each set of parameters must then be tried in turn
until one succeeds.
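A rough sketch of the same operation done the protocol-independent way, with the loop over the parameter sets returned by getaddrinfo (again an illustration under my own naming, not the actual ffmpeg code):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int connect_any(const char *host, const char *service)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;       /* IPv4 or IPv6, whichever resolves */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, service, &hints, &res) != 0)
        return -1;

    for (ai = res; ai; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                       /* this parameter set worked */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}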
The reason for trying several sets of parameters is that there are possible
failure causes that cannot be tested without actually attempting the
connection (imagine that the remote host has an IPv6 address, but the daemon
you are trying to connect to is old and only bound to IPv4). This makes
things heavier to program, especially in asynchronous environments.
The same applies when binding instead of connecting: getaddrinfo has a flag
to indicate that the returned address will be used in bind instead of
connect (or sendto).
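The flag in question is AI_PASSIVE; the bind side then looks roughly like this (still only a sketch, with UDP chosen here because RTP runs over it):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int bind_any(const char *service)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;      /* e.g. RTP over UDP */
    hints.ai_flags    = AI_PASSIVE;      /* returned address is meant for bind */

    if (getaddrinfo(NULL, service, &hints, &res) != 0)
        return -1;

    for (ai = res; ai; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (bind(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}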
All this has really nothing to do with IPv6: it can be used in IPv4-only
contexts. In particular, getaddrinfo is the only standard (as in Single Unix
Specification) way to do host name lookup in a thread-safe manner
(gethostbyname_r is not standard). But with this scheme, as long as host
name / service / stream-or-datagram is enough to identify a peer, the format
of the address structure can change while keeping binary compatibility.
Now, as to what my patch actually does. This is not much, actually. The
current IPv4-only code bind()s the socket, unconditionally.
The current protocol-independent code only does the bind if the requested
local port is not 0. The condition was there from the very beginning
(revision 3663, ML message <URL:
http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/16673 >) and there are
no explanation of why or why not.
I just removed the condition (and reindented the lines accordingly).
I also changed the default value for the service argument to getaddrinfo: it
was NULL, which only requests DNS resolution. The same result could have been
achieved by removing yet another "if (port > 0)" condition two lines below
(which, in fact, may be a little bit cleaner, but less efficient).
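To illustrate the difference (hypothetical code, not the actual patch): with a NULL service, getaddrinfo only resolves the host, whereas with a port string, even "0", the returned sockaddr also carries a port and can be handed straight to bind.

#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>

static struct addrinfo *resolve(const char *host, const char *service)
{
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_DGRAM;
    /* default to "0" rather than NULL so the result is usable for bind */
    if (getaddrinfo(host, service ? service : "0", &hints, &res) != 0)
        return NULL;
    return res;                          /* caller frees with freeaddrinfo() */
}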
By the way, I am a little unsure about a point of policy: adding or removing
a test around a few lines of code means adding or removing one level of
indentation on the affected lines. What is the preferred way to submit such
a patch? Keeping it as one patch mixes cosmetic and functional changes, but
splitting it means one revision with incorrect indentation. Common sense
tells me to keep one patch if the number of lines is very low, and to split
if it is high. Is there an official statement about it?
Regards,
--
Nicolas George