Wednesday, September 12, 2012

Speeding up TCP connection-loss detection in a TCP client application

When writing a TCP server, it is easy to detect that a remote TCP peer has gone away by enabling TCP keep-alive: with this feature turned on, whenever no data is being transmitted on the connection, the kernel periodically sends a keep-alive probe and expects an ACK back from the remote peer.
If no ACK arrives after several retries, the OS declares the connection lost.
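
On Linux this takes just a few setsockopt() calls. A minimal sketch of the idea follows; the timing values (30 s idle, a probe every 5 s, 3 missed probes) are purely illustrative:

/* Sketch: enable TCP keep-alive on a connected socket fd and tune its timing.
 * The numbers below are examples, not recommendations. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int enable_keepalive(int fd)
{
    int on = 1;
    int idle = 30;      /* seconds the connection may stay idle before probing */
    int interval = 5;   /* seconds between keep-alive probes */
    int count = 3;      /* unanswered probes before the connection is declared dead */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0) return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0) return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) < 0) return -1;
    return 0;
}
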
When the connection is declared lost, the system (Linux) signals you by reporting the socket as "read ready" in select(), while the subsequent read() returns zero bytes.
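
A small sketch of what that looks like in the read loop (fd is assumed to be an already-connected TCP socket; depending on how the connection died, read() may instead fail with an error such as ETIMEDOUT rather than returning zero):

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static int wait_and_read(int fd, char *buf, size_t len)
{
    fd_set rfds;
    ssize_t n;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    /* Block until the socket becomes readable: either data arrived,
     * or the kernel has given up on the connection. */
    if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
        return -1;

    n = read(fd, buf, len);
    if (n == 0) {
        printf("connection lost (zero-length read)\n");
        return -1;
    }
    if (n < 0)
        return -1;   /* read error, e.g. ETIMEDOUT after keep-alive gave up */
    return (int)n;   /* bytes of real data received */
}
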
But when writing a TCP client, the tricky part is that a client almost always has data queued to send; when the connection is lost, the TCP stack keeps retransmitting that data automatically, and until the retransmission timer finally gives up, the keep-alive mechanism never gets a chance to kick in, since keep-alive probes are only sent on an idle connection.
Thus, if you are looking for a way to detect a lost TCP connection quickly in a client application, the keep-alive mechanism alone will do you no good.
A possible approach is to call ioctl(socket, SIOCOUTQ, &BufferUsedSize), which returns the amount of data still sitting in the socket's send queue; if that data has not been successfully sent (and acknowledged) for a certain period of time, the connection is probably lost.
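
A rough sketch of that idea follows. The stall threshold, the one-second polling interval, and the assumption that nothing new is being written while we wait are all illustrative choices, not part of the original recipe:

#include <linux/sockios.h>   /* SIOCOUTQ */
#include <sys/ioctl.h>
#include <time.h>
#include <unistd.h>

/* Returns 1 if data has been stuck in the send queue for longer than
 * max_stall_seconds, 0 once the queue drains, -1 on error. */
static int send_queue_stalled(int fd, int max_stall_seconds)
{
    int queued = 0;
    time_t start = time(NULL);

    for (;;) {
        if (ioctl(fd, SIOCOUTQ, &queued) < 0)
            return -1;
        if (queued == 0)
            return 0;        /* everything we wrote has left the send queue */
        if (time(NULL) - start >= max_stall_seconds)
            return 1;        /* unsent/unacked data is not moving: probably lost */
        sleep(1);            /* poll once per second */
    }
}

For example, after a write() that you suspect is going nowhere, send_queue_stalled(fd, 10) would tell you within roughly ten seconds whether the peer is still acknowledging your data, instead of waiting minutes for the retransmission timeout.
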

reference:
man 7 tcp
