Question Details

No question body available.

Tags

c# sockets timeout

Answers (2)

April 21, 2026 Score: 1 Rep: 80,583 Quality: Medium Completeness: 80%

Yeah, this isn't great code at all. I'm going to hazard that synchronous Send and Receive are being used, and those should be converted to async calls anyway. Async calls take their own cancellation tokens; you don't set a timeout on the socket directly.
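Something along these lines, for example (just a sketch, assuming .NET Core 3.0+ where SendAsync accepts a CancellationToken, an already-connected socket, and made-up payload data):

// inside an async method; the timeout is per call, not per socket
byte[] payload = Encoding.UTF8.GetBytes("hello");   // placeholder data
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(1));
try
{
    await socket.SendAsync(payload.AsMemory(), SocketFlags.None, cts.Token);
}
catch (OperationCanceledException)
{
    // the send did not complete within one second
}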


socket.SetSocketOption( SocketOptionLevel.Socket, SocketOptionName.SendTimeout, timeout );

This is the same as calling:

socket.SendTimeout = 1000;

lower down, so these two calls conflict with each other. The docs for SendTimeout note: "This option applies to synchronous Send calls only." You should be using asynchronous code anyway.
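If you do stay synchronous, only one of those two values wins, and a timed-out blocking Send surfaces as a SocketException. Roughly (sketch; connected socket and placeholder data assumed):

socket.SendTimeout = 1000;   // milliseconds, affects blocking Send calls only
try
{
    socket.Send(data);
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
{
    // the blocking Send did not complete within one second
}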


socket.Blocking = true;

Again "The Blocking property has no effect on asynchronous methods" so it doesn't do anything in that case. Even in synchronous mode, TCP sockets will block either way.


socket.SendBufferSize = 8192;

Why mess with this? Windows can allocate dynamic buffers much better than you can.


_socket.LingerState = new LingerOption( true, 10 );

The default behaviour is:

Graceful shutdown, immediate return — allowing the shutdown sequence to complete in the background. Although this is the default behavior, the application has no way of knowing when (or whether) the graceful shutdown sequence actually completes.

In most cases it shouldn't be necessary to set a Linger option.
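With the default left alone, the usual pattern is simply a graceful shutdown followed by close (sketch):

socket.Shutdown(SocketShutdown.Send);   // tell the peer no more data is coming
socket.Close();                         // returns immediately; queued data finishes sending in the background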

April 21, 2026 Score: 0 Rep: 39,576 Quality: Medium Completeness: 60%

A typical old-school pattern is to use one thread to accept connections, and then create a new thread to handle each connection, so that there can be multiple concurrent connections.

But that only means that sending or receiving data on one connection does not block new connections. It does not mean that sending or receiving won't block the thread handling that connection, so setting a timeout might very well be appropriate. But .Blocking defaults to true, so setting it should be redundant, and .SetSocketOption(...) and .SendTimeout = 1000 look like they just set the same property to two different values.
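For reference, a rough sketch of that old-school pattern (TcpListener used for brevity; the port, buffer size and echo logic are made up for illustration):

using System.Net;
using System.Net.Sockets;
using System.Threading;

var listener = new TcpListener(IPAddress.Any, 5000);
listener.Start();

while (true)
{
    Socket client = listener.AcceptSocket();          // only the accept thread blocks here
    new Thread(() =>
    {
        client.SendTimeout = 1000;                    // per-connection timeouts, in milliseconds
        client.ReceiveTimeout = 1000;
        try
        {
            var buffer = new byte[8192];
            int read;
            while ((read = client.Receive(buffer)) > 0)
            {
                client.Send(buffer, 0, read, SocketFlags.None);   // echo back
            }
        }
        catch (SocketException)
        {
            // timeout or connection reset: drop the connection
        }
        finally
        {
            client.Close();
        }
    }) { IsBackground = true }.Start();
}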

give the socket X seconds to perform the task, at which point it times out

As far as I can tell the example code should give the socket one second to send data before timing out, but wait indefinitely while receiving. Is the observed behavior something else?

The more modern approach would be to use the async APIs to limit the number of threads used for connections; these take a cancellation token that can be used to impose a timeout. Or, my preferred approach, use an existing library that implements whatever protocol you are using, assuming it is a standard one.
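As a sketch of that async shape (assumes .NET Core 3.0+ where ReceiveAsync/SendAsync accept a CancellationToken; the ten-second budget and echo logic are made up):

async Task HandleClientAsync(Socket client)
{
    var buffer = new byte[8192];
    using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
    try
    {
        int read = await client.ReceiveAsync(buffer.AsMemory(), SocketFlags.None, cts.Token);
        await client.SendAsync(buffer.AsMemory(0, read), SocketFlags.None, cts.Token);
    }
    catch (OperationCanceledException)
    {
        // the receive plus the echo did not finish within the ten-second budget
    }
    finally
    {
        client.Dispose();
    }
}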