How to use CassiniDev.Lib without the timeout? (C#)

I am using the CassiniDev.Lib4 DLL and noticed that the server stops responding after a certain amount of time.
Looking at the code in CassiniServer.cs I could see that a timeout of 60 seconds is set:
_server = new Server(port, virtualPath, applicationPath, ipAddress, hostname, 60000);
How can I avoid any timeout of the server? And why is there a timeout?
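One idea would be to bypass CassiniServer's hard-coded value and construct the Server directly with a much larger timeout; a sketch, assuming the last constructor argument really is the timeout in milliseconds (I have not verified this against the CassiniDev internals):
_server = new Server(port, virtualPath, applicationPath, ipAddress,
                     hostname, int.MaxValue); // effectively no timeout, if that argument is the timeout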
EDIT: Fiddler tells me:
HTTP/1.1 502 Fiddler - Connection Failed
Content-Type: text/html; charset=UTF-8
Connection: close
Timestamp: 09:18:38.367
The socket connection to localhost failed.
No connection could be made because the target machine actively refused it 127.0.0.1:1278
EDIT 2: I'm no longer sure this has to do with an implemented timeout, because I timed it and can't see a consistent 60-second window. Sometimes the server stopped responding as little as 40 seconds after the last click. Or could it be that a cached page was loaded and the last click didn't trigger a request?
I am really looking forward to your hints!
Best regards,
KB

As a quick workaround I uncommented the code of DecrementRequestCount() and IncrementRequestCount() in Server.cs. I think there is still a bug in CassiniDev.Lib4.
Cassini now seems to run properly and no longer stops responding.
I'm sorry I didn't have more time to dive deeper into this, but I would appreciate any hints or fixes.

Related

HttpWebRequest how to completely close connection to server?

I am writing a website parser. Apparently the site (an Apache server) has some kind of protection against it: after about 220 requests, every request comes back with "The operation has timed out".
What I have tried:
1) Wrapping the HttpWebResponse in a using block.
2) Putting a Thread.Sleep between requests.
3) Sleeping for 10 minutes after every 200 requests so the connection can close.
4) Calling HttpWebRequest.Abort().
5) Calling HttpWebResponse.Close().
I even tried sending a Connection: keep-alive header with the first request and Connection: close with the following ones.
Nothing helps, yet when I restart the program everything immediately works again.
Can you tell me how to completely disconnect from the server?
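For reference, a minimal sketch of the combination I have been trying (the URL is a placeholder):
using System.IO;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("https://example.com/page"); // placeholder URL
request.KeepAlive = false;                 // ask the server for "Connection: close"
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        string html = reader.ReadToEnd();  // read the body fully so the connection can be shut down cleanly
    }
}
finally
{
    request.Abort();                       // make sure nothing is left hanging
}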

gRPC connection gets cut after 60 seconds of inactivity

I have been trying to set up a gRPC API capable of streaming events to a client. Basically, after a client has subscribed, the server will use gRPC's "Server Streaming" feature to send any new event to the client.
I expect there to be periods of inactivity, where the connection should remain active. However, with my current setup it seems Nginx is cutting the connection after 60 seconds of inactivity with the following exception at the client:
Grpc.Core.RpcException: Status(StatusCode="Internal", Detail="Error starting gRPC call. HttpRequestException: An error occurred while sending the request. IOException: The request was aborted. IOException: The response ended prematurely, with at least 9 additional bytes expected.", DebugException="System.Net.Http.HttpRequestException: An error occurred while sending the request.
---> System.IO.IOException: The request was aborted.
---> System.IO.IOException: The response ended prematurely, with at least 9 additional bytes expected.
The question is: why, and how can I prevent it?
My setup
The API is built in ASP.NET Core 3 (will probably upgrade to .NET 5 soon) and is running in a Docker container on a Digital Ocean server.
Nginx is also running in a Docker container on the server and works as a reverse proxy for the API (among other things).
The client is a simple C# client written in .NET Core and is run locally.
What have I tried?
I have tried to connect to the Docker image directly on the server using grpc_cli (bypassing Nginx), where the connection remains active for long periods of inactivity without any issues. So I can't see what else it could be, except Nginx. Also, most of Nginx's default timeout values seem to be 60 seconds.
I have tried these Nginx settings and various combinations of them, yet haven't found the right one (or the right combination) yet:
location /commands.CommandService/ {
    grpc_pass grpc://commandApi;
    grpc_socket_keepalive on;
    grpc_read_timeout 3000s;    # These are recommended everywhere,
    grpc_send_timeout 3000s;    # but I haven't had any success
    grpc_next_upstream_timeout 0;
    proxy_request_buffering off;
    proxy_buffering off;
    proxy_connect_timeout 3000s;
    proxy_send_timeout 3000s;
    proxy_read_timeout 3000s;
    proxy_socket_keepalive on;
    keepalive_timeout 90s;
    send_timeout 90s;
    client_body_timeout 3000s;
}
The most common suggestion for people with similar issues is to use grpc_read_timeout and grpc_send_timeout, but they don't work for me. I guess it makes sense since I'm not actively sending/receiving anything.
My client code looks like this:
var httpClientHandler = new HttpClientHandler();
var channel = GrpcChannel.ForAddress("https://myapi.com", new GrpcChannelOptions()
{
    HttpClient = new HttpClient(httpClientHandler) { Timeout = Timeout.InfiniteTimeSpan },
});
var commandService = channel.CreateGrpcService<ICommandService>();
var request = new CommandSubscriptionRequest()
{
    HandlerId = _handlerId
};
var sd = new CancellationTokenSource();
var r = new CallContext(callOptions: new CallOptions(deadline: null, cancellationToken: sd.Token));
await foreach (var command in commandService.SubscribeCommandsAsync(request, r))
{
    Console.WriteLine("Processing command: " + command.Id);
}
return channel;
To be clear, the call to the API works and I can receive commands from the server. If I just keep sending commands from the API, everything is working beautifully. But as soon as I stop for 60 seconds (I have timed it), the connection breaks.
A possible workaround would be to just keep sending a kind of heartbeat to keep the connection open, but I would prefer not to.
Does anyone know how I can fix it? Am I missing something obvious?
UPDATE: Turns out it wasn't Nginx. After I updated the API and the client to .NET 5 the problem disappeared. I can't say in what version this was fixed, but at least it's gone in .NET 5.
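On .NET 5 the client can also be told to send its own HTTP/2 keepalive pings via SocketsHttpHandler, which should keep an idle stream alive through a proxy; a sketch (I haven't needed it since the upgrade, and the intervals are arbitrary):
// Requires System.Net.Http and Grpc.Net.Client on .NET 5.
var handler = new SocketsHttpHandler
{
    KeepAlivePingDelay = TimeSpan.FromSeconds(30),        // ping after 30s without activity
    KeepAlivePingTimeout = TimeSpan.FromSeconds(10),      // fail if no ack within 10s
    KeepAlivePingPolicy = HttpKeepAlivePingPolicy.Always  // ping even with no active calls
};
var channel = GrpcChannel.ForAddress("https://myapi.com", new GrpcChannelOptions
{
    HttpHandler = handler
});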
Not sure this is an Nginx issue, looks like a client connection problem.
Your results look very similar to an issue I had that should have been fixed in a .NET Core 3.0 patch. Try updating to a newer version of .NET and see if that fixes the problem.
Alternatively, it could be a problem with the maximum number of connections. Try setting MaxConcurrentConnections for the Kestrel server (in appsettings.json):
{
  "Kestrel": {
    "Limits": {
      "MaxConcurrentConnections": 100,
      "MaxConcurrentUpgradedConnections": 100
    }
  }
}

.NET/C# to MySql running on linux - exception on first command, but subsequent commands do work

We have a really crazy situation. I can't post specifics, so I'm just looking for general guidance. We have already opened a ticket with Oracle/MySQL support. I'm just looking to see if anyone else has run into this situation or anything similar. Here is our scenario:
Windows 2012 R2 Server with .NET 4.7.1 running.
Simple Windows Forms .NET application.
We are trying to run a simple query against a Linux MySql Server. MySql is Enterprise Version 5.7.x.
On the first attempted connection, the Windows Forms app locks the UI, waits about 15 seconds, and then reports back that there is an error running the command. The error is shown below.
System.ApplicationException: An exception occurred on the following sql command:select * from tablename where compl_date >= '2019-12-17 04:44:34 PM' ---> MySql.Data.MySqlClient.MySqlException: Authentication to host 'ip address' for user 'userid' using method 'mysql_native_password' failed with message: Reading from the stream has failed. ---> MySql.Data.MySqlClient.MySqlException: Reading from the stream has failed. ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.
When this error pops up, if I click on the "Continue" button, subsequent calls to the database work as intended (at about a 95% rate).
On the server, the mysqld error logs are shown below for the first call. Subsequent calls do work.
2019-12-16T22:06:29.554171Z 3496 [Warning] IP address 'client ip address' could not be resolved: Name or service not known
2019-12-16T22:06:50.188443Z 3496 [Note] Aborted connection 3496 to db: 'drupaldb' user: 'userid' host: 'ip address' (Got an error reading communication packets)
2019-12-17T02:53:17.832725Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 11355ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
2019-12-17T03:25:18.200855Z 3527 [Note] Got an error reading communication packets
2019-12-17T03:25:37.167395Z 3528 [Note] Got packets out of order
2019-12-17T03:25:37.382512Z 3529 [Note] Got packets out of order
2019-12-17T03:25:47.688836Z 3530 [Note] Bad handshake
2019-12-17T14:26:33.619967Z 4803 [Note] Got timeout reading communication packets
2019-12-17T19:34:34.741441Z 4851 [Note] Got timeout reading communication packets
2019-12-17T19:47:47.595426Z 4853 [Note] Got timeout reading communication packets
2019-12-17T19:48:45.586357Z 4854 [Note] Got timeout reading communication packets
If you have some general ideas, let me know.
FYI, we have some other linux/mysql instances, and this runs just fine.
At this point, we think we have solved the problem, at least for the short term. Both server and client are sitting on a private network. We think the database server is trying to send an SSL certificate to the Windows client, which is also on this private network, and that the Windows client is not accepting the certificate, causing the failure on the first connection attempt. Adding the option "SslMode=None" to the connection string seems to resolve the issue.
Blog post we found that helped us: https://blog.csdn.net/fancyf/article/details/78295964
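For reference, a sketch of the change (server address and password are placeholders; SslMode=None is the only part that matters here):
using MySql.Data.MySqlClient;

// SslMode=None skips the SSL handshake that we believe was failing on the first attempt.
var connectionString =
    "Server=10.0.0.10;Database=drupaldb;Uid=userid;Pwd=secret;SslMode=None";

using (var connection = new MySqlConnection(connectionString))
using (var command = new MySqlCommand(
    "select * from tablename where compl_date >= '2019-12-17 04:44:34 PM'", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... process rows ...
        }
    }
}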

.NET WebSockets forcibly closed despite keep-alive and activity on the connection

We have written a simple WebSocket client using System.Net.WebSockets. The KeepAliveInterval on the ClientWebSocket is set to 30 seconds.
The connection is opened successfully and traffic flows as expected in both directions, or if the connection is idle, the client sends Pong requests every 30 seconds to the server (visible in Wireshark).
But after 100 seconds the connection is abruptly terminated due to the TCP socket being closed at the client end (watching in Wireshark we see the client send a FIN). The server responds with a 1001 Going Away before closing the socket.
After a lot of digging we have tracked down the cause and found a rather heavy-handed workaround. Despite a lot of Google and Stack Overflow searching we have only seen a couple of other examples of people posting about the problem and nobody with an answer, so I'm posting this to save others the pain and in the hope that someone may be able to suggest a better workaround.
The source of the 100 second timeout is that the WebSocket uses a System.Net.ServicePoint, which has a MaxIdleTime property to allow idle sockets to be closed. On opening the WebSocket if there is an existing ServicePoint for the Uri it will use that, with whatever the MaxIdleTime property was set to on creation. If not, a new ServicePoint instance will be created, with MaxIdleTime set from the current value of the System.Net.ServicePointManager MaxServicePointIdleTime property (which defaults to 100,000 milliseconds).
The issue is that neither WebSocket traffic nor WebSocket keep-alives (Ping/Pong) appear to register as traffic as far as the ServicePoint idle timer is concerned. So exactly 100 seconds after opening the WebSocket it just gets torn down, despite traffic or keep-alives.
Our hunch is that this may be because the WebSocket starts life as an HTTP request which is then upgraded to a websocket. It appears that the idle timer is only looking for HTTP traffic. If that is indeed what is happening that seems like a major bug in the System.Net.WebSockets implementation.
The workaround we are using is to set the MaxIdleTime on the ServicePoint to int.MaxValue. This allows the WebSocket to stay open indefinitely. But the downside is that this value applies to any other connections for that ServicePoint. In our context (which is a Load test using Visual Studio Web and Load testing) we have other (HTTP) connections open for the same ServicePoint, and in fact there is already an active ServicePoint instance by the time that we open our WebSocket. This means that after we update the MaxIdleTime, all HTTP connections for the Load test will have no idle timeout. This doesn't feel quite comfortable, although in practice the web server should be closing idle connections anyway.
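In code, the workaround boils down to something like this (a sketch; we look up the ServicePoint via the HTTPS form of the endpoint, since the WebSocket starts life as an HTTPS request, and the URL is a placeholder):
using System;
using System.Net;

// Affects every connection that maps to this ServicePoint, not just the WebSocket.
var uri = new Uri("https://example.com/socket");              // https form of the wss:// endpoint
ServicePoint servicePoint = ServicePointManager.FindServicePoint(uri);
servicePoint.MaxIdleTime = int.MaxValue;                      // effectively never idle out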
We also briefly explored whether we could create a new ServicePoint instance reserved just for our WebSocket connection, but couldn't see a clean way of doing that.
One other little twist which made this harder to track down is that although the System.Net.ServicePointManager MaxServicePointIdleTime property defaults to 100 seconds, Visual Studio is overriding this value and setting it to 120 seconds - which made it harder to search for.
I ran into this issue this week. Your workaround got me pointed in the right direction, but I believe I've narrowed down the root cause.
If a "Content-Length: 0" header is included in the "101 Switching Protocols" response from a WebSocket server, WebSocketClient gets confused and schedules the connection for cleanup in 100 seconds.
Here's the offending code from the .Net Reference Source:
//if the returned contentlength is zero, preemptively invoke calldone on the stream.
//this will wake up any pending reads.
if (m_ContentLength == 0 && m_ConnectStream is ConnectStream) {
    ((ConnectStream)m_ConnectStream).CallDone();
}
According to RFC 7230 Section 3.3.2, Content-Length is prohibited in 1xx (Informational) messages, but I've found it mistakenly included in some server implementations.
For additional details, including some sample code for diagnosing ServicePoint issues, see this thread: https://github.com/ably/ably-dotnet/issues/107
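Something along these lines can also show whether an existing ServicePoint is already in play for your endpoint (a sketch; the URL is a placeholder):
using System;
using System.Net;

// Inspect the ServicePoint the framework would use for a given endpoint.
var sp = ServicePointManager.FindServicePoint(new Uri("https://example.com")); // placeholder
Console.WriteLine($"MaxIdleTime:        {sp.MaxIdleTime} ms");
Console.WriteLine($"IdleSince:          {sp.IdleSince}");
Console.WriteLine($"CurrentConnections: {sp.CurrentConnections}");
Console.WriteLine($"ConnectionLimit:    {sp.ConnectionLimit}");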
I set the KeepAliveInterval for the socket to 0 like this:
theSocket.Options.KeepAliveInterval = TimeSpan.Zero;
That eliminated the problem of the WebSocket shutting down when the timeout was reached. But then again, it probably also turns off sending ping messages altogether.
I studied this issue recently, compared packet captures in Wireshark (Python's websocket-client versus .NET's ClientWebSocket), and found what happens. In the .NET client, Options.KeepAliveInterval only sends a packet to the server when no message has been received from the server within that period. But some servers only check whether there is active traffic coming from the client. So we have to manually send arbitrary packets (not necessarily ping packets; WebSocketMessageType has no ping type) to the server at regular intervals, even if the server side continuously sends packets. That's the solution.
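A minimal sketch of that kind of application-level heartbeat (the payload and the 30-second interval are arbitrary):
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

static async Task SendHeartbeatsAsync(ClientWebSocket socket, CancellationToken token)
{
    var payload = Encoding.UTF8.GetBytes("heartbeat");
    while (socket.State == WebSocketState.Open && !token.IsCancellationRequested)
    {
        // Any frame counts as client activity; WebSocketMessageType has no ping type.
        await socket.SendAsync(new ArraySegment<byte>(payload),
                               WebSocketMessageType.Text,
                               endOfMessage: true,
                               cancellationToken: token);
        await Task.Delay(TimeSpan.FromSeconds(30), token);
    }
}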

RabbitMQ Brokerunreachable exception timeout intermittently from a machine

We have 11 Windows web app machines running IIS. These send messages to a RabbitMQ server for tasks. We are using Rabbit for basic work-queue functionality. For each message published, a new connection and a channel are created, pretty much like in the tutorial here - https://www.rabbitmq.com/tutorials/tutorial-two-dotnet.html
This works great most of the time, but in production, once or twice a day and sporadically from a different machine every time, we start getting this exception from ConnectionFactory.CreateConnection.
[BrokerUnreachableException: None of the specified endpoints were reachable]
RabbitMQ.Client.ConnectionFactory.CreateConnection():56
[TimeoutException: Connection to amqp://machinename.domain.net:5672 timed out]
RabbitMQ.Client.Impl.SocketFrameHandler.Connect(TcpClient socket, AmqpTcpEndpoint endpoint, Int32 timeout):65
RabbitMQ.Client.Impl.SocketFrameHandler..ctor(AmqpTcpEndpoint endpoint, Func`2 socketFactory, Int32 timeout):52
RabbitMQ.Client.Framing.Impl.ProtocolBase.CreateFrameHandler(AmqpTcpEndpoint endpoint, Func`2 socketFactory, Int32 timeout):8
RabbitMQ.Client.ConnectionFactory.CreateConnection():45
This is causing message loss. I have been investigating the max-concurrent-connections setting for each machine, but that did not lead anywhere. It does not coincide with our peak traffic either. The most interesting clue I have is that it happens in bursts, and when it happens it is ONLY happening to one of the 11 machines publishing messages to the queue at a time.
I am using the RabbitMQ .NET client.
Any ideas or pointers on what could be the possible cause?
Probably some sort of packet loss? Why not Try...Catch..Retry?
Run ping RabbitServerHostName -t (where RabbitServerHostName is the server where Rabbit is installed) in a command window and see after a couple of days how many packets you have lost.
Because of packet drops and network instability, retrying connection creation is almost always the recommended approach. The EasyNetQ library does this really well. It is, however, not very complex to implement your own timer-based retry that keeps trying when you get this exception until a connection is established.
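A sketch of such a retry, assuming the standard RabbitMQ .NET client types (the attempt count and back-off are arbitrary):
using System;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Exceptions;

static IConnection CreateConnectionWithRetry(ConnectionFactory factory, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return factory.CreateConnection();
        }
        catch (BrokerUnreachableException) when (attempt < maxAttempts)
        {
            // Back off a little longer on each failure before retrying.
            Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
        }
    }
}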
