I'm having a little trouble with WCF streaming a file. I can stream files to the server that are less than 300 MB, but when I try a file of 300 MB or more, I get an error about 60% of the way through saying "An established connection was aborted by the software in your host machine". This error sounds like I'm closing the connection before the file is finished, but I can't find where.
The client code I have opens the connection, calls the Upload Method, waits for the return, then closes the connection. This works fine for small files.
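For reference, the client call is essentially this (a simplified sketch; IFileService, the factory, and the variable names are placeholders, not my actual code):

```csharp
// Simplified sketch of the client-side flow. IFileService and the
// factory are placeholders standing in for the real service contract.
using (FileStream fs = File.OpenRead(path))
{
    DataFileStream request = new DataFileStream
    {
        ID = id,
        FileName = Path.GetFileName(path),
        FileSize = fs.Length,
        StreamData = fs
    };
    IFileService proxy = factory.CreateChannel();   // ChannelFactory<IFileService>
    proxy.Upload(request);                          // returns once the server finishes reading
    ((IClientChannel)proxy).Close();
}
```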
The WCF server is hosted in a Windows Service, using the net.tcp binding. I've tried changing the buffer sizes etc, but still no luck.
I'm looking for some assistance in tracking down this issue.
Server Side Binding:
NetTcpBinding tcp = new NetTcpBinding(SecurityMode.None);
tcp.SendTimeout = TimeSpan.FromMinutes(10);
tcp.ReceiveTimeout = TimeSpan.FromMinutes(10);
tcp.MaxBufferSize = 65536; // 16384;
tcp.MaxBufferPoolSize = 204003200; // 655360;
tcp.MaxReceivedMessageSize = 204003200; // 2147483647;
tcp.TransferMode = TransferMode.Streamed;
tcp.ReaderQuotas = new XmlDictionaryReaderQuotas()
{
    MaxArrayLength = 2147483647
};
Client Side Binding:
NetTcpBinding tcp = new NetTcpBinding(SecurityMode.None);
tcp.SendTimeout = TimeSpan.FromMinutes(10);
tcp.ReceiveTimeout = TimeSpan.FromMinutes(10);
tcp.MaxBufferSize = 65536; // 16384;
tcp.MaxBufferPoolSize = 204003200; // 655360;
tcp.MaxReceivedMessageSize = 204003200; // 2147483647;
tcp.TransferMode = TransferMode.Streamed;
tcp.ReaderQuotas = new XmlDictionaryReaderQuotas()
{
    MaxArrayLength = 2147483647
};
Class I use for the File Stream:
[MessageContract]
public class DataFileStream
{
    [MessageHeader(MustUnderstand = true)]
    public String ID { get; set; }

    [MessageHeader(MustUnderstand = true)]
    public String FileName { get; set; }

    [MessageHeader(MustUnderstand = true)]
    public long FileSize { get; set; }

    [MessageBodyMember(Order = 1)]
    public Stream StreamData { get; set; }
}
And the errors I receive:
Exception: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:48:35.9230000'
Inner Exception: The write operation failed, see inner exception.
Inner Exception: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:48:35.9230000'.
Inner Exception: An established connection was aborted by the software in your host machine
Thanks for the help in advance.
I think it's because you have your MaxReceivedMessageSize (and possibly MaxBufferPoolSize) set to 204003200 bytes, which is only about 194 MB. Try doubling it to 408006400 (about 389 MB).
Change it to this:
tcp.MaxBufferPoolSize = 408006400;
tcp.MaxReceivedMessageSize = 408006400;
Reading the MSDN documentation, I don't think MaxBufferPoolSize will be the issue (but it's worth increasing it just to make sure). If this works, reset it to your original limit and test again.
The reason this happens: once the server has received 204003200 bytes (about 194 MB) of the transfer, it aborts, because you told it that was the maximum message size to expect. This is by design, to stop a malicious or runaway client from uploading an extremely large file and exhausting the server's resources.
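If you genuinely need to accept very large files, note that with TransferMode.Streamed only the message headers are buffered, so MaxBufferSize can stay small while MaxReceivedMessageSize bounds the total stream length. A sketch (the long.MaxValue ceiling is purely illustrative; pick a real upper bound for production):

```csharp
// Sketch: streamed transfers buffer only headers, so MaxBufferSize can stay
// small; MaxReceivedMessageSize limits the whole stream instead.
NetTcpBinding tcp = new NetTcpBinding(SecurityMode.None);
tcp.TransferMode = TransferMode.Streamed;
tcp.MaxBufferSize = 65536;                  // per-read buffer, not the file-size limit
tcp.MaxReceivedMessageSize = long.MaxValue; // illustrative only; choose a sane ceiling
tcp.SendTimeout = TimeSpan.FromMinutes(10);
tcp.ReceiveTimeout = TimeSpan.FromMinutes(10);
```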
Related
I'm uploading rather a lot of data (30 GB+) across thousands of files. The whole process takes a while, but I've found that consistently, 15 minutes after the transfers start, the upload process fails and I get errors for each file currently being transferred (the uploads are multithreaded, so there are several in flight at once). The error I'm getting is "error: Amazon.S3.AmazonS3Exception: The difference between the request time and the current time is too large. ---> Amazon.Runtime.Internal.HttpErrorResponseException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden."
Seeing as it's exactly 15 minutes from the start of the whole process that this thing crashes, I think the client may be timing out; however, I've set my client's timeout to 45 minutes, I think:
{
    var client = new AmazonS3Client(new AmazonS3Config()
    {
        RegionEndpoint = RegionEndpoint.EUWest2,
        UseAccelerateEndpoint = true,
        Timeout = TimeSpan.FromMinutes(45),
        ReadWriteTimeout = TimeSpan.FromMinutes(45),
        RetryMode = RequestRetryMode.Standard,
        MaxErrorRetry = 10
    });

    Parallel.ForEach(srcObjList, async srcObj =>
    {
        try
        {
            var putObjectRequest = new PutObjectRequest();
            putObjectRequest.BucketName = destBucket;
            putObjectRequest.Key = srcObj.Key;
            putObjectRequest.FilePath = filePathString;
            putObjectRequest.CannedACL = S3CannedACL.PublicRead;

            var uploadTask = client.PutObjectAsync(putObjectRequest);
            lock (threadLock)
            {
                syncTasks.Add(uploadTask);
            }
            await uploadTask;
        }
        catch (Exception e)
        {
            Debug.LogError($"Copy task ({srcObj.Key}) failed with error: {e}");
            throw;
        }
    });

    try
    {
        await Task.WhenAll(syncTasks.Where(x => x != null).ToArray());
    }
    catch (Exception e)
    {
        Debug.LogError($"Upload encountered an issue: {e}");
    }
});

await transferOperations;
Debug.Log("Done!");
The documentation doesn't specify a maximum timeout value, but the fact that the failure happens at exactly 15 minutes is telling: S3 rejects any request whose signed timestamp differs from the server's clock by more than 15 minutes, and "The difference between the request time and the current time is too large" is exactly that RequestTimeTooSkewed error. If a request ends up being sent well after it was created and signed (plausible when thousands of uploads are started at once), it can fall outside that window.
This answer suggests a clock synchronization difference might also be the cause, but then I'd wonder why the transfer starts at all.
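If the signature window is the culprit, one mitigation (a sketch, not tested against your setup; it reuses client, srcObjList, destBucket and filePathString from your code, and the parallelism value of 8 is an assumed tuning knob) is to cap the number of uploads in flight, so each PutObjectRequest is created, signed and sent close together instead of queuing thousands of requests up front:

```csharp
// Sketch: throttle concurrency so each request is built shortly before it is sent.
var throttle = new SemaphoreSlim(8); // assumed degree of parallelism; tune as needed
IEnumerable<Task> uploadTasks = srcObjList.Select(async srcObj =>
{
    await throttle.WaitAsync();
    try
    {
        var req = new PutObjectRequest
        {
            BucketName = destBucket,
            Key = srcObj.Key,
            FilePath = filePathString,
            CannedACL = S3CannedACL.PublicRead
        };
        await client.PutObjectAsync(req); // request object is built just before sending
    }
    finally
    {
        throttle.Release();
    }
});
await Task.WhenAll(uploadTasks);
```

This also avoids the `async` lambda inside `Parallel.ForEach`, which fires the uploads without actually awaiting them.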
I have a simple REST service, and am calling it with WCF via WebChannelFactory.
When I set the binding to use TransferMode.Streamed, the connections do not seem to be re-used, and after several requests (usually ServicePointManager.DefaultConnectionLimit, but sometimes a few more), I run out of connections (the request call hangs, and then I get a Timeout exception).
[ServiceContract]
public interface IInviteAPI {
    [OperationContract]
    [WebGet(UriTemplate = "invites/{id}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    Invite GetInvite(string id);
}
[STAThread]
static int Main(string[] args) {
    ServicePointManager.DefaultConnectionLimit = 16; // make a larger default
    WebHttpBinding binding = new WebHttpBinding();
    binding.TransferMode = TransferMode.Streamed;
    try {
        WebChannelFactory<IInviteAPI> factory = new WebChannelFactory<IInviteAPI>(binding, new Uri("http://example.com/invite"));
        IInviteAPI channel = factory.CreateChannel();
        for (int i = 0; i < 100; i++) {
            Invite data = channel.GetInvite("160"); // fails on i==16
        }
        ((IChannel)channel).Close();
    }
    catch (Exception ex) {
        Debug.WriteLine(ex);
    }
    return 0;
}
System.TimeoutException: The request channel timed out while waiting for a reply after 00:00:59.9969999.
There are many, many posts on the net about not closing the channel; that is not the problem here, as I am simply making the same request multiple times on the same channel.
If I remove the binding.TransferMode = TransferMode.Streamed; line it works perfectly.
I can also create and close the channel inside the loop, and it has the same issue:
for (int i = 0; i < 100; i++) {
    IInviteAPI channel = factory.CreateChannel();
    Invite data = channel.GetInvite("160"); // fails on i==20
    ((IChannel)channel).Close();
}
Interestingly, if I add a GC.Collect() in the loop, it does work!! After much detailed tracing through the .Net code, this seems to be because the ServicePoint is only held with a weak reference in the ServicePointManager. Calling GC.Collect then finalizes the ServicePoint, and closes all the current connections.
Is there something I am missing? How can I keep TransferMode.Streamed and be able to call the service multiple times, with a reasonable ServicePointManager.DefaultConnectionLimit?
(I need TransferMode.Streamed because other calls on the service are for transferring huge archives of data up to 1GB)
Update:
If I run netstat -nb I can see that there are 16 connections to the server in ESTABLISHED state. After 30 seconds or so, they change to CLOSE_WAIT (presumably the server closes the idle connection), but these CLOSE_WAIT connections never disappear after that, no matter how big I set the timeouts.
It seems like a bug in .Net: the connections should be being re-used, but are not. The 17th request is just being queued forever.
I know that WCF's default configuration limits concurrent inbound connections to 20 (see the documentation on WCF default limits, concurrency and scalability).
You can increase the concurrent connection limit, or alternatively pace the client so it doesn't hold concurrent connections:
for (int i = 0; i < 100; i++) {
    Invite data = channel.GetInvite("160");
    Thread.Sleep(1000); // avoid concurrent connections to the WCF service
}
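If you want to raise the server-side limit instead, the service's ServiceThrottlingBehavior can be adjusted. A sketch for a self-hosted service (the service type name and the limit values are illustrative, not taken from the question):

```csharp
// Sketch: raise WCF's service-side throttling limits on a self-hosted service.
// InviteAPIService is a placeholder for the actual service implementation type.
var host = new ServiceHost(typeof(InviteAPIService));
var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null) {
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 64;    // illustrative values; defaults vary by .NET version
throttle.MaxConcurrentSessions = 64;
host.Open();
```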
I have an FTP client, running as part of a Windows service, that gets information from an FTP server on a scheduled basis. My issue is that sometimes the FTP server is down for planned maintenance. When this happens, my FTP client still calls out on its schedule and fails with the following error:
System.Net.WebException. The underlying connection was closed: An unexpected error occurred on a receive
I get the error above twice. After this, I get the following timeout error every time indefinitely:
System.Net.WebException The operation has timed out
Even after the maintenance window is complete, my Windows service keeps timing out when attempting to connect to the FTP server. The only way we can solve the problem is by restarting the Windows service. The following is my FTP client code:
var _request = (FtpWebRequest)WebRequest.Create(configuration.Url);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
_request.KeepAlive = false;
_request.Timeout = configuration.RequestTimeoutInMilliseconds;
_request.Proxy = null; // Do NOT use a proxy
_request.Credentials = new NetworkCredential(configuration.UserName, configuration.Password);
_request.ServicePoint.ConnectionLeaseTimeout = configuration.RequestTimeoutInMilliseconds;
_request.ServicePoint.MaxIdleTime = configuration.RequestTimeoutInMilliseconds;
try
{
    using (var _response = (FtpWebResponse)_request.GetResponse())
    using (var _responseStream = _response.GetResponseStream())
    using (var _streamReader = new StreamReader(_responseStream))
    {
        this.c_rateSourceData = _streamReader.ReadToEnd();
    }
}
catch (Exception)
{
    throw; // a bare "throw;" preserves the stack trace; "throw genericException;" would reset it
}
Anyone know what the issue might be?
We are encountering this exception very often in our production code, without any increase in the number of requests to Couchbase or any memory pressure on the server itself.
The node has been allocated 30 GB of RAM and usage peaks at about 3 GB, yet every now and then this exception is thrown. The bucket is opened only once per application lifetime, and only get and upsert operations are performed afterwards. The connection is initialised like this:
Config = new ClientConfiguration()
{
    Servers = serverList,
    UseSsl = false,
    DefaultOperationLifespan = 2500,
    BucketConfigs = new Dictionary<string, BucketConfiguration>
    {
        { bucketName, new BucketConfiguration
            {
                BucketName = bucketName,
                UseSsl = false,
                DefaultOperationLifespan = 2500,
                PoolConfiguration = new PoolConfiguration
                {
                    MaxSize = 2000,
                    MinSize = 200,
                    SendTimeout = (int)Configuration.Config.Instance.CouchbaseConfig.Timeout
                }
            }
        }
    }
};
Cluster = new Cluster(Config);
Bucket = Cluster.OpenBucket();
Can you please let me know if this initialisation is correct and more importantly what to check on the Couchbase server to find the cause of this issue? I have checked all logs on the server but could not find anything special at the time when those errors are being thrown.
Thank you,
Stacktrace:
System.Exception.Couchbase exception
at ###.DataLayer.Couchbase.CouchbaseUserOperations.Get()
at ###.API.Services.BaseService`1.SetUserID()
at ###.API.Services.EventsService+<GetResponse>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.EventsService.GetResponse()
at ###.API.Services.BaseService`1+<Any>d__28.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.BaseService`1.Any()
at lambda_method()
at ServiceStack.Host.ServiceRunner`1.Execute()
at ServiceStack.Host.ServiceRunner`1.Process()
at ServiceStack.Host.ServiceExec`1.Execute()
at ServiceStack.Host.ServiceRequestExec`2.Execute()
at ServiceStack.Host.ServiceController.ManagedServiceExec()
at ServiceStack.Host.ServiceController+<>c__DisplayClass11.<RegisterServiceExecutor>b__f()
at ServiceStack.Host.ServiceController.Execute()
at ServiceStack.HostContext.ExecuteService()
at ServiceStack.Host.RestHandler.ProcessRequestAsync()
at ServiceStack.Host.Handlers.HttpAsyncTaskHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest()
at System.Web.HttpApplication+CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep()
at System.Web.HttpApplication+PipelineStepManager.ResumeSteps()
at System.Web.HttpApplication.BeginProcessRequestNotification()
at System.Web.HttpRuntime.ProcessRequestNotificationPrivate()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
Caused by: System.Exception : Couchbase.Core.NodeUnavailableException: The node 172.31.34.105:11210 that the key was mapped to is either down or unreachable. The SDK will continue to try to connect every 1000ms. Until it can connect every operation routed to it will fail with this exception.
at ###.DataLayer.Couchbase.CouchbaseUserOperations.Get()
at ###.API.Services.BaseService`1.SetUserID()
at ###.API.Services.EventsService+<GetResponse>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.EventsService.GetResponse()
at ###.API.Services.BaseService`1+<Any>d__28.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start()
at ###.API.Services.BaseService`1.Any()
at lambda_method()
at ServiceStack.Host.ServiceRunner`1.Execute()
at ServiceStack.Host.ServiceRunner`1.Process()
at ServiceStack.Host.ServiceExec`1.Execute()
at ServiceStack.Host.ServiceRequestExec`2.Execute()
at ServiceStack.Host.ServiceController.ManagedServiceExec()
at ServiceStack.Host.ServiceController+<>c__DisplayClass11.<RegisterServiceExecutor>b__f()
at ServiceStack.Host.ServiceController.Execute()
at ServiceStack.HostContext.ExecuteService()
at ServiceStack.Host.RestHandler.ProcessRequestAsync()
at ServiceStack.Host.Handlers.HttpAsyncTaskHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest()
at System.Web.HttpApplication+CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep()
at System.Web.HttpApplication+PipelineStepManager.ResumeSteps()
at System.Web.HttpApplication.BeginProcessRequestNotification()
at System.Web.HttpRuntime.ProcessRequestNotificationPrivate()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper()
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification()
A NodeUnavailableException can be returned for any number of network-related issues. However, since you mentioned you are running on AWS, it's likely the TCP keep-alive settings need to be tuned on the client.
Your MinSize of 200 connections is so large that you are probably not using them all, and they sit idle until the AWS LB decides to shut them down. When this happens, the SDK temporarily puts the failed node into a down state (retrying every 1000 ms) and tries to reconnect; during that time, any keys mapped to that node fail with this exception.
This blog describes how to set the TCP keep-alives time and interval: http://blog.couchbase.com/introducing-couchbase-.net-sdk-2.1.0-the-asynchronous-couchbase-.net-client
var config = new ClientConfiguration
{
    EnableTcpKeepAlives = true,    // default is true
    TcpKeepAliveTime = 1000*60*60, // 60 minutes idle before the first keep-alive probe
    TcpKeepAliveInterval = 5000    // then a probe every 5 seconds
};
var cluster = new Cluster(config);
var bucket = cluster.OpenBucket();
That assumes you are using version 2.1.0 or greater of the client. If you are not, you can do it through the ServicePointManager:
//setting keep-alive time to 200 seconds
ServicePointManager.SetTcpKeepAlive(true, 200000, 1000);
You'll have to set that to a value lower than the AWS LB's idle timeout (I believe it's 60 seconds).
You should also probably set your connection pool min and max a bit lower, like 5 and 10.
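For example (a sketch; 5 and 10 are just the ballpark values suggested above, not tested against your workload):

```csharp
// Sketch: a much smaller pool than MaxSize = 2000 / MinSize = 200.
var poolConfig = new PoolConfiguration
{
    MinSize = 5,  // a few warm connections kept open
    MaxSize = 10  // room for bursts without hundreds of idle sockets
};
```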
The problem was not fully solved, since we still encounter timeouts, but at a lower rate. We improved performance by using the ClusterHelper singleton instance as follows:
ClusterHelper.Initialize(
    new ClientConfiguration
    {
        Servers = serverList,
        UseSsl = false,
        DefaultOperationLifespan = 2500,
        EnableTcpKeepAlives = true,
        TcpKeepAliveTime = 1000*60*60,
        TcpKeepAliveInterval = 5000,
        BucketConfigs = new Dictionary<string, BucketConfiguration>
        {
            {
                "default",
                new BucketConfiguration
                {
                    BucketName = "default",
                    UseSsl = false,
                    Password = "",
                    PoolConfiguration = new PoolConfiguration
                    {
                        MaxSize = 50,
                        MinSize = 10
                    }
                }
            }
        }
    });
The exception is Remoting Exception - Authentication Failure. The detailed message says "Unable to read data from the transport connection: the connection was closed."
I'm having trouble creating two simple servers that can communicate as remote objects in C#. ServerInfo is just a class I created that holds the IP and port and can return the full address; it works fine, as I've used it before and debugged it. The server also starts just fine: no exception is thrown, and the channel is registered without problems. I'm using Windows Forms for the interface to call some of the methods on the server, and found no problems passing parameters from the Forms application to the server when debugging. All seems fine in that department.
public ChordServerProgram()
{
    RemotingServices.Marshal(this, "PADIBook");
    nodeInt = 0;
}

public void startServer()
{
    try
    {
        serverChannel = new TcpChannel(serverInfo.Port);
        ChannelServices.RegisterChannel(serverChannel, true);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}
I run two instances of this program. Then startNode is called on one of the instances of the application. The port is fine, the address generated is fine as well. As you can see, I'm using the IP for localhost, since this server is just for testing purposes.
public void startNode(String portStr)
{
    IPAddress address = IPAddress.Parse("127.0.0.1");
    Int32 port = Int32.Parse(portStr);
    serverInfo = new ServerInfo(address, port);
    startServer();
    //node = new ChordNode(serverInfo,this);
}
Then, in the other instance, through the interface again, I call another startNode method, giving it a seed server to get information from. This is where it goes wrong: when it calls the method on the seedServer proxy it just obtained, a RemotingException is thrown due to an authentication failure. (The parameter I actually want is the node; I'm using the int here just to make sure the ChordNode class has nothing to do with this error.)
public void startNode(String portStr, String seedStr)
{
    IPAddress address = IPAddress.Parse("127.0.0.1");
    Int32 port = Int32.Parse(portStr);
    serverInfo = new ServerInfo(address, port);

    IPAddress addressSeed = IPAddress.Parse("127.0.0.1");
    Int32 portSeed = Int32.Parse(seedStr);
    ServerInfo seedInfo = new ServerInfo(addressSeed, portSeed);

    startServer();

    ChordServerProgram seedServer = (ChordServerProgram)Activator.GetObject(typeof(ChordServerProgram), seedInfo.GetFullAddress());
    // node = new ChordNode(serverInfo,this);
    int seedNode = seedServer.nodeInt;
    // node.chordJoin(seedNode.self);
}
Try setting ensureSecurity to false, and it should start working:
ChannelServices.RegisterChannel(serverChannel, false);
You've specified that security is a must on your Remoting server in startServer() with:
ChannelServices.RegisterChannel(serverChannel, true);
Yet the 'client' end does not specify security, hence the authorisation error. You need to specify TCP channel security on both ends unless the server's security setting is 'false'. In your second startNode method, do the following before calling Activator.GetObject; note that no port is specified on the client TcpChannel, unlike the server end:
TcpChannel ClientChan = new TcpChannel();
ChannelServices.RegisterChannel(ClientChan, true);
Furthermore, unless you're doing it in code you haven't shown, you also don't seem to register a well-known service type on the server side, although you say it's been working in the debugger, so maybe that's not necessary in this case. See MSDN on RegisterWellKnownServiceType.
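For reference, server-side registration of a well-known type looks roughly like this (a sketch; the Singleton mode and the port variable are illustrative, and "PADIBook" matches the URI your constructor passes to RemotingServices.Marshal):

```csharp
// Sketch: expose ChordServerProgram as a well-known (server-activated) object.
TcpChannel serverChannel = new TcpChannel(port);
ChannelServices.RegisterChannel(serverChannel, true);
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(ChordServerProgram),
    "PADIBook",                     // same URI used with RemotingServices.Marshal above
    WellKnownObjectMode.Singleton); // one shared instance; SingleCall is the alternative
```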