I have very simple code that downloads files from a web server; here it is. As I said, it is very basic:
// use the web client to download
using (var client = new WebClient())
{
    // download locally
    client.DownloadFile(from, to);
}
But for some clients the file does not download completely and no exception is thrown. These clients are in different locations, and all show the same behavior: WebClient downloads exactly 10 MB of ANY file above 10 MB. An 8 MB file comes down as 8 MB, a 20 MB file as 10 MB, a 34 MB file as 10 MB. The upshot is that we have to ask those users to stop using the software.
This issue is not tied to a particular computer: many of the affected users are on laptops, and for some the download works fine from home but not at work, while for others it is exactly the reverse - it fails at home but works at work. The behavior also differs between clients within the same physical office.
We have talked to their IT departments; they have no problem going to our HTTP-browsable directory and downloading many files over 10 MB, it works perfectly, and they state they have never seen such an issue. The issue seems to be spreading, and since the last Windows 10 update many more clients have started to have it.
As a side note, this download code has been unchanged and running for 5 years with nearly no issues.
Does anyone know why a download would complete without any error (inside a try..catch) without having downloaded the whole file? And why would all these different affected clients be cut off at EXACTLY the same 10 MB point?
I also wanted to add that we have tried reinstalling the .NET Framework for these users in the past, thinking it must be an issue with that, but it made no difference.
One extra detail: the files they are trying to download are in an anonymous-access folder, so no login is required and the folder is browsable. All users with the issue can use Chrome or Edge to navigate to the folder, right-click, and download, and the file is complete that way. Only .NET cannot download files above 10 MB on their PCs.
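For what it's worth, one way to confirm the truncation on an affected machine would be something like the sketch below: compare the Content-Length the server reports against what actually lands on disk, since DownloadFile itself reports nothing. The from/to values here are placeholders.

using System;
using System.IO;
using System.Net;

class DownloadCheck
{
    static void Main()
    {
        var from = "http://example.com/files/big.bin"; // placeholder URL
        var to = @"C:\temp\big.bin";                   // placeholder local path

        using (var client = new WebClient())
        {
            client.DownloadFile(from, to);

            // Headers of the last response; Content-Length is what the server claimed.
            var reported = client.ResponseHeaders?[HttpResponseHeader.ContentLength];
            var actual = new FileInfo(to).Length;

            Console.WriteLine($"Reported: {reported ?? "unknown"}, written to disk: {actual}");
        }
    }
}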
Short Version
Is there a more efficient or less-resource-intensive way in C# to create a zip file of a folder recursively than using System.IO.Compression.ZipFile.CreateFromDirectory?
Long Version
I have a REST API running in an Azure App Service (scaled at P2V2: 420 total ACU and 7 GB memory). One of my endpoints accepts a POST request and generates a zip file using the following code. It then returns the name of the zip file that was generated, wrapped in a standard JSON format.
System.IO.Compression.ZipFile.CreateFromDirectory(sourcePath, outputPath);
This runs fine on my local machine but appears to have scaling issues on the cloud server. When I scale up to the next App Service tier (P3V2: 820 ACU, 14 GB memory) it runs fine, but without scaling up the API returns a 503 Service Unavailable message. I am not sure which resource is constrained (CPU, memory, disk, etc.), but I don't think it's a server timeout issue since the call only runs for about 10-20 seconds. The zip file generated contains roughly 400 files and is about 200 MB compressed.
My question is: since scaling up costs an extra $120 per month, is there a way to generate the zip file more efficiently so I don't need to scale up the server? Would it help at all to split the zip file creation into separate "create archive" and "add files" steps, using multiple calls rather than a single CreateFromDirectory(...)?
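By splitting it up, I mean something along the lines of the sketch below: open the archive once and add entries file by file with ZipFile.Open and CreateEntryFromFile, possibly with a lighter compression level. This is untested on my side, just the shape of the idea; sourcePath and outputPath are the same values passed to CreateFromDirectory.

using System.IO;
using System.IO.Compression;

static class ZipHelper
{
    // Sketch: build the archive entry by entry instead of in one CreateFromDirectory call.
    public static void CreateZipIncrementally(string sourcePath, string outputPath)
    {
        using (var archive = ZipFile.Open(outputPath, ZipArchiveMode.Create))
        {
            foreach (var file in Directory.EnumerateFiles(sourcePath, "*", SearchOption.AllDirectories))
            {
                // Entry names are relative to the source folder, as CreateFromDirectory does.
                var entryName = file.Substring(sourcePath.Length)
                                    .TrimStart(Path.DirectorySeparatorChar, Path.AltDirectorySeparatorChar);

                // Fastest trades compression ratio for lower CPU use.
                archive.CreateEntryFromFile(file, entryName, CompressionLevel.Fastest);
            }
        }
    }
}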
Other info:
This endpoint is not being hit by anyone but me
I can try to create multiple smaller zip files to see if this helps with resource consumption, but this is not ideal for my use case.
This is not the only thing running on this server which is why it's already scaled up so much
Just to clarify, the zip file does actually get created on the server: I can see a log entry that is written just before the response goes back to the user, and I can see the zip file in Kudu. But Azure intercepts the 200 API response for some reason and sends a 503 instead.
I have an HTTP PUT Web API method in my MVC application which receives files from the client side and puts them into server storage.
As the file size might be large, I am not loading the file into memory (to avoid out-of-memory exceptions); instead I am using MultipartFormDataStreamProvider to stream it into a temp folder and move it to its final destination later.
Everything works perfectly except for the fact that it doesn't upload files larger than 2,097,148 KB (roughly 2 GB).
Once I give it a file larger than that, it starts streaming into the temp folder and then stops once the file size reaches 2,097,148 KB.
I have the following attributes in my web.config file:
maxRequestLength="5097151",
requestLengthDiskThreshold="50971",
maxAllowedContentLength="4242880000".
Also, in IIS, I have set the "Maximum allowed content length (Bytes)" setting to 4242880000.
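For clarity, this is roughly how those attributes are laid out in web.config (a sketch using the values quoted above; the surrounding elements in my actual config may differ slightly). maxRequestLength and requestLengthDiskThreshold on httpRuntime are in KB, while maxAllowedContentLength on requestLimits is in bytes.

<system.web>
  <httpRuntime maxRequestLength="5097151" requestLengthDiskThreshold="50971" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="4242880000" />
    </requestFiltering>
  </security>
</system.webServer>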
Is there any other place which might cause this to happen?
Update
It seems that even under IIS 10 with .NET 4.6.1 the request is denied (400 Bad Request) even though all the limits are set to allow it.
Digging further, it seems that this has been rejected at Microsoft.
In .NET 4.0 and earlier there is a 2 GB limitation in ASP.NET; that was fixed in .NET 4.5. However, this fix makes little sense because IIS itself does not support file uploads over 2 GB.
The only way to upload files over 2 GB to an IIS-hosted server is to break the file into pieces and upload it piece by piece. Here are clients that can upload by breaking a file into segments:
IT Hit Ajax File Browser
Sample WebDAV Browser
Note that these clients require your server to support PUT with Range header.
Another solution is to create an HttpListener-based server. HttpListener has much less functionality compared to IIS, but it does not have any upload limitations.
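A bare-bones illustration of the HttpListener approach might look like the sketch below (hypothetical prefix and target folder, with no authentication, chunking, or error handling; the request body is streamed straight to disk, so the size is limited only by storage):

using System;
using System.IO;
using System.Net;

class UploadListener
{
    static void Main()
    {
        // Hypothetical endpoint; registering this prefix may require admin rights.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/upload/");
        listener.Start();

        while (true)
        {
            var context = listener.GetContext();

            // Stream the request body directly to a file, never buffering it in memory.
            var target = Path.Combine(@"C:\uploads", Path.GetRandomFileName());
            using (var file = File.Create(target))
            {
                context.Request.InputStream.CopyTo(file);
            }

            context.Response.StatusCode = (int)HttpStatusCode.Created;
            context.Response.Close();
        }
    }
}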
source
Scenario:
I have a client/server architecture. The client program captures multiple displays connected to a machine and saves each capture as a JPG file in a folder. The minimum rate is 5 images per second per display. The same folder is shared over the network.
I have a Windows service running on a server-grade machine which pulls each file as soon as it is created in the shared folder. These files are rendered in a browser via an img tag on an ASP.NET page, like a live stream. They are also used to make a video later.
Problem:
Once every 8-10 days I see a slowdown of the file copy process where the client machine stacks up more than 30,000 images in the folder but the server cannot pull them for some reason.
With the help of the Red Gate profiler I could see that only the file copy process was stuck and could not move the files. After some time the server's pull process caught up on all the lag and got back on track. To enumerate the files I am using the Fast Directory Enumerator; more info here: http://www.codeproject.com/Articles/38959/A-Faster-Directory-Enumerator
Initially we tried a push implementation where the client pushed the images to the server folder, but we hit a similar performance issue more frequently.
I confirmed that there was no network connectivity issue and that CPU utilization was low when the process lagged. I have also handled the case where a file is being accessed by another process and therefore cannot be moved.
What could be the reason for this delay?
Is there a better option for moving the files to the server?
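For reference, the pull-and-move step is conceptually doing something like the sketch below (folder names are placeholders; files that are still locked are skipped and retried on the next pass):

using System;
using System.IO;

static class ImagePuller
{
    // One pass over the shared folder: move every JPG that can currently be opened.
    public static void PullOnce(string sharedFolder, string serverFolder)
    {
        foreach (var source in Directory.EnumerateFiles(sharedFolder, "*.jpg"))
        {
            var destination = Path.Combine(serverFolder, Path.GetFileName(source));
            try
            {
                File.Move(source, destination);
            }
            catch (IOException)
            {
                // Still being written or locked by another process; try again next pass.
            }
        }
    }
}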
I have a .NET website with C# code-behind.
When I make code changes to the website on my local machine and copy the files (.dll and .aspx, via FTP through Windows Explorer) to the server (hosted by GoDaddy), the site will sometimes not come up without clearing the browser cache first. This happens in IE, Firefox, and Chrome.
Does anyone know why this would happen and how to fix the issue?
(FYI - not sure if it matters, but the website has a SQL database and the site is http://www.fonyfacts.com/)
Thanks for your help!
As soon as you upload a new DLL your website will recycle, and it can take anything from a few seconds to much longer to get going again - this is normal. This will also happen when certain other files are changed, such as web.config. Like Stanislav says: build locally and only upload when you're ready to run it.
I have an ASP.NET web application; the entire site is browsed over HTTPS using a valid commercial certificate. In one part of the application it is possible to download an Excel spreadsheet.
The download is initiated from a POST (a PostBack from a LinkButton).
The response is cleared and written out (Response.Clear(), Response.BinaryWrite(bytes[]), etc.), just like we've done in a thousand projects that all work fine, and the correct content headers are set. The only difference here is SSL, but I can't see how that's related. Yes, there are loads of links about cache headers that prevent IE from putting the file into Temporary Internet Files so that the relevant Office program can be launched to open it; I've read all those. I have verified the cache headers with Fiddler and LiveHeaders (a Firefox extension) and can confirm that "Cache-Control: private" is what's being sent in the response from both the production site and my local dev set-up.
If I set up an SSL certificate on my local IIS instance and run the project, I can open or save the exact same spreadsheet with no problems using IE (I know there's nothing wrong with the live production file because Firefox downloads it, no sweat, what a surprise!). However, from the production web server, IE6 says the remote host disconnected and IE7 just sits there downloading until the end of time (real helpful!). Gah, I'm tearing my hair out.
SSL, attachments, and IE are a horrible mix. This is a known bug (some call it a "feature") with IE and certain HTTP headers over SSL. Basically, if the browser is told not to store the file, it ends up being deleted before it can be handed to the user. You actually want to allow the browser to cache it.
Here is a Microsoft support article about it. And another.
Have you tried the Content-Disposition MIME header?
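For what it's worth, something along these lines is what usually behaves for IE over SSL: allow private caching and set Content-Disposition explicitly. This is a Web Forms sketch; BuildSpreadsheet and the file name are placeholders.

using System;
using System.Web;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void DownloadLink_Click(object sender, EventArgs e)
    {
        byte[] bytes = BuildSpreadsheet(); // placeholder for whatever builds the Excel file

        Response.Clear();
        Response.ContentType = "application/vnd.ms-excel";
        Response.AppendHeader("Content-Disposition", "attachment; filename=report.xls");

        // Let IE keep a private copy; no-cache/no-store over SSL is what breaks
        // the hand-off to Office in IE6/IE7.
        Response.Cache.SetCacheability(HttpCacheability.Private);

        Response.BinaryWrite(bytes);
        Response.End();
    }

    private byte[] BuildSpreadsheet()
    {
        // Placeholder; the real application generates the spreadsheet bytes here.
        return new byte[0];
    }
}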