.NET - DotNetZip Backup over the network slow - c#

I am creating backup software in C# for my organization.
I have a problem with the time it takes to back up my workstation to a shared folder on a server.
If I compress the files directly to the shared folder, with the temp file also created on the shared folder, the compression takes 3 minutes; but if I set the temp directory to a local folder on the workstation, it takes 2 minutes.
I tested the same job with another backup program, and its backup with the temp file created directly on the shared folder takes 2 minutes.
What is wrong with DotNetZip?

Without seeing any code, I would imagine that it is trying to stream the output zip file to the server backup location.
The result of this is that every byte written has to be acknowledged across the client/server connection.
When you write the file to your local system and then move it to the server location, you are performing a single transfer, as opposed to individual read/write operations for each segment of the file being written by the stream.
It's somewhat similar to how contiguous file operations are faster on SATA drives.
If you copy a single 3 GB file, you can attain really high speeds.
If you copy 3,000 files of 1 KB each, the write speed won't actually be that fast, because it's treated as 3,000 operations rather than the single operation that can run at full speed.
Do you know if the other backup programs save the backup locally before moving it?
I would imagine that they construct a temp file which is then moved server-side.
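If that is the cause, one simple workaround is to keep DotNetZip's temporary file (and the archive itself) on local disk and only copy the finished zip to the share. A minimal sketch, assuming the Ionic.Zip (DotNetZip) API and placeholder paths for the source folder and the share:

using System.IO;
using Ionic.Zip; // DotNetZip

string sourceFolder = @"C:\Data";                      // placeholder
string localTemp    = @"C:\Temp";                      // placeholder
string localZip     = @"C:\Temp\backup.zip";           // placeholder
string shareZip     = @"\\server\backups\backup.zip";  // placeholder

using (var zip = new ZipFile())
{
    // Keep DotNetZip's intermediate temp file on the local disk...
    zip.TempFileFolder = localTemp;
    zip.AddDirectory(sourceFolder);
    // ...and write the finished archive locally as well.
    zip.Save(localZip);
}

// One sequential copy of the finished archive to the network share.
File.Copy(localZip, shareZip, true);
File.Delete(localZip);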

Related

C#, can I save a file from a stream or other to my single exe?

I would like to take a serialized file and save it to the resources folder in my project.
My reason for doing this (maybe there's a better way) is that I have a published single executable file for the program, and when it creates a serialized file I don't want it saved to the desktop. I need to somehow save it into my exe without going outside of it.
Any advice on how I could do this?
It's very ugly... but you could use an "alternate data stream" on an NTFS file system.
http://ntfs.com/ntfs-multiple.htm
https://learn.microsoft.com/en-us/sysinternals/downloads/streams
How to read and modify NTFS Alternate Data Streams using .NET
https://blogs.msmvps.com/bsonnino/2016/11/24/alternate-data-streams-in-c/
https://oddvar.moe/2018/04/11/putting-data-in-alternate-data-streams-and-how-to-execute-it-part-2/
https://blog.foldersecurityviewer.com/ntfs-alternate-data-streams-the-good-and-the-bad/
https://www.irongeek.com/i.php?page=security/altds
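For completeness, here is a minimal sketch of writing and reading an alternate data stream from C#. It assumes modern .NET (Core 3.x / .NET 5+), where FileStream accepts a colon-separated "file:stream" path; on .NET Framework you would need to P/Invoke CreateFile, as the posts above show. The file and stream names are placeholders.

using System;
using System.IO;
using System.Text;

string host = @"C:\app\MyApp.exe";   // the host file (placeholder path)
string ads  = host + ":saveddata";   // "<file>:<streamname>" addresses the alternate stream

// Write the serialized payload into the alternate stream.
byte[] payload = Encoding.UTF8.GetBytes("serialized state goes here");
using (var fs = new FileStream(ads, FileMode.Create, FileAccess.Write))
    fs.Write(payload, 0, payload.Length);

// Read it back later.
using (var fs = new FileStream(ads, FileMode.Open, FileAccess.Read))
using (var reader = new StreamReader(fs))
    Console.WriteLine(reader.ReadToEnd());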
You'll probably have security scanners stopping you from doing it.
In addition, if you copy the file from an NTFS volume to, say, FAT, then alternate data streams are lost.
Also, some backup software may not back up ADS properly.
https://wiki.sep.de/wiki/index.php/Support_for_NTFS_alternate_data_streams_(ADS)_for_Windows
https://www.2brightsparks.com/resources/articles/ntfs-alternate-data-stream-ads.html
https://community.osr.com/discussion/89308/alternate-data-streams-and-backups
https://social.technet.microsoft.com/Forums/Azure/en-US/007d5442-1cd8-4293-b717-b8fa72606189/ntfs-data-streams-broken-by-design-on-file-copy?forum=winserverfiles

Editing a file in Azure storage

I am using the Azure Storage File Shares client library for .NET in order to save files in the cloud, read them, and so on. I have a file saved in the storage which is supposed to be updated every time I perform a specific action in my code.
The way I'm doing it now is by downloading the file from the storage using
ShareFileDownloadInfo download = file.Download();
And then I edit the file locally and upload it back to the storage.
The problem is that the file can be updated frequently, which means lots of downloads and uploads of a file that keeps growing in size.
Is there a better way of editing a file on Azure storage? Maybe some way to edit the file directly in the storage without the need to download it before editing?
Downloading and uploading the file is the correct way to make edits with the way you are currently handling the data. If you find yourself doing this often, there are some strategies you could use to reduce traffic:
If you are the only one editing the file you could cache a copy of it locally and upload the updates to that copy instead of downloading it each time.
Cache pending updates and only update the file at regular intervals instead of with each change (see the sketch after this list).
Break the single file up into multiple time-boxed files, say one per hour. This wouldn't help with frequency but it can with size.
FYI, when pushing logs to storage, many Azure services use a combination of #2 and #3 to minimize traffic.
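A minimal sketch of strategy #2, assuming the Azure.Storage.Files.Shares client shown in the question; the connection string, share/file names, and flush threshold are placeholders. Pending changes are buffered in memory and the file in the share is only rewritten when the buffer is flushed:

using System.IO;
using System.Text;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;

var file = new ShareFileClient("<connection-string>", "myshare", "data/state.txt");

var pending = new StringBuilder();
int pendingCount = 0;

void Append(string line)
{
    pending.AppendLine(line);
    // Only touch the storage account every 100 updates (or do this on a timer).
    if (++pendingCount >= 100)
        Flush();
}

void Flush()
{
    if (pendingCount == 0) return;

    // Download the current contents, append the buffered updates, re-upload once.
    using var current = new MemoryStream();
    ShareFileDownloadInfo download = file.Download();
    download.Content.CopyTo(current);

    byte[] extra = Encoding.UTF8.GetBytes(pending.ToString());
    current.Write(extra, 0, extra.Length);
    current.Position = 0;

    file.Create(current.Length);   // Azure Files needs the final size up front
    file.Upload(current);

    pending.Clear();
    pendingCount = 0;
}

// Example: record one update (the share is only touched on every 100th call).
Append("user performed the action");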

Downloading and running a JAR from memory in .NET

I have a C# application where I need to download and run a JAR file without it being saved to disk. Is this possible in C#? I can download files to disk via WebClient just fine (which is what I'm doing as of posting) and launch the JAR via a batch script which is saved and then deleted, but I want to take it a step further by not having anything touch the drive.
Thanks.
You could write a special Java class loader that loads classes via interprocess communication (IPC) from the .NET process instead of from a file. In addition, you'll need a small launcher JAR that first installs the class loader and then executes the JAR retrieved via IPC. And you'll need to implement the server part of the IPC communication in your .NET application.
But is it worth it? What would be the benefit of such a complex piece of software?
A JAR file needs to be executed by javaw.exe, the JVM. Another process doesn't have the power to reach into your virtual memory space and read the file data, not unless both processes co-operate and use a shared memory section. You get no such co-operation from javaw.exe; it requires a file.
This is a non-issue on Windows, since javaw.exe will in practice read the file from memory. When you write the file in your program, you actually write to the file system cache, which buffers file data in RAM and delay-writes it to the disk. As long as the file isn't too big (a gigabyte or more), you don't wait too long to start Java (minutes), and the machine has enough RAM (a gigabyte or two), javaw.exe reads the file from the file system cache: RAM, not disk. Exact same idea as a RAM disk of old.
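A minimal sketch of that approach, assuming WebClient as in the question; the URL is a placeholder. The JAR is written to a temp file (which, per the above, will usually still be sitting in the file system cache when javaw.exe reads it) and deleted once the process exits:

using System;
using System.Diagnostics;
using System.IO;
using System.Net;

string jarUrl  = "https://example.com/app.jar";   // placeholder URL
string jarPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".jar");

using (var client = new WebClient())
    client.DownloadFile(jarUrl, jarPath);          // lands in the file system cache first

var psi = new ProcessStartInfo("javaw", $"-jar \"{jarPath}\"")
{
    UseShellExecute = false
};

using (var proc = Process.Start(psi))
{
    proc.WaitForExit();
}

File.Delete(jarPath);                              // clean up once the JVM is done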

File Transfer with Maximum Speed on LAN

Almost all file transfer software (NetSupport, Radmin, PcAnywhere, ...), and also the different code I have used in my application, slows down when you send a lot of small files (< 1 KB each), like the folder of a game that contains a lot of files.
For example, on a LAN (Ethernet, CAT5 cables), if I send a single file, say a video, the transfer rate is between 2 MB/s and 9 MB/s,
but when I send a folder of a game that has a lot of files, the transfer rate is about 300-800 KB/s.
My guess is that it's because of the way a file is sent:
Send File Info [file_path,file_Size].
Send file bytes [loop till end of the file].
End Transfer [ensure it received completely].
But when you use the regular Windows copy-paste on a shared folder on the network, the transfer rate for a folder is always as fast as for a single file.
So I'm trying to develop a file transfer application using a WCF service (C# 4.0) that would use the maximum speed available on the LAN, and I'm thinking about doing it this way:
Get all files from the folder.

if (FileSize < 1 MB)
{
    // create an additional thread to send the small file
    SendFile(FilePath);
}
else
{
    // fileSize > 1 MB: wait for the large file to be sent
}

void SendFile(string path) // a regular single-file send
{
    SendFileInfo;
    Open socket and wait for the server application to connect;
    SendFileBytes;
    Dispose;
}
But I'm confused about using more than one socket for a file transfer, because that will use more ports and more time (the delay of listening and accepting).
So is it a good idea to do it?
I need an explanation of whether it's possible, how to do it, and whether there is a protocol better suited to this than TCP.
Thanks in advance.
It should be noted that you won't ever achieve 100% LAN speed usage - I hope you're not expecting that - there are too many factors involved.
In response to your comment as well, you can't reach the same level that the OS uses to transfer files, because you're a lot further away from the bare metal than windows is. I believe file copying in Windows is only a layer or two above the drivers themselves (possibly even within the filesystem driver) - in a WCF service you're a lot further away!
The simplest thing for you to do will be to package multiple files into archives and transmit them that way, then at the receiving end you unpack the complete package into the target folder. Sure, some of those files might already be compressed and so won't benefit - but in general you should see a big improvement. For rock-solid compression in which you can preserve directory structure, I'd consider using SharpZipLib (there is a short sketch after this advice).
A system that uses compression intelligently (probably medium-level, low CPU usage but which will work well on 'compressible' files) might match or possibly outperform OS copying. Windows doesn't use this method because it's hopeless for fault-tolerance. In the OS, a transfer halted half way through a file will still leave any successful files in place. If the transfer itself is compressed and interrupted, everything is lost and has to be started again.
Beyond that, you can consider the following:
Get it working using compression by default first, before trying any enhancements. In some cases (depending on the size and number of files) you may be able to simply compress the whole folder and transmit it in one go. Beyond a certain size, however, this might take too long, so you'll want to create a series of smaller zips.
Write the compressed file to a temporary location on disk as it's being received, don't buffer the whole thing in memory. Delete the file once you've then unpacked it into the target folder.
Consider adding the ability to mark certain file types as being able to be sent 'naked', i.e. uncompressed. That way you can exclude .zip, .avi, etc. files from the compression process. That said, a folder with a million 1 KB zip files will clearly benefit from being packed into one single archive - so perhaps give yourself the ability to set a minimum size below which even those files will still be packed into a compressed folder (or perhaps a file count/size-on-disk ratio for a folder itself, including sub-folders).
Beyond this advice you will need to play around to get the best results.
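A minimal sketch of the pack-and-send idea using SharpZipLib's FastZip helper, as suggested above. The paths are placeholders and the actual transmission over your WCF/socket channel is out of scope; the point is just the compress-to-temp / transmit / unpack-and-delete flow:

using System.IO;
using ICSharpCode.SharpZipLib.Zip;

// Sender: pack the whole folder (recursively) into one temporary archive.
string sourceFolder = @"C:\Games\MyGame";                          // placeholder
string tempZip = Path.Combine(Path.GetTempPath(), "transfer.zip");
new FastZip().CreateZip(tempZip, sourceFolder, true, null);        // null filter = all files

// ... transmit tempZip as one large file ...

// Receiver: write the incoming archive to a temp location, then unpack and delete it.
string receivedZip  = @"C:\Temp\transfer.zip";                     // placeholder
string targetFolder = @"C:\Games\MyGame";                          // placeholder
new FastZip().ExtractZip(receivedZip, targetFolder, null);         // null filter = all files
File.Delete(receivedZip);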
Perhaps an easy solution would be gathering all the files together into one big stream (like zipping them, but just appending, to make it fast) and sending that single stream. This would give more speed, but it will use some CPU on both devices, and you need a good way to separate the files again in the stream (a sketch of one such format follows).
But using more ports would, from what I know, only be a disadvantage, since the different streams would collide and the overall speed would go down.
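One simple way to separate the files again is a length-prefixed framing: for each file, write its relative path and byte count before its content. A minimal sketch (this framing is only an illustration, not a standard protocol; it assumes the folder path has no trailing separator):

using System.IO;
using System.Text;

// Pack: write [relative path][length][bytes] for every file into one output stream.
static void PackFolder(string folder, Stream output)
{
    var writer = new BinaryWriter(output, Encoding.UTF8);
    foreach (string file in Directory.GetFiles(folder, "*", SearchOption.AllDirectories))
    {
        byte[] bytes = File.ReadAllBytes(file);
        writer.Write(file.Substring(folder.Length + 1)); // relative path (length-prefixed string)
        writer.Write(bytes.Length);
        writer.Write(bytes);
    }
    writer.Write(string.Empty);  // empty path marks the end of the stream
    writer.Flush();
}

// Unpack: read entries back until the end marker.
static void UnpackFolder(Stream input, string targetFolder)
{
    var reader = new BinaryReader(input, Encoding.UTF8);
    while (true)
    {
        string relativePath = reader.ReadString();
        if (relativePath.Length == 0) break;             // end marker

        int length = reader.ReadInt32();
        byte[] bytes = reader.ReadBytes(length);

        string destination = Path.Combine(targetFolder, relativePath);
        Directory.CreateDirectory(Path.GetDirectoryName(destination));
        File.WriteAllBytes(destination, bytes);
    }
}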

Best process for auto-zipping of multiple MP3s

I've got a project which requires a fairly complicated process and I want to make sure I know the best way to do this. I'm using ASP.net C# with Adobe Flex 3. The app server is Mosso (cloud server) and the file storage server is Amazon S3. The existing site can be viewed at NoiseTrade.com
I need to do this:
Allow users to upload MP3 files to an album "widget".
After the user has uploaded their album/widget, I need to automatically zip the MP3s (for other users to download) and upload the zip along with the MP3 tracks to Amazon S3.
I actually have this working already (using client-side processing in Flex), but this no longer works because of Adobe's Flash 10 "security" update. So now I need to implement this server-side.
The way I am thinking of doing this is:
Store the MP3s in a temporary folder on the app server.
When the artist "publishes", create a zip of the files in that folder using a C# library.
Start the Amazon S3 upload process (zip and MP3s), email the user when it is finished, and delete the temporary folder.
The major problem I see with this approach is that if a user deletes or adds a track later on, I'll have to update the zip file, but the temporary files will no longer exist.
I'm at a loss at the best way to do this and would appreciate any advice you might have.
Thanks!
The bit about updating the zip but not having the temporary files if the user adds or removes a track leads me to suspect that you want to build zips containing multiple tracks, possibly complete albums. If this is incorrect and you're just putting a single mp3 into each zip, then StingyJack is right and you'll probably end up making the file (slightly) larger rather than smaller by zipping it.
If my interpretation is correct, then you're in luck. Command-line zip tools frequently have flags which can be used to add files to or delete files from an existing zip archive. You have not stated which library or other method you're using to do the zipping, but I expect that it probably has this capability as well.
MP3s are already compressed. Why bother zipping them?
I would say it is not necessary to zip an already compressed file format; you are only going to get a five percent reduction in file size, give or take a little. MP3s don't really zip up well, because by their nature they have already compressed most of the data as far as it will go.
DotNetZip can zip up files from C#/ASP.NET. I concur with the prior posters regarding the compressibility of MP3s. DotNetZip will automatically skip compression on MP3 and just store the file, for exactly this reason. It may still be worthwhile to use a zip as a packaging/archive container, aside from the compression.
If you change the zip file later (a user adds a track), you could grab the .zip file from S3 and just update it. DotNetZip can update zip files, too, but in this case you would have to pay for the transfer cost into and out of S3.
DotNetZip can do all of this with in-memory handling of the zips - though that may not be feasible for large archives with lots of MP3s and lots of concurrent users.
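A minimal sketch of that create-then-update flow with DotNetZip (Ionic.Zip); the paths and entry names are placeholders, and the S3 transfer itself is left out:

using Ionic.Zip; // DotNetZip

// Create the album zip from the temporary upload folder.
using (var zip = new ZipFile())
{
    zip.AddDirectory(@"C:\Temp\album-123");    // MP3s are stored, not recompressed
    zip.Save(@"C:\Temp\album-123.zip");
}
// ... upload album-123.zip and the individual MP3s to S3 ...

// Later, after pulling the zip back from S3, update it in place and re-upload.
using (var zip = ZipFile.Read(@"C:\Temp\album-123.zip"))
{
    zip.RemoveEntry("track-04.mp3");            // the user deleted a track
    zip.AddFile(@"C:\Temp\new-track.mp3", "");  // the user added one (root of the archive)
    zip.Save();
}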
