How would I show the progress of compression in SharpZipLib?
I'm developing a small application that zips many files into a single zip file.
It may take a while to finish, so I'd like a progress bar that shows the progress of the compression. Is there a way to know how much has been compressed in SharpZipLib?
Yes, you can see how much has been compressed by looking at the size of the output stream, but that is not enough to show a progress bar: you would also need to know how big the output stream will be at the end, and of course you can't know that in advance.
Instead, you can measure progress as you zip individual files, proportionally by file size: each file moves the progress percentage by (size_of_file / total_size_of_all_files) * 100.
For example, let's say that you have 3 files:
file1.bin 1000 KB
file2.bin 500 KB
file3.bin 200 KB
After the first file is compressed, move the progress to 59%; after the second, move it by 29% to 88%; and after the third, to 100%.
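A minimal sketch of that approach with SharpZipLib's ZipOutputStream, assuming a reportProgress callback (e.g. a BackgroundWorker's ReportProgress) and a list of input files:

// requires: using System; using System.IO; using System.Linq;
//           using ICSharpCode.SharpZipLib.Zip;
static void ZipWithProgress(string[] files, string zipPath, Action<int> reportProgress)
{
    long totalBytes = files.Sum(f => new FileInfo(f).Length);
    long doneBytes = 0;

    using (var zipStream = new ZipOutputStream(File.Create(zipPath)))
    {
        zipStream.SetLevel(6); // 0 = store, 9 = best compression

        foreach (var file in files)
        {
            zipStream.PutNextEntry(new ZipEntry(Path.GetFileName(file)));
            using (var input = File.OpenRead(file))
            {
                input.CopyTo(zipStream);
            }
            zipStream.CloseEntry();

            doneBytes += new FileInfo(file).Length;
            reportProgress((int)(doneBytes * 100 / totalBytes)); // per-file, proportional to size
        }
    }
}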
If you use DotNetZip, there is a SaveProgress event that tells you how many bytes it has compressed.
There are code examples in the DotNetZip SDK showing how to use it.
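Wiring up SaveProgress looks roughly like this (a sketch from memory, so treat the exact event type and property names as assumptions and check the SDK samples; the event args also expose per-entry byte counts if you want finer-grained progress):

// requires: using Ionic.Zip;
using (var zip = new ZipFile())
{
    zip.AddDirectory(@"C:\folder\to\zip"); // example input
    zip.SaveProgress += (sender, e) =>
    {
        if (e.EventType == ZipProgressEventType.Saving_AfterWriteEntry)
        {
            int percent = (int)(100.0 * e.EntriesSaved / e.EntriesTotal);
            // update your progress bar here
        }
    };
    zip.Save(@"C:\output.zip");
}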
Related
I'm making a project using C# 2013 and Windows Forms, and this project will use an IP camera to display video for a long time using CGI commands.
I know from the articles I've read that the streaming video returned by the IP camera is a continuous multi-part stream, and I found some samples that display the video, like this one: Writing an IP Camera Viewer in C# 5.0.
But I see that it takes a lot of code to extract the single part that represents one image, display it, and so on.
I also tried taking continuous snapshots from the camera using the following code:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
HttpWebResponse res = (HttpWebResponse)req.GetResponse();
Stream strm = res.GetResponseStream();
image.Image = Image.FromStream(strm);
I repeated this code in a loop that runs for one second and counts the number of snapshots taken; it gives me between 88 and 114 snapshots per second.
IMHO the first example, which displays the video, does a lot of processing to extract each single part of the multi-part response and display it, which may be as slow as the other method of taking continuous snapshots.
So I'm asking for other developers' experience with this issue: do you see any other differences between the two methods of displaying the video? I'd also like to know the effect of receiving a continuous multi-part stream on memory: is it safe, or will it generate out-of-memory errors?
Thanks in advance
If you are taking more than one JPEG every 1-3 seconds, you are better off capturing the H.264 video stream; it will use less bandwidth and CPU.
Usually an MJPEG stream is 10-20 times bigger than the equivalent H.264 stream, so 80 snapshots per second is a really large amount of data.
As long as you dispose of the image and stream correctly, you should not have memory issues. I have done a similar thing in the past with an IP camera, even converting all the snapshot images back into a video using ffmpeg (I think it was).
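For example, a snapshot loop along these lines keeps memory flat because each response, stream, and image is disposed per frame (image is the PictureBox from your code; keepRunning is a placeholder for your own stop flag):

// requires: using System.Drawing; using System.IO; using System.Net;
while (keepRunning)
{
    var request = (HttpWebRequest)WebRequest.Create("http://192.168.1.200/snap1080");
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var stream = response.GetResponseStream())
    using (var frame = Image.FromStream(stream))
    {
        var previous = image.Image;
        image.Image = new Bitmap(frame); // copy, so the stream-backed image can be disposed here
        if (previous != null)
            previous.Dispose();          // release the frame that was displayed before
    }
}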
I'm looking to add a progress bar to a file upload to a WebService.
I just started working on a WinForms application that I believe uses WCF to allow the client to upload a document to our corporate repository.
I'm using an UploadService to which I pass a multi-part stream consisting of metadata and a file. I've already taken care of building this part.
I'm not quite sure how to hook "something" to the stream so I can track it being uploaded.
I've seen some people use a BackgroundWorker to track the progress of a task asynchronously, but I can't seem to find an example of someone doing this to track a file being uploaded to a web service; I can only find examples of tracking the stream being built up in memory.
Any advice/help is appreciated.
Thank you!
(I'm an intern, so if I mis-explained things, I apologize. I'd be happy to provide clearer details if necessary)
edit: From what I can tell, the method that uploads the stream only takes a stream in; there's no option to hand it the size of the stream or how many bytes to read at a time.
Assuming you know the size of the file (if it's local, you most likely do), you're probably accessing it off disk as a stream and then copying it over to the upload stream.
If you're doing it in chunks (e.g. with a buffer), then you can calculate the progress:
// round up so a final partial chunk is still counted
var totalNumberOfChunks = (fileSize + chunkSize - 1) / chunkSize;

for (var chunk = 0; chunk < totalNumberOfChunks; chunk++)
{
    // assuming you have read the chunk into a byte array
    // and have already sent it up to the server
    var progress = ((double)(chunk + 1) / totalNumberOfChunks) * 100;
    // do something to surface this progress
}
Essentially you just want to work out how many separate chunks of data you're sending, and then as you send them calculate how far along you are and surface the progress somehow.
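Put together with an actual read loop, that might look something like this (fileStream is the local file and uploadStream stands in for whatever stream your service call writes to; both are assumptions about your setup):

// requires: using System.IO;
const int chunkSize = 64 * 1024;
var buffer = new byte[chunkSize];
long fileSize = fileStream.Length;
long sentBytes = 0;
int read;

while ((read = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
    uploadStream.Write(buffer, 0, read);
    sentBytes += read;
    var progress = (int)((double)sentBytes / fileSize * 100);
    // e.g. backgroundWorker.ReportProgress(progress);
}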
Of course, there are sometimes ways to have this done for you: Getting the upload progress during file upload using Webclient.Uploadfile
I am using the AviFile library to make an AVI video from bitmaps coming from a Kinect. The file size gets really large, and once it is over 2 GB I cannot open the files anymore. I will have to compress these files. Does anyone know a tool for compressing them, or a better library than AviFile?
Kind regards
Alexander Ziegler
OK, I have found a workaround. The biggest problem was handling the huge number of images coming from the Kinect: with compression it was not possible to write them to a file (out of memory). So during recording I just write them to a file as raw bytes; after recording I read them back, compress them, and save them to an AVI (the files get much smaller, about 100 MB). Thanks for every comment.
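The recording side of that workaround can be as simple as appending each frame's raw pixel bytes to one file and doing the compression afterwards; a rough sketch (the frame-byte source is an assumption, not part of the AviFile library):

// requires: using System; using System.IO;
class RawFrameRecorder : IDisposable
{
    private readonly FileStream _output;

    public RawFrameRecorder(string path)
    {
        _output = new FileStream(path, FileMode.Create, FileAccess.Write);
    }

    // frameBytes: the frame's raw pixel data, e.g. copied out of the Kinect color frame
    public void WriteFrame(byte[] frameBytes)
    {
        _output.Write(frameBytes, 0, frameBytes.Length);
    }

    public void Dispose()
    {
        _output.Dispose();
    }
}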
When the Xbox 360 console formats a 1 GB USB device, it adds 978 MB of data to it in just 20 seconds. I can see the files on the USB drive, and they are that size.
When I copy a file of the same length in Windows, it takes 6 minutes.
Maybe it is because Windows reads and writes, while the 360 just writes?
Is there a way to create large files like that on a USB drive with that kind of performance? The files can be blank, of course. I need this write performance for my application.
Most of the command-line tools I have tried have not shown any noticeable performance gains.
It would appear that the 360 is allocating space for the file and writing some data to it, but is otherwise leaving the rest of the file filled with whatever data was there originally (so-called "garbage data"). When you copy a file of the same size to the drive, you are writing all 978 MB of it, which is a different scenario and is why it takes so much longer.
Most likely the 360 is not sending 978 MB of data to the USB stick but is instead creating an empty file of size 978 MB. Yours takes longer because, rather than simply sending a few KB to alter the file system information, you are actually sending 978 MB of data to the device.
You can do something similar (create an empty file of fixed size) on Windows with fsutil or the Sysinternals contig tool: see Quickly create large file on a windows system? Try this and you'll see that it can take much less than 20 seconds (I would guess the 360 is sending some data as well as reserving space for more). Note that one of the answers shows how to use the Windows API to do the same thing, as well as a Python script.
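In C#, the equivalent trick is to set the file length instead of writing data; how close to instant this is depends on the file system, so treat this as a sketch to benchmark rather than a guarantee:

// requires: using System.IO;
// creates a ~978 MB file without writing 978 MB of data; the path is an example
using (var fs = new FileStream(@"E:\placeholder.bin", FileMode.CreateNew, FileAccess.Write))
{
    fs.SetLength(978L * 1024 * 1024);
}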
Could it be that the 360 is just doing some direct file system header manipulation? If a blank file is fine for you, maybe you could try that.
It all depends on the throughput of the USB drive. You will need a high-end drive, such as one from this list.
In my application, the user selects a big file (>100 MB) on their drive. I want the program to take the selected file and chop it into archived parts of 100 MB or less. How can this be done? What libraries and file format should I use? Could you give me some sample code? After the first 100 MB archived part is created, I'm going to upload it to a server, then upload the next 100 MB part, and so on until the upload is finished. After that, from another computer, I will download all these archived parts and join them back into the original file. Is this possible with the 7-Zip libraries, for example? Thanks!
UPDATE: From the first answer, I think I'm going to use SevenZipSharp, and I believe I now understand how to split a file into 100 MB archived parts, but I still have two questions:
Is it possible to create the first 100 MB archived part and upload it before creating the next 100 MB part?
How do you extract a file with SevenZipSharp from multiple split archives?
UPDATE #2: I was just playing around with the 7-Zip GUI, creating multi-volume/split archives, and I found that selecting the first one and extracting from it extracts the whole file from all of the split archives. This leads me to believe that the paths to the subsequent parts are included in the first one (or is it just that they are consecutive?). However, I'm not sure whether this works directly from the console, but I will try that now and see if it answers question #2 from the first update.
Take a look at SevenZipSharp; you can use it to create your split 7z files, do whatever you want to upload them, then extract them on the server side.
To split the archive, look at the SevenZipCompressor.CustomParameters member, passing in "v100m". (You can find more parameters in the 7-zip.chm file from 7-Zip.)
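Roughly, that might look like the following; the key/value form of the volume switch varies between SevenZipSharp builds, so treat that line as an assumption and check the 7-zip.chm switch reference mentioned above:

// requires: using SevenZip;
var compressor = new SevenZipCompressor();
compressor.ArchiveFormat = OutArchiveFormat.SevenZip;
compressor.CustomParameters.Add("v", "100m"); // split into 100 MB volumes (assumed key/value form)
compressor.CompressFiles(@"C:\out\big.7z", @"C:\in\big.dat"); // example paths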
You can split the data into 100 MB "packets" first and then pass each packet into the compressor in turn, pretending they are just separate files.
However, this sort of compression is usually stream-based. As long as the library you are using does its I/O via a Stream-derived class, it would be pretty simple to implement your own Stream that "packetises" the data any way you like on the fly: as data is passed into your Write() method, you write it to a file; when you exceed 100 MB in that file, you simply close it, open a new one, and continue writing.
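A bare-bones sketch of such a packetising stream (part naming, the fixed size limit, and the missing upload hook are all simplifications):

// requires: using System; using System.IO;
class SplittingStream : Stream
{
    private readonly string _basePath;
    private readonly long _partSize;
    private FileStream _current;
    private int _partIndex;
    private long _writtenInPart;

    public SplittingStream(string basePath, long partSize)
    {
        _basePath = basePath;
        _partSize = partSize;
        OpenNextPart();
    }

    private void OpenNextPart()
    {
        if (_current != null)
            _current.Dispose();
        _partIndex++;
        _writtenInPart = 0;
        _current = File.Create(string.Format("{0}.part{1:D3}", _basePath, _partIndex));
        // a real implementation could raise an event here so the finished
        // part can start uploading while compression continues
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        while (count > 0)
        {
            if (_writtenInPart == _partSize)
                OpenNextPart();

            int toWrite = (int)Math.Min(count, _partSize - _writtenInPart);
            _current.Write(buffer, offset, toWrite);
            _writtenInPart += toWrite;
            offset += toWrite;
            count -= toWrite;
        }
    }

    public override void Flush() { _current.Flush(); }
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }

    protected override void Dispose(bool disposing)
    {
        if (disposing && _current != null)
            _current.Dispose();
        base.Dispose(disposing);
    }
}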
Either of these approaches would allow you to easily upload one "packet" while continuing to compress the next.
edit
Just to be clear: decompression is just the reverse sequence of the above, so once you've got the compression code working, decompression will be easy.