I need to transfer a big file from PC to PPC (Pocket PC) but I can't use RAPI.
I need to:
Convert a big .sdf file to a binary format
Transfer it to the PPC (through a web service)
On the PPC, convert the binary back to .sdf
The problem is that on the PPC I get an "out of memory" exception with a big file. With a small file it works excellently.
What can I do? Maybe I can send the file in slices?
What Conrad said: you need to send the file in smaller pieces; how small depends on what kind of PPC you have. That could be as small as a few MB at a time, which you then write to either a storage card or main memory.
I personally use FTP to transfer "big" files (ranging in the hundreds of MB) to my PPC, but I'm sure you can use any other solution you desire. You do need to research the limits of your target PPC, though; these devices often have limited memory, storage and CPU.
There is also a built-in HTTP downloader class in the .NET Compact Framework you can use.
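If you do go the slicing route, here is a minimal sketch of the idea; UploadChunk is a hypothetical web-service method, and the slice size and storage-card path are placeholders you would tune for your device:

    // Sender (PC): read the file in small slices instead of one huge byte[]
    const int SliceSize = 64 * 1024; // 64 KB per call; tune for your device
    using (var fs = File.OpenRead(@"C:\data\big.bin"))
    {
        var buffer = new byte[SliceSize];
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            service.UploadChunk("big.bin", buffer, read); // hypothetical service method
    }

    // Receiver (PPC): append each slice as it arrives, never holding the whole file
    void UploadChunk(string name, byte[] data, int count)
    {
        using (var fs = new FileStream(@"\Storage Card\" + name, FileMode.Append))
            fs.Write(data, 0, count);
    }

This way neither side ever needs more than one slice in memory at a time.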
I have a P2P app for file transfers over TCP. I am trying to send a DLL file and use reflection to read classes from it via Assembly.LoadFile("file directory"), and I get the good old BadImageFormatException.
(exception stack trace omitted)
When I run the same file generated on this PC it works smoothly, but after I transfer it using the P2P app it doesn't work.
I have tested files both bigger and smaller than the file in question, and they transfer properly with no missing bytes.
To create the file I'm using File.WriteAllBytes() with the bytes received from the other peers.
The other files I tested were in .txt and .xml formats, and they worked fine.
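One way to verify the transfer beyond comparing file sizes is to hash the DLL on both peers and compare the results (a sketch; the helper below and the choice of SHA-256 are illustrative):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var fs = File.OpenRead(path))
            return BitConverter.ToString(sha.ComputeHash(fs));
    }

    // Compare HashFile(sourceDll) on the sender with HashFile(receivedDll)
    // on the receiver; identical hashes mean the bytes survived the transfer
    // intact, so the problem lies elsewhere.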
I have a C# application where I need to download and run a JAR file without it ever being saved to disk. Is this possible in C#? I can download files via WebClient to disk just fine (which is what I'm doing as of posting) and launch the JAR via a batch script that is saved and then deleted, but I want to take it a step further by not having anything touch the drive.
Thanks.
You could write a special Java class loader that loads classes via interprocess communication (IPC) from the .NET process instead of from a file. In addition, you'll need a small launcher JAR that first installs the class loader and then executes the JAR retrieved via IPC. And you'll need to implement the server part of the IPC communication in your .NET application.
But is it worth it? What would be the benefit of such a complex piece of software?
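If you do attempt it, the .NET side of that IPC could be as simple as serving the JAR bytes over a named pipe (a sketch; the pipe name "jarpipe", the URL and the single-shot framing are assumptions, not a prescribed design):

    using System.IO.Pipes;
    using System.Net;

    // Download the JAR into memory, then serve the bytes to the Java launcher.
    byte[] jarBytes = new WebClient().DownloadData("http://example.com/app.jar");
    using (var pipe = new NamedPipeServerStream("jarpipe", PipeDirection.Out))
    {
        pipe.WaitForConnection();                  // the launcher JAR connects here
        pipe.Write(jarBytes, 0, jarBytes.Length);  // custom class loader reads these bytes
    }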
A JAR file needs to be executed by javaw.exe, the JVM. Another process doesn't have the power to reach into your virtual memory space and read the file data, not unless both processes co-operate and use a shared memory section. You get no such co-operation from javaw.exe; it requires a file.
This is a non-issue on Windows, since javaw.exe will actually read the file from memory. When you write the file in your program, you actually write to the file system cache, which buffers file data in RAM and lazily writes it to disk. As long as the file isn't too big (a gigabyte or more), you don't wait too long to start Java (minutes), and the machine has enough RAM (a gigabyte or two), javaw.exe reads the file from the file system cache: RAM, not disk. Exact same idea as the RAM disks of old.
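In other words, something as plain as this (a sketch; the paths are placeholders) already gets the benefit of the cache, because the JVM starts before the cache is flushed:

    using System.Diagnostics;
    using System.IO;

    // jarBytes = the JAR you downloaded earlier (e.g. via WebClient).
    // The write lands in the file system cache (RAM); starting javaw.exe
    // promptly means the JVM reads the JAR back from that cache, not the disk.
    File.WriteAllBytes(@"C:\temp\app.jar", jarBytes);
    Process.Start("javaw.exe", @"-jar C:\temp\app.jar");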
Almost all file transfer software (NetSupport, Radmin, pcAnywhere...), and also the various pieces of code I have used in my application, slow down when you send a lot of small files (< 1 KB each), such as a game folder that contains a lot of files.
For example, on a LAN (Ethernet, Cat 5 cables), if I send a single file, say a video, the transfer rate is between 2 MB/s and 9 MB/s,
but when I send a game folder with a lot of files, the transfer rate is about 300-800 KB/s.
My guess is that it's because of the way a file is sent:
Send the file info [file_path, file_size].
Send the file bytes [loop till end of the file].
End the transfer [ensure it was received completely].
But when you use the regular Windows copy-paste on a shared folder on the network, the transfer rate for a folder is always fast, like sending a single file.
So I'm trying to develop a file transfer application using a WCF service (C# 4.0) that would use the maximum speed available on the LAN, and I'm thinking along these lines:
Get all files from the folder, then:

    foreach (var path in Directory.GetFiles(folder))
    {
        var file = path; // local copy so the closure doesn't capture the loop variable (C# 4.0)
        if (new FileInfo(file).Length < 1024 * 1024)
            new Thread(() => SendFile(file)).Start(); // small file: send on an additional thread
        else
            SendFile(file);                           // large file (> 1 MB): send it and wait
    }

    void SendFile(string path) // a regular single-file send
    {
        // send the file info (path, size);
        // open a socket and wait for the server application to connect;
        // send the file bytes;
        // dispose;
    }
But I'm confused about using more than one socket for a file transfer, because that will use more ports and more time (the delay of listening and accepting).
So is it a good idea to do it?
I need an explanation of whether it's possible, how to do it, and whether there is a protocol better suited to this than TCP.
Thanks in advance.
It should be noted that you won't ever achieve 100% LAN speed usage - I hope you're not expecting that - there are too many factors at play.
In response to your comment as well, you can't reach the same level that the OS uses to transfer files, because you're a lot further away from the bare metal than windows is. I believe file copying in Windows is only a layer or two above the drivers themselves (possibly even within the filesystem driver) - in a WCF service you're a lot further away!
The simplest thing for you to do will be to package multiple files into archives and transmit them that way, then at the receiving end you unpack the complete package into the target folder. Sure, some of those files might already be compressed and so won't benefit - but in general you should see a big improvement. For rock-solid compression in which you can preserve directory structure, I'd consider using SharpZipLib
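As a sketch of that approach with SharpZipLib's FastZip helper (the paths here are illustrative):

    using ICSharpCode.SharpZipLib.Zip;

    var zip = new FastZip();
    // Pack the whole game folder, recursively, into one archive...
    zip.CreateZip(@"C:\temp\game.zip", @"C:\games\MyGame", true, null);
    // ...transmit game.zip as one large file, then on the receiving end:
    zip.ExtractZip(@"C:\temp\game.zip", @"D:\games\MyGame", null);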
A system that uses compression intelligently (probably medium-level: low CPU usage, but effective on 'compressible' files) might match or possibly outperform OS copying. Windows doesn't use this method because it's hopeless for fault tolerance. In the OS, a transfer halted halfway through a file will still leave any successfully transferred files in place. If the transfer itself is compressed and interrupted, everything is lost and has to be started again.
Beyond that, you can consider the following:
Get it working with compression by default before trying any enhancements. In some cases (depending on the size and number of files) you might be able to simply compress the whole folder and transmit it in one go. Beyond a certain size, however, this might take too long, so you'll want to create a series of smaller zips.
Write the compressed file to a temporary location on disk as it's being received; don't buffer the whole thing in memory. Delete the file once you've unpacked it into the target folder.
Consider adding the ability to mark certain file types as sent 'naked', i.e. uncompressed. That way you can exclude .zip, .avi etc. files from the compression process. That said, a folder with a million 1 KB zip files will clearly benefit from being packed into one single archive - so perhaps give yourself a minimum size below which even those files still get packed into a compressed folder (or perhaps a file-count to size-on-disk ratio for the folder itself, including sub-folders).
Beyond this advice you will need to play around to get the best results.
Perhaps an easy solution would be gathering all the files together into one big stream (like zipping them, but just appending, to keep it fast) and sending that one stream. This would give you more speed, but it will use up some CPU on both devices, and you need a good way to separate the files in the stream again (sketched below).
Using more ports would, from what I know, only be a disadvantage, since the different streams would collide with each other and the speed would go down.
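A rough sketch of that packing idea, using a simple name/length/bytes record per file (the framing is an assumption, not from the answer; networkStream stands for whatever connected stream you have, and the receiver would read records back with BinaryReader.ReadString and ReadInt64):

    using System.IO;

    // Pack files back-to-back onto one stream with a tiny header each, no compression.
    using (var writer = new BinaryWriter(networkStream))
    {
        foreach (var path in Directory.GetFiles(folder))
        {
            var info = new FileInfo(path);
            writer.Write(info.Name);      // length-prefixed file name
            writer.Write(info.Length);    // 8-byte payload size
            using (var fs = File.OpenRead(path))
                fs.CopyTo(networkStream); // stream the bytes without a big buffer
        }
    }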
When the Xbox 360 console formats a 1 GB USB device, it adds 978 MB of data to it in just 20 seconds. I can see the files on the USB device and they are that size.
When I copy a file of the same size in Windows, it takes 6 minutes.
Maybe it is because Windows reads/writes, but the 360 just writes?
Is there a way to create large files like that on a USB device with that kind of performance? The files can be blank, of course. I need this write performance for my application.
Most of the command-line tools I have tried have not shown any noticeable performance gains.
It would appear that the 360 is allocating space for the file and writing some data to it, but is otherwise leaving the rest of the file filled with whatever data was there originally (so-called "garbage data"). When you copy a file of the same size to the drive, you are writing all 978 MB of it, which is a different scenario and is why it takes so much longer.
Most likely the 360 is not sending 978 MB of data to the USB stick, but is instead creating an empty file of size 978 MB - yours takes longer because rather than simply sending a few KB to alter the file system information, you are actually sending 978 MB of data to the device.
You can do something similar (create an empty file of a fixed size) on Windows with fsutil or Sysinternals' contig tool: see "Quickly create large file on a windows system?" - try this, and you'll see that it can take much less than 20 seconds (I would guess that the 360 is writing some data as well as reserving space). Note that one of the answers there shows how to use the Windows API to do the same thing, as well as a Python script.
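Both routes, sketched (the size and path are illustrative; whether the reservation is near-instant depends on the file system, since some will zero-fill the new clusters):

    fsutil file createnew E:\cache.bin 1025507328

or from C#:

    using System.IO;

    // Reserve ~978 MB without writing the data itself.
    using (var fs = new FileStream(@"E:\cache.bin", FileMode.Create))
        fs.SetLength(978L * 1024 * 1024);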
Could it be that the 360 is just doing some direct filesystem header manipulation? If a blank file is fine for you, maybe you could try that.
It is all dependent on the throughput of the USB drive. You will need a high-end USB drive, such as the following: this list
How do I write large content to disk dynamically using C#? Any advice or reference is appreciated.
I am trying to create a file (custom format and extension) and write to it. The user will upload a file, and its contents are converted to a byte stream and written to the file (filename.hd). The indexing of the uploaded files is done in another file (filename.hi).
This works fine for me until the "filename.hd" file reaches 2 GB in size; once it exceeds 2 GB, it won't let me add more content. This is my problem.
After googling, I found that FAT32-based Windows systems (most versions) only support files up to about 2 GB. Is there any solution for handling this situation? Please let me know.
Thanks in advance
sree
Use another filesystem (e.g. NTFS)?
Use StreamWriter for writing to disk. StringBuilder is recommended for building the string, since appending two plain 'string' instances really creates a new string each time, which hurts performance.
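For example (a minimal sketch of that combination; the file name echoes the question's index file):

    using System.IO;
    using System.Text;

    var sb = new StringBuilder();
    for (int i = 0; i < 100000; i++)
        sb.Append("index entry ").Append(i).AppendLine(); // build in memory once

    using (var writer = new StreamWriter(@"C:\data\filename.hi"))
        writer.Write(sb.ToString()); // one write instead of thousands of string concats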
Okay, you will have some restrictions that are not code-related:
File system - FAT and FAT32 will restrict you (you can detect the format at runtime; see the sketch below).
Whether the system is 16-, 32- or 64-bit will place restrictions on you.
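A sketch of checking for the file system restriction up front with DriveInfo (the drive letter is a placeholder):

    using System;
    using System.IO;

    var drive = new DriveInfo("C");
    if (drive.DriveFormat == "FAT32")
        Console.WriteLine("FAT32 detected: individual files are capped at 4 GB.");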