I have a P2P app for file transfers over TCP. I am trying to send a DLL file and then read its classes via reflection using Assembly.LoadFile("file directory"), and I get the good old BadImageFormatException.
[exception stack trace]
When I run the same file generated on this PC it works smoothly, but after I transfer it over the P2P connection it doesn't work.
I have tested files both bigger and smaller than the file in question, and they transfer properly with no missing bytes.
To write the file I'm using File.WriteAllBytes() with the bytes received from the other peers.
The other files I tested were .txt and .xml, and they worked fine.
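For what it's worth, a frequent cause of corrupted binaries over raw TCP is treating each Receive call as a complete message. A minimal sketch of a length-aware receive loop, assuming the sender transmits fileSize up front (socket, fileSize and path stand in for whatever your transfer code uses):

    // requires System.Net.Sockets and System.IO
    byte[] fileBytes = new byte[fileSize];
    int received = 0;
    while (received < fileSize)
    {
        // Receive may return fewer bytes than requested, so keep reading
        int n = socket.Receive(fileBytes, received, fileSize - received, SocketFlags.None);
        if (n == 0)
            throw new IOException("Connection closed before the file was complete.");
        received += n;
    }
    File.WriteAllBytes(path, fileBytes);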
Related
I have a general question concerning C# & Windows API:
My task is to load a file from a document management system (DMS) and create a byte array from it. From the developer of the DMS I got a DLL which provides a method like this:
loadFile(int DocId, string PathToSaveFile);
Unfortunately the given DLL does not provide a method to deliver the requested file as a byte array or any kind of stream. Now my question: is it possible in C# to create some kind of virtual path which does not actually exist on secondary storage, where all bytes written to that path are instead forwarded to me as a stream? My goal is to improve performance, since I wouldn't have to write data to the hard drive.
I've already searched a lot, but I don't know which keywords to look for. Perhaps someone can give me a hint, or just tell me that it is not possible at all.
It will depend somewhat on how the library opens and reads the file. If it is using CreateFile, then there is the potential that you could provide access via a named pipe. The path to a named pipe can be specified as \\.\pipe\PipeNameHere. In C# you can use NamedPipeServerStream.
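A minimal sketch of that idea, assuming the DLL really does open its target path with CreateFile and writes sequentially (the pipe name is made up, and loadFile is the method from the signature above):

    // requires System.IO, System.IO.Pipes and System.Threading.Tasks
    int docId = 42; // placeholder document id
    using (var pipe = new NamedPipeServerStream("DmsCapture", PipeDirection.In))
    {
        // hand the pipe path to the library instead of a real file path
        var worker = Task.Run(() => loadFile(docId, @"\\.\pipe\DmsCapture"));

        pipe.WaitForConnection();        // returns once the DLL opens the "file"
        using (var buffer = new MemoryStream())
        {
            pipe.CopyTo(buffer);         // everything the DLL writes lands here
            byte[] fileBytes = buffer.ToArray();
        }
        worker.Wait();
    }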
However, I think the odds of the client application being compatible with this are relatively slim, and I would suggest creating a RAM drive instead, which will be easier to implement and is more likely to work. A RAM drive appears as a normal disk drive; you can save and load files to it, but it is all done in memory.
I have a C# application where I need to download and run a JAR file without it being saved to disk. Is this possible in C#? I can download files to disk via WebClient just fine (which is what I'm doing as of posting) and launch the JAR via a batch script which is saved and then deleted, but I want to take it a step further by not having anything touch the drive.
Thanks.
You could write a special Java class loader that loads classes via interprocess communication (IPC) from the .NET process instead of from a file. In addition, you'll need a small launcher JAR that first installs the class loader and then executes the JAR retrieved via IPC. And you'll need to implement the server part of the IPC communication in your .NET application.
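The .NET side of that IPC could be as simple as a named pipe streaming the downloaded JAR bytes to the launcher. A minimal sketch, assuming the launcher JAR connects to the pipe (the name "JarFeed" is made up, and the Java-side class loader is not shown):

    // requires System.IO.Pipes
    static void ServeJar(byte[] jarBytes)
    {
        using (var pipe = new NamedPipeServerStream("JarFeed", PipeDirection.Out))
        {
            pipe.WaitForConnection();    // the launcher opens \\.\pipe\JarFeed
            pipe.Write(jarBytes, 0, jarBytes.Length);
        }
    }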
But is it worth it? What would be the benefit of such a complex piece of software?
A JAR file needs to be executed by javaw.exe, the JVM. Another process doesn't have the power to reach into your virtual memory space and read the file data, not unless both processes co-operate and use a shared memory section. You get no such co-operation from javaw.exe; it requires a file.
This is a non-issue on Windows, since javaw.exe will actually read the file from memory. When you write the file in your program, you actually write to the file system cache, which buffers file data in RAM, delay-writing it to the disk. As long as the file isn't too big (a gigabyte or more), you don't wait too long to start Java (minutes), and the machine has enough RAM (a gigabyte or two), then javaw.exe reads the file from the file system cache. RAM, not disk. Exact same idea as a RAM disk of old.
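In other words, the straightforward version is already effectively in-memory. A minimal sketch (the URL is a placeholder):

    // requires System.Diagnostics, System.IO and System.Net
    string jarPath = Path.Combine(Path.GetTempPath(), "app.jar");
    using (var client = new WebClient())
        client.DownloadFile("http://example.com/app.jar", jarPath);

    // start the JVM right away so the read is served from the file system cache
    var java = Process.Start("javaw.exe", "-jar \"" + jarPath + "\"");
    java.WaitForExit();
    File.Delete(jarPath);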
Almost all file transfer software (NetSupport, Radmin, PcAnywhere...), and also the different code I've used in my application, slows down when you send a lot of small files (< 1 KB), such as a game folder that contains many files.
For example, on a LAN (Ethernet, CAT5 cables), when I send a single file, say a video, the transfer rate is between 2 MB/s and 9 MB/s,
but when I send a folder of a game that has a lot of files, the transfer rate is about 300-800 KB/s.
My guess is that this is because of the way a file is sent:
Send File Info [file_path,file_Size].
Send file bytes [loop till end of the file].
End Transfer [ensure it was received completely].
But when you use the regular Windows copy-paste on a shared folder on the network, the transfer rate for a folder is always as fast as for a single file.
So I'm trying to develop a file transfer application using a WCF service (C# 4.0) that would use the maximum speed available on the LAN, and I'm thinking of this approach:
Get all files from the folder.

if (FileSize < 1 MB)
{
    Create additional thread to send;
    SendFile(FilePath);
}
else
{
    Wait for the large file to be sent. // fileSize > 1 MB
}

void SendFile(string path) // a regular single-file send
{
    SendFileInfo;
    Open socket and wait for the server application to connect;
    SendFileBytes;
    Dispose;
}
But I'm confused about using more than one socket for a file transfer, because that would use more ports and more time (the delay of listening and accepting).
So is it a good idea to do it?
I need an explanation of whether it's possible, how to do it, and whether there's a protocol better suited to this than TCP.
Thanks in advance.
It should be noted you won't ever achieve 100% LAN speed usage - I'm hoping you're not expecting that - there are too many factors involved.
In response to your comment as well: you can't reach the same level the OS uses to transfer files, because you're a lot further away from the bare metal than Windows is. I believe file copying in Windows is only a layer or two above the drivers themselves (possibly even within the filesystem driver); in a WCF service you're a lot further away!
The simplest thing for you to do will be to package multiple files into archives and transmit them that way, then at the receiving end unpack the complete package into the target folder. Sure, some of those files might already be compressed and so won't benefit, but in general you should see a big improvement. For rock-solid compression in which you can preserve directory structure, I'd consider using SharpZipLib.
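A minimal sketch with SharpZipLib's FastZip (all paths are placeholders):

    // requires the SharpZipLib library: ICSharpCode.SharpZipLib.Zip
    var fastZip = new FastZip();

    // sender: pack the folder, preserving its directory structure
    fastZip.CreateZip(@"C:\temp\outgoing.zip", @"C:\games\MyGame", true, null);
    // ... transmit outgoing.zip, then on the receiving end:
    fastZip.ExtractZip(@"C:\temp\incoming.zip", @"C:\target\MyGame", null);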
A system that uses compression intelligently (probably medium-level, low CPU usage, but which will work well on 'compressible' files) might match or possibly outperform OS copying. Windows doesn't use this method because it's hopeless for fault-tolerance. In the OS, a transfer halted halfway through a file will still leave any successful files in place. If the transfer itself is compressed and interrupted, everything is lost and has to be started again.
Beyond that, you can consider the following:
Get it working with compression by default first, before trying any enhancements. In some cases (depending on the size and number of files) you might be able to simply compress the whole folder and transmit it in one go. Beyond a certain size, however, this might take too long, so you'll want to create a series of smaller zips.
Write the compressed file to a temporary location on disk as it's being received; don't buffer the whole thing in memory (see the sketch after this list). Delete the file once you've unpacked it into the target folder.
Consider adding the ability to mark certain file types as being sent 'naked', i.e. uncompressed. That way you can exclude .zip, .avi, etc. from the compression process. That said, a folder with a million 1 KB zip files will clearly benefit from being packed into one single archive, so perhaps give yourself the ability to set a minimum size beyond which such a file will still be packed into a compressed folder (or perhaps a file count/size-on-disk ratio for a folder itself, including sub-folders).
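A minimal sketch of the receive path from the second point above, assuming incoming is the stream delivered by the service:

    // requires System.IO
    string tempZip = Path.GetTempFileName();
    using (var file = File.Create(tempZip))
    {
        var buffer = new byte[81920];
        int read;
        // copy in chunks instead of buffering the whole archive in memory
        while ((read = incoming.Read(buffer, 0, buffer.Length)) > 0)
            file.Write(buffer, 0, read);
    }
    // ... unpack tempZip into the target folder, then:
    File.Delete(tempZip);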
Beyond this advice you will need to play around to get the best results.
Perhaps an easy solution would be gathering all the files together into one big stream (like zipping them, but just appending, to make it fast) and sending that one stream. This would give more speed, but it will use some CPU on both devices, and you'll need a good scheme for separating the files in the stream again.
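A minimal sketch of one possible 'just append' format - [name][size][bytes] repeated - which is a made-up container, not a standard one (top-level file names only, no sub-folder paths):

    // requires System.IO; BinaryWriter.Write(string) length-prefixes the name
    static void PackFolder(string folder, Stream output)
    {
        var writer = new BinaryWriter(output);
        foreach (var path in Directory.GetFiles(folder))
        {
            writer.Write(Path.GetFileName(path));    // file name
            writer.Write(new FileInfo(path).Length); // 8-byte file size
            writer.Flush();
            using (var file = File.OpenRead(path))
                file.CopyTo(output);                 // raw file bytes
        }
        writer.Flush();
    }

The receiver reads a name and a length, then reads exactly that many bytes to recreate each file.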
But using more ports would, from what I know, only be a disadvantage, since there would be more streams colliding and the speed would go down.
I wrote a basic C# wrapper for WinPcap to capture packets from an interface and save them to a dump file. Now I want to extract the images from those pcap files. Is there a C# library for this purpose?
Have a look at Driftnet:
Inspired by EtherPEG (though, not owning an Apple Macintosh, I've never actually seen it in operation), Driftnet is a program which listens to network traffic and picks out images from TCP streams it observes. Fun to run on a host which sees lots of web traffic.
I need to transfer a big file from PC to PPC (Pocket PC) but I can't use RAPI.
I need to:
Convert the (big) .sdf file to a binary format
Transfer to PPC (through a Web service)
In PPC convert from binary to .sdf
The problem is that on the PPC I get an "out of memory" exception with the big file. With a small file it works excellently.
What can I do? Maybe I can send the file in slices?
As Conrad said, you need to send the file in smaller pieces; how small depends on what kind of PPC you have. That could be as small as a few MB. Then write each piece to either a storage card or the main memory.
I personally use FTP to transfer "big" files (ranging in the hundreds of MB) to my PPC, but I'm sure you can use any other solution you desire. You do need to research the limits of your target PPC, though; they often have limited memory, storage and CPU.
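A minimal sketch of the slicing on the PC side, assuming a web-service method like UploadChunk(name, offset, data) on the receiving end (that method is hypothetical, and 'service' stands in for your web-service proxy):

    // requires System and System.IO
    const int ChunkSize = 512 * 1024;                // 512 KB per slice
    using (var file = File.OpenRead(sourcePath))
    {
        var buffer = new byte[ChunkSize];
        long offset = 0;
        int read;
        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            byte[] slice = new byte[read];
            Array.Copy(buffer, slice, read);
            service.UploadChunk("big.sdf", offset, slice); // hypothetical call
            offset += read;
        }
    }

On the PPC side, append each slice to the target file as it arrives instead of holding the whole file in memory.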
There is also a built-in HTTP downloader class in the .NET Compact Framework that you can use.