I have a C# application where I need to download and run a JAR file without it ever being saved to disk. Is this possible in C#? I can already download files to disk via WebClient just fine (which is what I'm doing as of posting) and launch the JAR via a batch script that is saved and then deleted, but I want to take it a step further by not having anything touch the drive.
Thanks.
You could write a special Java class loader that loads classes via interprocess communication (IPC) from the .NET process instead of from a file. In addition, you'll need a small launcher JAR that first installs the class loader and then executes the JAR retrieved via IPC. And you'll need to implement the server part of the IPC communication in your .NET application.
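To sketch just the .NET half of that: the pipe name, URL, and length-prefix framing below are all made up for illustration. The small Java launcher would open \\.\pipe\jarfeed like a file, read the bytes, and hand them to its custom class loader.

using System;
using System.IO.Pipes;
using System.Net;

class JarPipeServer
{
    static void Main()
    {
        // Download the JAR into memory only; the URL is a placeholder.
        byte[] jar = new WebClient().DownloadData("https://example.com/app.jar");

        using (var pipe = new NamedPipeServerStream("jarfeed", PipeDirection.Out))
        {
            pipe.WaitForConnection();   // the Java launcher connects here

            // Simple framing: a 4-byte length prefix, then the raw JAR bytes.
            byte[] length = BitConverter.GetBytes(jar.Length);
            pipe.Write(length, 0, length.Length);
            pipe.Write(jar, 0, jar.Length);
        }
    }
}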
But is it worth it? What would be the benefit of such a complex piece of software?
A JAR file needs to be executed by javaw.exe, the JVM. Another process doesn't have the power to reach into your virtual memory space and read the file data, not unless both processes cooperate and use a shared memory section. You get no such cooperation from javaw.exe; it requires a file.
This is a non-issue on Windows, since javaw.exe will actually read the file from memory. When you write the file in your program, you actually write to the file system cache, which buffers file data in RAM and delay-writes it to disk. As long as the file isn't too big (a gigabyte or more), you don't wait too long to start Java (minutes), and the machine has enough RAM (a gigabyte or two), javaw.exe reads the file from the file system cache. RAM, not disk. It's the exact same idea as the RAM disks of old.
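To illustrate, a minimal sketch of that straightforward approach (the URL is a placeholder):

using System;
using System.Diagnostics;
using System.IO;
using System.Net;

class Launcher
{
    static void Main()
    {
        // Download to a temp file; the write initially lands in the file system cache.
        string jarPath = Path.Combine(Path.GetTempPath(), "app.jar");
        new WebClient().DownloadFile("https://example.com/app.jar", jarPath);

        // Launch right away so javaw.exe reads the data while it is still cached in RAM.
        using (var java = Process.Start("javaw.exe", "-jar \"" + jarPath + "\""))
        {
            java.WaitForExit();
        }

        File.Delete(jarPath);   // remove the on-disk copy afterwards
    }
}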
Related
I would like to take a serialized file and save it to my resources folder in the project.
My reason for doing this (maybe there's a better way) is that I have a published exe (a single executable file), and when the program creates a serialized file, I don't want it saved to the desktop. I need to somehow save it into my exe without going outside of it.
Any advice on how I could do this?
It's very ugly... but you could use an "alternate data stream" on an NTFS file system.
http://ntfs.com/ntfs-multiple.htm
https://learn.microsoft.com/en-us/sysinternals/downloads/streams
How to read and modify NTFS Alternate Data Streams using .NET
https://blogs.msmvps.com/bsonnino/2016/11/24/alternate-data-streams-in-c/
https://oddvar.moe/2018/04/11/putting-data-in-alternate-data-streams-and-how-to-execute-it-part-2/
https://blog.foldersecurityviewer.com/ntfs-alternate-data-streams-the-good-and-the-bad/
https://www.irongeek.com/i.php?page=security/altds
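If you do try it, here is a minimal sketch of writing and reading a stream. It assumes .NET Core / .NET 5+, where FileStream no longer rejects the ':' path syntax (on .NET Framework you would have to P/Invoke CreateFile), and it naturally requires an NTFS volume:

using System;
using System.IO;
using System.Text;

class AdsDemo
{
    static void Main()
    {
        File.WriteAllText("host.txt", "visible content");

        // Write a payload into the alternate stream host.txt:payload.
        byte[] data = Encoding.UTF8.GetBytes("hidden content");
        using (var ads = new FileStream("host.txt:payload", FileMode.Create, FileAccess.Write))
        {
            ads.Write(data, 0, data.Length);
        }

        // Read it back ("dir /r" in cmd also lists the stream).
        using (var ads = new FileStream("host.txt:payload", FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(ads))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}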
You'll probably have security scanners stopping you from doing it.
In addition, if you copy the file from an NTFS volume to, say, FAT, the alternate data streams are lost.
Also, some backup software may not back up ADS properly.
https://wiki.sep.de/wiki/index.php/Support_for_NTFS_alternate_data_streams_(ADS)_for_Windows
https://www.2brightsparks.com/resources/articles/ntfs-alternate-data-stream-ads.html
https://community.osr.com/discussion/89308/alternate-data-streams-and-backups
https://social.technet.microsoft.com/Forums/Azure/en-US/007d5442-1cd8-4293-b717-b8fa72606189/ntfs-data-streams-broken-by-design-on-file-copy?forum=winserverfiles
I'm looking for a way to join separate audio and video streams into a single container.
Specifically I have VP8 video (webm container) and 16-bit PCM audio (wav container), which I'd like to combine into a Matroska container.
So far I can achieve this by saving the streams to files and invoking ffmpeg.exe via the Process API, which produces the result I need, but I'd prefer a solution that doesn't rely on saving intermediate files to disk or require ffmpeg.exe to be present on the server. Any help much appreciated!
You would need a managed Matroska/WebM library, or at least a managed wrapper to some native library if you want to avoid the additional process. I'm not aware of any that exist/are up-to-date. I started writing one a few years ago but never completed it.
On launching the process, it's not actually necessary to "save files to disk": you can use a named pipe, which looks like a file on disk but is in fact just an interface to an in-memory buffer. That lets you share the data directly with ffmpeg/mkvmerge by passing them the pipe's name in place of the regular filename. Can't help with not requiring the binary on the server, though, other than just packaging it with your solution.
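A rough sketch of that idea for the video stream (the pipe name and ffmpeg arguments are illustrative; in practice you would create one pipe per input and feed the audio the same way):

using System.Diagnostics;
using System.IO.Pipes;

class PipeMux
{
    static void Mux(byte[] videoWebm)
    {
        using (var pipe = new NamedPipeServerStream("video_in", PipeDirection.Out))
        {
            // ffmpeg opens \\.\pipe\video_in exactly as if it were a file.
            var ffmpeg = Process.Start("ffmpeg.exe",
                @"-i \\.\pipe\video_in -i audio.wav -c copy out.mkv");

            pipe.WaitForConnection();
            pipe.Write(videoWebm, 0, videoWebm.Length);
            pipe.Close();            // EOF tells ffmpeg the stream is done

            ffmpeg.WaitForExit();
        }
    }
}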
I have a web application in C# that creates a file stream and then sequentially reads and processes rather large text files from the server. This process is currently very slow.
I've heard about memory-mapped files in C#. My question is: would that process be faster if the file were entirely mapped to memory? Also, what are the other advantages/disadvantages of doing that?
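For reference, mapping the file and reading through the mapping looks roughly like this (the file name is a placeholder), which makes it easy to benchmark against the current FileStream approach:

using System.IO;
using System.IO.MemoryMappedFiles;

class MappedRead
{
    static void Main()
    {
        using (var mmf = MemoryMappedFile.CreateFromFile("big.txt", FileMode.Open))
        using (var view = mmf.CreateViewStream())    // a Stream over the mapped bytes
        using (var reader = new StreamReader(view))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // process the line
            }
        }
    }
}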
I have a general question concerning C# & Windows API:
My task is to load a file from a document management system (DMS) and create a byte array from it. From the developer of the DMS I got a dll which provides a method like this:
loadFile(int DocId, string PathToSaveFile);
Unfortunately the given dll does not provide a method that delivers the requested file as a byte array or any kind of stream. Now my question: is it possible in C# to create some kind of virtual path which does not actually exist on secondary storage, such that all bits and bytes written to this path are instead forwarded to me as a stream? The goal is to increase performance, since I wouldn't have to write the data to a hard drive.
I have already searched a lot, but I don't know the keywords to look for. Perhaps someone can give me a hint, or just tell me that it is not possible at all.
It will depend somewhat on how the library opens and reads the file. If it uses CreateFile, then there is the potential that you could provide access via a named pipe. The path to a named pipe can be specified as \\.\pipe\PipeNameHere. In C# you can use NamedPipeServerStream.
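A hedged sketch of that approach; Dms.loadFile stands in for the vendor method, and whether it works depends entirely on how the dll opens its output path:

using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

static class Dms
{
    // Stand-in for the vendor-provided dll method from the question.
    public static void loadFile(int docId, string pathToSaveFile) { }
}

class DmsCapture
{
    static byte[] LoadDocument(int docId)
    {
        using (var pipe = new NamedPipeServerStream("dmsout", PipeDirection.In))
        {
            // Run the dll call concurrently; it blocks while opening/writing its "file".
            Task writer = Task.Run(() => Dms.loadFile(docId, @"\\.\pipe\dmsout"));

            pipe.WaitForConnection();
            using (var buffer = new MemoryStream())
            {
                pipe.CopyTo(buffer);   // capture everything the dll writes
                writer.Wait();
                return buffer.ToArray();
            }
        }
    }
}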
However, I think the odds of the client application being compatible with this are relatively slim and would suggest creating a RAM drive that will be easier to implement and is more likely to work. A RAM drive will appear as a normal disk drive. You can save and load files to it, but it is all done in memory.
Almost all file transfer software [NetSupport, Radmin, PcAnywhere, ...], and also the various code I've used in my application, slows down when you send a lot of small files (< 1 KB each), like a game folder that contains a lot of files.
For example, on a LAN (Ethernet, Cat 5 cables), if I send a single file, say a video, the transfer rate is between 2 MB/s and 9 MB/s,
but when I send a game folder that has a lot of files, the transfer rate is about 300-800 KB/s.
I'm guessing this is because of the way each file is sent:
Send file info [file_path, file_size].
Send file bytes [loop till end of the file].
End transfer [ensure it was received completely].
But when you use the regular Windows copy-paste on a shared folder over the network, the transfer rate for a folder is always as fast as for a single file.
So I'm trying to develop a file transfer application using a [WCF service, C# 4.0] that would use the maximum speed available on the LAN, and I'm thinking about this way:
Get all files from the folder, then for each file:

if (fileSize < 1 MB)
{
    Create an additional thread to send;
    SendFile(filePath);
}
else
{
    Wait for the large file to be sent.   // fileSize > 1 MB
}

void SendFile(string path)   // a regular single-file send
{
    SendFileInfo;
    Open a socket and wait for the server application to connect;
    SendFileBytes;
    Dispose;
}
But I'm confused about using more than one socket for a file transfer, because that will use more ports and more time (the delay of listening and accepting).
So is it a good idea to do it?
I need an explanation of whether it's possible, how to do it, and whether there is a protocol better suited to this than TCP.
Thanks in advance.
It should be noted you won't ever achieve 100% LAN speed usage - I hope you're not expecting that - there are too many factors there.
In response to your comment as well, you can't reach the same level that the OS uses to transfer files, because you're a lot further away from the bare metal than windows is. I believe file copying in Windows is only a layer or two above the drivers themselves (possibly even within the filesystem driver) - in a WCF service you're a lot further away!
The simplest thing for you to do will be to package multiple files into archives and transmit them that way; then at the receiving end you unpack the complete package into the target folder. Sure, some of those files might already be compressed and so won't benefit, but in general you should see a big improvement. For rock-solid compression that preserves directory structure, I'd consider using SharpZipLib.
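For example, a minimal sketch using SharpZipLib's FastZip (the paths are placeholders):

using ICSharpCode.SharpZipLib.Zip;

class Packer
{
    static void Main()
    {
        var fastZip = new FastZip();

        // Recursively zip the whole game folder into a single archive,
        // then transfer just archive.zip over the wire.
        fastZip.CreateZip("archive.zip", @"C:\Games\MyGame", true, null);

        // On the receiving side, unpack into the target folder:
        fastZip.ExtractZip("archive.zip", @"C:\Games\MyGame", null);
    }
}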
A system that uses compression intelligently (probably medium-level, low CPU usage but which will work well on 'compressible' files) might match or possibly outperform OS copying. Windows doesn't use this method because it's hopeless for fault-tolerance. In the OS, a transfer halted half way through a file will still leave any successful files in place. If the transfer itself is compressed and interrupted, everything is lost and has to be started again.
Beyond that, you can consider the following:
Get it working with compression by default before trying any enhancements. In some cases (depending on the size and number of files) you might be able to simply compress the whole folder and transmit it in one go. Beyond a certain size, however, this might take too long, so you'll want to create a series of smaller zips.
Write the compressed file to a temporary location on disk as it's being received; don't buffer the whole thing in memory. Delete the file once you've unpacked it into the target folder.
Consider adding the ability to mark certain file types as able to be sent 'naked', i.e. uncompressed. That way you can exclude .zip, .avi, etc. files from the compression process. That said, a folder with a million 1 KB zip files will clearly benefit from being packed into one single archive - so perhaps give yourself the ability to set a minimum size beyond which such a file will still be packed into a compressed folder (or perhaps a file count to size-on-disk ratio for a folder itself, including sub-folders).
Beyond this advice you will need to play around to get the best results.
Perhaps an easy solution would be gathering all the files together into one big stream (like zipping them, but just appending them, to make it fast) and sending that single stream. This would give more speed, but it will use up some CPU on both devices, and you need a good way to separate the files in the stream again.
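A rough sketch of such framing; the record layout used here (name length, name, file length, file bytes) is invented for illustration:

using System.IO;
using System.Text;

class StreamPacker
{
    // Writes [name length][name bytes][file length][file bytes] per file,
    // so the receiver can split the single stream back into files.
    static void PackFolder(string folder, Stream output)
    {
        var writer = new BinaryWriter(output, Encoding.UTF8, leaveOpen: true);
        foreach (string path in Directory.GetFiles(folder, "*", SearchOption.AllDirectories))
        {
            string relative = path.Substring(folder.Length + 1);
            byte[] name = Encoding.UTF8.GetBytes(relative);
            byte[] data = File.ReadAllBytes(path);

            writer.Write(name.Length);   // 4-byte name length
            writer.Write(name);          // relative path
            writer.Write(data.Length);   // 4-byte file length
            writer.Write(data);          // file contents
        }
        writer.Flush();
    }
}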
But using more ports would, from what I know, only be a disadvantage, since the different streams would collide with each other and the speed would go down.