I am trying to create a simple virus remover. The algorithm I developed is meant to:
inspect the original file and the infected file
separate the virus from the infected file
use the same algorithm to repair other files infected with the virus
I know this is possible, since this is the same way patches are created, but I am a bit lost on how to go about it.
Any help would be appreciated.
You'll need more intelligence than simply doing some pattern matching and removing the isolated virus code.
The viruses you are aiming at are file infectors, which are rarely seen these days.
Most of the time their replication process is as follows:
They copy themselves to the beginning or the end of the PE file
Locate the entry point of the PE file
Put a jump instruction at this location pointing to their code
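To make the entry-point step concrete, here is a minimal C# sketch (not from the original answer; the file name is a placeholder) that reads the AddressOfEntryPoint field out of a PE header, which is the value a file infector typically redirects toward its own code:

using System;
using System.IO;

class PeEntryPoint
{
    static void Main(string[] args)
    {
        // Hypothetical input path; pass any Windows PE (.exe/.dll) file.
        string path = args.Length > 0 ? args[0] : "suspect.exe";

        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            // e_lfanew (offset of the "PE\0\0" signature) lives at offset 0x3C of the DOS header.
            br.BaseStream.Seek(0x3C, SeekOrigin.Begin);
            uint peOffset = br.ReadUInt32();

            // Skip the 4-byte PE signature and the 20-byte COFF file header;
            // AddressOfEntryPoint sits at offset 16 of the optional header.
            br.BaseStream.Seek(peOffset + 4 + 20 + 16, SeekOrigin.Begin);
            uint entryPointRva = br.ReadUInt32();

            Console.WriteLine($"AddressOfEntryPoint (RVA): 0x{entryPointRva:X8}");
        }
    }
}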
Disinfecting a file is the most difficult part for any anti-virus. Success relies on the quality of the virus code: if it's buggy, the host file will just be unrecoverable.
In any case, you are entering a world of machine instructions where disassemblers (IDA, PE Explorer, ...) and debuggers will be your dearest friends.
Do a diff of the two files: the basic idea is to compare the original and infected files byte by byte, saving the discrepancies to some data structure. Then, in the future, you could look for the "virus" (hypothetically, the collection of those differences) in other files and remove it.
The only problem is that there will probably be discrepancies between the two files that have nothing to do with the "virus", e.g. the infected file was legitimately modified in some way after the original copy was made.
EDIT:
Checking other files for the virus would not be too hard, but I am running under the assumption that you are dealing with some plain-text form of file; for binary proprietary files, I do not think you would be able to remove the "virus".
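A rough C# sketch of that diff idea, assuming (a big simplification) that the virus mostly appends itself to the end of the file; every file name here is hypothetical:

using System;
using System.IO;
using System.Linq;

class VirusDiff
{
    static void Main()
    {
        // Hypothetical file names.
        byte[] clean = File.ReadAllBytes("original.exe");
        byte[] infected = File.ReadAllBytes("infected.exe");

        // Compare byte by byte and record every position that differs,
        // plus whatever the infected file has appended at the end.
        var differences = Enumerable.Range(0, Math.Min(clean.Length, infected.Length))
                                    .Where(i => clean[i] != infected[i])
                                    .Select(i => (Offset: i, Value: infected[i]))
                                    .ToList();

        byte[] appendedTail = infected.Length > clean.Length
            ? infected.Skip(clean.Length).ToArray()
            : Array.Empty<byte>();

        Console.WriteLine($"{differences.Count} modified bytes, {appendedTail.Length} appended bytes.");

        // A naive "signature" for scanning other files could be the appended tail
        // (or a distinctive slice of it). Caveat from above: some of these
        // differences may be legitimate edits, not virus code.
        if (appendedTail.Length >= 16)
        {
            byte[] signature = appendedTail.Take(64).ToArray();
            File.WriteAllBytes("virus.signature", signature);
        }
    }
}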
In Unity, I am trying to retrieve images stored in a folder other than the Resources folder.
I don't know what to do, because the folder is not the Resources folder, so I can't load the images through the resource manager.
Of course you can't, because Unity strips out files outside Resources during the build process. Even if you can access the file in the editor with a relative file path, the file will be gone after the build, which will result in a FileNotFoundException or something similar. Briefly, Resources is a special folder name recognized by Unity, telling it which files should be included in the build.
I think there are three choices to manage resources:
Using Resources folder
This is the simple, easy, old way. I think this method is pretty outdated, but it's still a valid option and good for Unity newbies.
Using AssetBundle
Unity will process your resource files in its own way, which optimizes storing and loading performance. If you have not tried this, you may want to read the tutorials about AssetBundles.
Using StreamingAssets folder
StreamingAssets is also a special folder name for Unity. Any file in this folder will be kept even after the build, which means you can use the normal System.IO file APIs. For example, Path.Combine(Application.streamingAssetsPath, "myimage.png") gives you the path to your image. This makes your game highly moddable, not only for you but also for other players, which makes the game very suitable for mods.
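As a minimal sketch of options 1 and 3 (file names are placeholders; note that on Android, StreamingAssets is packed into the APK and generally must be read with UnityWebRequest rather than File.ReadAllBytes):

using System.IO;
using UnityEngine;

public class ImageLoader : MonoBehaviour
{
    void Start()
    {
        // Option 1) Resources folder: Assets/Resources/myimage.png, loaded without the extension.
        Texture2D fromResources = Resources.Load<Texture2D>("myimage");

        // Option 3) StreamingAssets folder: Assets/StreamingAssets/myimage.png,
        //           read as raw bytes and decoded into a texture.
        string path = Path.Combine(Application.streamingAssetsPath, "myimage.png");
        byte[] bytes = File.ReadAllBytes(path);
        Texture2D fromStreamingAssets = new Texture2D(2, 2);
        fromStreamingAssets.LoadImage(bytes); // resizes the texture to the image's dimensions

        Debug.Log($"Loaded: {fromResources != null}, {fromStreamingAssets != null}");
    }
}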
By the way, there is another option called Addressables, but since it's a pretty new feature I don't know much about it :(
Would it be possible to create an empty mp3 file, and then, when an mp3 player tries to play that file, have the mp3 content downloaded from a remote server?
Basically, I would want any ordinary mp3 player to be able to stream from a remote server as if it were playing a local file.
If the empty-file strategy is not possible, are there other ways to let an ordinary mp3 player play a remote mp3 file?
The normal mp3 codec is not really designed to deal with that. It will decode at roughly the speed of the disk, and once it hits the end of the data it will crash (as the rest of the file is invalid). The idea that data is not fully there on the disk is not something you ever expect with filesystems, nor do they expect the file size to change while reading it.
That being said, it might be possible if you play the mp3 directly from a web server. It is fully expected that it will take time to get all the data, and in this case it should actually be the read request to the OS that blocks. But it is equally likely that the OS will fully download the file (probably into a temp directory) before it even allows the read of the first byte to proceed.
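If you want to experiment with the web-server route, here is a minimal C# sketch (port and file name are made up) that serves a local mp3 over plain HTTP so an ordinary player can be pointed at the URL; it does not handle range requests or seeking:

using System;
using System.IO;
using System.Net;

class Mp3Server
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/"); // hypothetical port
        listener.Start();
        Console.WriteLine("Point your player at http://localhost:8080/song.mp3");

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            ctx.Response.ContentType = "audio/mpeg";

            // Stream the file in chunks rather than loading it all into memory.
            using (FileStream fs = File.OpenRead("song.mp3")) // hypothetical file
            {
                ctx.Response.ContentLength64 = fs.Length;
                fs.CopyTo(ctx.Response.OutputStream);
            }
            ctx.Response.Close();
        }
    }
}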
For proper streaming you may need a specialized format, e.g. one where multiple qualities of the same data (higher-resolution versions of the same image or video) are placed after one another. For images this technique is usually called interlacing; for audio and video it is closer to progressive or adaptive streaming.
My system will save ~20-40 million image files
Each file is 150-300KB
My application will run on Windows Server 2012 R2 and the files will be saved on storage (I don't know which kind yet)
My application is written in C#
My requirements are:
- The system will constantly delete old files and save new files (around 100K files per day)
- The most recent images will be automatically displayed to users on web and wpf applications
- I need fast access to recent files (last week) for report purposes
What is the best practice for storing / organizing this amount of files?
Broad question much? If you're asking about how to organize them for efficient access, that's a bit harder to answer without knowing why you're storing that many files.
Let me explain:
Let's say you're storing a ton of log files. Odds are your users are going to be most interested in the logs from the last week or so, so storing your data on disk in a way that lets you easily access the files by day (e.g. yyyy-mm-dd.log) will speed up access to a specific day's log.
Now instead think of it like a phone book where you're looking up people's names. Storing entries by the time you inserted them into the phone book really isn't going to help you get to the result you want quickly; better to come up with a better sorting scheme.
Essentially, look at how your data will be accessed and try to sort it in a logical manner so that you can run a binary search (or better) on it.
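As an illustration of that date-based layout (root path, file extension, and retention period are made-up values), a C# sketch that writes each image into a yyyy\MM\dd subfolder so recent files are easy to find and whole day-folders can be deleted when they age out:

using System;
using System.IO;

class ImageStore
{
    const string Root = @"D:\ImageStore"; // hypothetical storage root

    static void Main()
    {
        Save(new byte[] { 1, 2, 3 }, Guid.NewGuid().ToString("N"));
        PruneOlderThan(30);
    }

    // Save an image under Root\yyyy\MM\dd\<id>.jpg
    public static string Save(byte[] imageBytes, string id)
    {
        DateTime now = DateTime.UtcNow;
        string dir = Path.Combine(Root, now.ToString("yyyy"), now.ToString("MM"), now.ToString("dd"));
        Directory.CreateDirectory(dir);
        string path = Path.Combine(dir, id + ".jpg");
        File.WriteAllBytes(path, imageBytes);
        return path;
    }

    // Delete whole day-folders older than the retention window.
    // Assumes only yyyy\MM\dd folders exist under Root.
    public static void PruneOlderThan(int retentionDays)
    {
        DateTime cutoff = DateTime.UtcNow.Date.AddDays(-retentionDays);
        foreach (string yearDir in Directory.GetDirectories(Root))
        foreach (string monthDir in Directory.GetDirectories(yearDir))
        foreach (string dayDir in Directory.GetDirectories(monthDir))
        {
            int y = int.Parse(Path.GetFileName(yearDir));
            int m = int.Parse(Path.GetFileName(monthDir));
            int d = int.Parse(Path.GetFileName(dayDir));
            if (new DateTime(y, m, d) < cutoff)
                Directory.Delete(dayDir, recursive: true);
        }
    }
}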
I'd highly recommend rewording your question so it is clearer though.
As part of the discovery process for an upcoming project, I am trying to find a way of taking a representative sample of the PPT files on our network. So far, I have collected and organized all of the PPT files that we have; however, I've realized that there is an overwhelming volume of documents, so I need to find a way to reduce it. To this end, I was thinking it would be helpful to delete all "duplicate" files.
Our company does not have any sort of version control system for files on our network. As such, users often create copies of files in order to make small changes. This has led to a high volume of "duplicate" files with no real naming convention, etc. Ideally, I'd be able to make a best guess as to which files are "duplicates" and keep the most recent version. Since I just need a representative sample, I do not need to be 100% accurate in the save/delete decision, and it's also OK if I lose a chunk of the files in the process (there are currently 135K files, and I expect to end up with 3-5K). I am not sure how to go about this, as tools like http://www.easyduplicatefinder.com/ seem to look for truly identical documents rather than allowing for more nuanced differences.
Here are a couple of additional details:
File names do not follow any standard convention
I think it's fair to assume that many of the PPT properties would remain unchanged across versions
Versions of a file are always located in the same folder; however, other PPT files may also exist in that folder
I'm open to addressing this problem in any of the following languages/technologies: C#, VB, Ruby, Python, IronPython, PowerShell
I would approach it like this:
extract all visible text strings from each .ppt file
dump the strings into text files, one per .ppt
run diff across all pairs of text files (in the same directory?) to get min edit distance
run the resulting distance matrix through a clustering algorithm
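A rough C# sketch of steps 3 and 4, assuming the text has already been dumped into one .txt file per .ppt (folder name and threshold are made up):

using System;
using System.IO;
using System.Linq;

class PptSimilarity
{
    static void Main()
    {
        string dir = @"C:\ppt-text-dumps"; // hypothetical folder of extracted text, one .txt per .ppt
        string[] files = Directory.GetFiles(dir, "*.txt");
        string[] texts = files.Select(File.ReadAllText).ToArray();

        // Step 3: pairwise distance matrix. Feed this into any clustering
        // algorithm for step 4; here we just flag likely near-duplicates.
        const double threshold = 0.15; // made-up cutoff: <= 15% different
        double[,] dist = new double[files.Length, files.Length];
        for (int i = 0; i < files.Length; i++)
            for (int j = i + 1; j < files.Length; j++)
            {
                dist[i, j] = dist[j, i] = NormalizedEditDistance(texts[i], texts[j]);
                if (dist[i, j] <= threshold)
                    Console.WriteLine($"Probable versions: {Path.GetFileName(files[i])} ~ {Path.GetFileName(files[j])}");
            }
    }

    // Plain dynamic-programming Levenshtein distance, normalized by the longer length.
    // Fine for modest text dumps; for very long texts use a cheaper similarity measure.
    static double NormalizedEditDistance(string a, string b)
    {
        int[,] dp = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) dp[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) dp[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                dp[i, j] = Math.Min(Math.Min(dp[i - 1, j] + 1, dp[i, j - 1] + 1),
                                    dp[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        int longer = Math.Max(a.Length, b.Length);
        return longer == 0 ? 0 : (double)dp[a.Length, b.Length] / longer;
    }
}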
If you are copying many folders with files inside, it is often better to just create a ZIP/RAR archive of the folders and files, copy it to the network path, and unzip it there. This usually works much faster than copy-paste.
Is there a way to do this programmatically and embed it into Windows, so that it can detect which way is faster (the normal way or "compress on the fly") and use that one to improve speed?
"compression on the fly" is a waste unless there is something on the other end that can perform the decompress OR if the compressed state is acceptable. That said:
Yes, you can write an app that zips/rars files.
Yes, you can have that app copy the zip/rar to a network directory.
Yes, you can have an app on the other end wait for the file and unzip it locally...
Can you have it detect "which way is faster"? Although possible, it is unlikely to be of benefit for anything other than large files... at which point you should always zip/rar and transfer... which would make the entire exercise rather pointless. Of course, you should probably evaluate the data that is likely to be transferred using your app to see if it is even a candidate for compression. Video, for example, might not be...
More to the point here, each end would have to have an application that is aware of each other (or at least the protocols involved). One app (we'll call it the client) would zip and post the file to another app (we'll call that one the server). When the server receives the file it would unzip it and store it on the file system.
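A minimal sketch of that flow with the built-in System.IO.Compression classes (all paths are placeholders); as noted above, the extract step really belongs on the receiving machine, otherwise the decompressed bytes cross the network anyway:

using System.IO;
using System.IO.Compression; // on .NET Framework, also reference System.IO.Compression.FileSystem

class ZipTransfer
{
    static void Main()
    {
        string sourceDir  = @"C:\data\to-send";              // hypothetical local folder
        string zipPath    = Path.Combine(Path.GetTempPath(), "to-send.zip");
        string networkZip = @"\\server\share\to-send.zip";   // hypothetical network path
        string destDir    = @"\\server\share\to-send";

        // "Client" side: compress once, then move a single large file across the network.
        if (File.Exists(zipPath)) File.Delete(zipPath);
        ZipFile.CreateFromDirectory(sourceDir, zipPath, CompressionLevel.Fastest, includeBaseDirectory: false);
        File.Copy(zipPath, networkZip, overwrite: true);

        // "Server" side: unzip next to the copied archive (ideally run on the receiving machine).
        ZipFile.ExtractToDirectory(networkZip, destDir);

        File.Delete(zipPath);
    }
}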
Update:
I thought of another situation for zipping: transferring lots of little files at one time. Normal network file-copy routines go much faster for a single large file than for lots of little files. So, if users are selecting a few hundred files to go at once, you might be better off always zipping. Which, incidentally, doesn't change the requirement of having something on the other side able to decompress it.
Have you tried using robocopy? It's built into Windows, robust, multi-threaded, and features a lot of options, including mirroring and retries in case of failure. I use it for all copies to network locations. Give it a try.
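For example (paths are placeholders): robocopy C:\data\to-send \\server\share\to-send /E /MT:16 /R:3 /W:5 copies the whole tree including empty subfolders (/E), with 16 threads (/MT:16), retrying failed files 3 times (/R:3) with a 5-second wait between retries (/W:5).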