How to create self-expiring files - C#

I need to have controlled usage over STL files via a desktop application (which I shall develop). It can be in either C++ or C#.
Here STL refers to STereoLithography files, used for 3D printing.
Controlled usage refers to usage specified by the distributor: it can be 1 day, 2 hours, or whatever the distributor deems fit. The files shall self-expire after the user has received them.
Any ideas would be appreciated.

I looked into the STL standard definition and it looks like it might be hard to embed license data inside. A few options come to mind:
a) Create your own format as a superset of STL, including some embedded license data (a sketch of such a container appears at the end of this answer). You would have to restrict usage of "clear" STL files, because a user might extract the data portion of your file and save it as a plain STL file.
b) Create your own format with your own structure, including the license. That makes extracting the data harder than in point a).
c) Make the program download the data from your server - the license testing is then on your side. Make sure no data is saved to the hard drive, because otherwise the user can again extract the data and save the file somewhere else.
d) (Preferred) Do not implement any security measures (a determined cracker will defeat them eventually, because at some point you have to keep unencrypted STL data on disk or in memory, where it can be accessed). Instead, license your files correctly.
Remember, there is no security measure that cannot be broken. It's far more valuable to your customers that you spend time developing new features than implementing new security measures, which will annoy legitimate users and eventually be ignored by the unfair ones anyway.
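To illustrate option a), here is a minimal sketch of what such a container might look like: a hypothetical magic marker and an expiry timestamp as the license header, followed by the raw STL payload. The format, names, and field layout are all invented for this example, and the payload is left unencrypted - which is exactly why a) is weak: anyone can carve the STL bytes back out.

```csharp
using System;
using System.IO;
using System.Text;

static class LicensedStl
{
    const string Magic = "LSTL"; // hypothetical 4-byte marker, not a real standard

    public static void Wrap(string stlPath, string outPath, DateTime expiresUtc)
    {
        byte[] payload = File.ReadAllBytes(stlPath);
        using var w = new BinaryWriter(File.Create(outPath));
        w.Write(Encoding.ASCII.GetBytes(Magic)); // raw bytes, not a length-prefixed string
        w.Write(expiresUtc.Ticks);               // the embedded license data: expiry time
        w.Write(payload.Length);
        w.Write(payload);                        // plain STL bytes - trivially extractable
    }

    public static byte[] Unwrap(string path)
    {
        using var r = new BinaryReader(File.OpenRead(path));
        if (Encoding.ASCII.GetString(r.ReadBytes(4)) != Magic)
            throw new InvalidDataException("Not a licensed STL container.");
        var expiresUtc = new DateTime(r.ReadInt64(), DateTimeKind.Utc);
        if (DateTime.UtcNow > expiresUtc)
            throw new UnauthorizedAccessException("License expired.");
        return r.ReadBytes(r.ReadInt32());
    }
}
```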

Files do not expire on their own (unless we're talking about faulty storage media) and access to them needs to be restricted by software or a combination of software and hardware.
If you plan to make the STL files openly available at any single point (e.g. when the user tries to open them in a viewer or editor), their content cannot be hidden or protected from copying.
And even if you bundle them with a program that would extract them from itself or obtain them from your website when the editor starts and delete them when it exits (automatically), the editor may still be able to save a copy as a different file (it may even save a temporary/backup copy automatically).
One way to protect those files from copying is to make them available only within a program of yours and never outside of it, which may render the files totally useless if your program doesn't let the user determine whether they're good (I'm imagining "1 day, 2 hours, or whatever" implies some sort of trial version). But even then they may still be extracted from it at run time by skillful hackers.
If the OS supports DRM for arbitrary files and in ways of interest to you, you might be able to use the OS DRM functionality to control file copying and lifetime. Unfortunately, I do not have enough practical knowledge of this to point you in the direction of such a solution.
Another option is to distribute the files in the open, but embed into them some kind of watermarks, unique for each user/license and able to survive a certain amount of editing. This won't solve every problem, but if a copy starts circulating online, you will be able to tell who "leaked" it and go after them.
At any rate, all protection can be circumvented, given enough time and skills. If you can't break it, it doesn't mean someone else won't be able to.

Related

File.Delete or File.Encrypt to wipe files?

Is it possible to use either File.Delete or File.Encrypt to shred files? Or do both functions not overwrite the actual content on disk?
And if they do, does this also work with the wear leveling of SSDs and similar techniques used by other storage? Or is there another function I should use instead?
I'm trying to improve an open source project which currently stores credentials in plaintext within a file. For reasons I don't fully understand, Ansible always writes them to that file (I don't want to touch that part of the code for now; there may be some valid reason why it is that way), and I can just delete that file afterwards. So is using File.Delete or File.Encrypt the right approach to purge that information off the disk?
Edit: If it is only possible using a native API and P/Invoke, I'm also fine with that. I'm not limited to .NET, but I am limited to C#.
Edit2: To provide some context: The plaintext credentials are saved by the ansible internals as they are passed as a variable for the modules that get executed on the target windows host. This file is responsible for retrieving the variables again: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1#L287
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/csharp/Ansible.Basic.cs#L373
There's a possibility that File.Encrypt would do more to help shred data than File.Delete (which definitely does nothing in that regard), but it won't be a reliable approach.
There's a lot going on at both the Operating System and Hardware level that's a couple of abstraction layers separated from the .NET code. For example, your file system may randomly decide to move the location where it's storing your file physically on the disk, so overwriting the place where you currently think the file is might not actually remove traces from where the file was stored previously. Even if you succeed in overwriting the right parts of the file, there's often residual signal on the disk itself that could be picked up by someone with the right equipment. Some file systems don't truly overwrite anything: they just add information every time a change happens, so you can always find out what the disk's contents were at any given point in time.
So if you legitimately cannot prevent a file getting saved, any attempt to truly erase it is going to be imperfect. If you're willing to accept imperfection and only want to mitigate the potential for problems somewhat, you can use a strategy like the ones you've found to try to overwrite the file with garbage data several times and hope for the best.
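If you do go that route, a minimal sketch of such a best-effort overwrite might look like this (the class and method names and the pass count are my own choices; as explained above, wear leveling, relocation, and copy-on-write file systems can all keep copies this never touches):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class FileShredder
{
    // Best effort only: overwrite the file's bytes in place with random
    // data a few times, flush to the device, then delete it.
    public static void BestEffortShred(string path, int passes = 3)
    {
        long length = new FileInfo(path).Length;
        byte[] noise = new byte[4096];

        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Write,
                                       FileShare.None, 4096, FileOptions.WriteThrough))
        {
            for (int pass = 0; pass < passes; pass++)
            {
                fs.Position = 0;
                long remaining = length;
                while (remaining > 0)
                {
                    RandomNumberGenerator.Fill(noise);
                    int chunk = (int)Math.Min(noise.Length, remaining);
                    fs.Write(noise, 0, chunk);
                    remaining -= chunk;
                }
                fs.Flush(true); // flushToDisk: ask the OS to push writes to the device
            }
        }
        File.Delete(path);
    }
}
```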
But I wouldn't be too quick to give up on solving the problem at its source. For example, Ansible's docs mention:
A great alternative to the password lookup plugin, if you don’t need to generate random passwords on a per-host basis, would be to use Vault in playbooks. Read the documentation there and consider using it first, it will be more desirable for most applications.

Performance concerns when writing my own file system

I'm writing a file system using Dokan. What I want to achieve is allowing users to access files that are on multiple sources as if they were all in a local folder, i.e. a file can be available locally, at a remote location, or in memory.
Initially I was creating placeholders that describe where the actual file is available (like the Windows 8.1 OneDrive). When the user accesses a file, I read the placeholder first. Knowing the real location of that file, I read the real one and send the data back to the user application.
After about an hour of coding I found this idea seriously wrong. If the real location of the file is on the Internet, this will work. But if the file is available locally, I actually need to ask my hard drive for two files (the placeholder and the real file). Also, if the file is available in memory (users do this to improve performance), I still need to access the hard drive, which makes caching the file in RAM pointless.
So... I guess I have to write my own file table, like the NTFS MFT. The concept of a file table is straightforward, but I'm not sure I can write one that's as efficient as NTFS's. Then I started considering a database, but I'm also not sure if that's a good idea...
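For what it's worth, the simplest form of such a table is just an in-memory map from virtual path to a descriptor of where the real bytes live, so resolving a file costs one dictionary lookup and no extra disk access. A minimal sketch (all type and member names here are made up for illustration):

```csharp
using System;
using System.Collections.Concurrent;

enum StorageKind { Local, Remote, InMemory }

// Describes where the real bytes of one virtual file live.
record FileEntry(StorageKind Kind, string? LocalPath, Uri? RemoteUri, byte[]? CachedBytes);

class VirtualFileTable
{
    private readonly ConcurrentDictionary<string, FileEntry> _entries =
        new(StringComparer.OrdinalIgnoreCase);

    public void Add(string virtualPath, FileEntry entry) => _entries[virtualPath] = entry;

    // One in-memory lookup per open; no placeholder file is ever read.
    public FileEntry? Resolve(string virtualPath) =>
        _entries.TryGetValue(virtualPath, out var entry) ? entry : null;
}
```

Persisting this table across mounts (e.g. serializing it to a single file on unmount) is a separate problem, but it keeps the hot path off the disk.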
What should I do?
Thanks!
P.S. I only have very basic knowledge of file systems.

Best way to store infrequently changing information to use in applications?

I have a list of store information.
Each store has a region, a zone, and a store number.
The way I've been doing this now is:
I have a Store class, and a List with elements of type Store.
In each application, I have to add this long list of StoreList.Add(new Store() { ... }), which looks bad, is sloppy, and is totally inconvenient. So I was looking for a way to use this information across multiple solutions/projects.
I don't want to use a database because I don't really want additional overhead in what could be simple scripts. Is a DLL something I would use in this circumstance?
You said you don't want to use a database, but it's probably not a bad choice. You can store the information in an XML file and read that on application startup. Keeping such information in a class inside a DLL would complicate things: if you have to modify a store number, you have to deploy that DLL to every computer running your application. A modified XML file would have to be distributed to those computers as well, but it would be easier IMO.
Also, if you keep that information in some central database and load it on application start, it gives you a much better way of maintaining your application, with fewer changes on the client side / at deployment.
The problem is not whether you want a database or not, but whether you need to keep your data once your application closes.
If you do, you can use a database (it could be an embedded one) or a file (most probably XML).
If all your data is stored in code (not the best option, really) then yes, you can move that code to a class library project and distribute it wherever you need it.
But still, at the very least this is what I'd do (a sketch follows the list):
Move your list items to an XML file
Create a class that reads this file and loads it into the list
Add the XML file to your project and mark it as an embedded resource (so it'll be packed with the DLL)
You can read the XML file from the assembly directly (check here on SO how to do it)
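A minimal sketch of those steps, assuming a hypothetical resource name ("MyLib.Stores.xml") and attribute layout; both would need adjusting to your project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Xml.Linq;

public record Store(string Region, string Zone, int Number);

public static class StoreCatalog
{
    // Reads the XML that was packed into the DLL as an embedded resource.
    public static List<Store> Load()
    {
        var asm = Assembly.GetExecutingAssembly();
        using var stream = asm.GetManifestResourceStream("MyLib.Stores.xml")
            ?? throw new InvalidOperationException("Embedded resource not found.");
        return XDocument.Load(stream).Root!
            .Elements("Store")
            .Select(e => new Store(
                (string)e.Attribute("region")!,
                (string)e.Attribute("zone")!,
                (int)e.Attribute("number")!))
            .ToList();
    }
}
```

Every application then just references the DLL and calls StoreCatalog.Load() instead of repeating the long Add(...) list.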
Hope that helps

Should I store localization content in the application state

I am developing my first multilingual C# site and everything is going OK except for one crucial aspect. I'm not 100% sure what the best option is for storing strings (typically single words) that will be translated by code from my code-behind pages.
On the front end of the site I am going to use ASP.NET resource files for the wording on the pages. This part is fine. However, this site will make XML calls and the XML responses are only ever in English. I have been given an Excel sheet with all the words that will be returned by the XML, broken into the different languages, but I'm not sure how best to store/access this information. There are roughly 80 words x 7 languages.
I am thinking about creating a dictionary object for each language, created by my global.asax file at application start and kept in memory. The plus side of doing this is that each dictionary object will only have to be created once (until IIS restarts) and can be accessed by any user without needing to be rebuilt; the downside is that I'd have 7 dictionary objects constantly stored in memory. The server is a Win 2008 64-bit with 4GB of RAM, so should I even be concerned about the memory taken up by this method?
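For reference, a minimal sketch of that idea (the sample entries are placeholders; the real rows would come from your Excel sheet):

```csharp
using System.Collections.Generic;

// One dictionary per language, built once at application start
// (e.g. Application_Start in global.asax) and shared by all requests.
public static class Translations
{
    // language code -> (English word from the XML -> translated word)
    private static Dictionary<string, Dictionary<string, string>> _byLanguage =
        new Dictionary<string, Dictionary<string, string>>();

    public static void Load()
    {
        _byLanguage = new Dictionary<string, Dictionary<string, string>>
        {
            { "de", new Dictionary<string, string> { { "red", "rot" } } },
            { "fr", new Dictionary<string, string> { { "red", "rouge" } } },
        };
    }

    public static string Translate(string language, string englishWord)
    {
        Dictionary<string, string> table;
        string word;
        if (_byLanguage.TryGetValue(language, out table) &&
            table.TryGetValue(englishWord, out word))
            return word;
        return englishWord; // fall back to the English word from the XML
    }
}
```

At roughly 560 short strings in total, all seven dictionaries together amount to a few tens of kilobytes, which is negligible next to 4GB of RAM.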
What do you guys think would be the best way to store/retrieve different language words that would be used by all users?
Thanks for your input.
Rich
From what you say, you are looking at 560 words which need to differ based on locale. This is a drop in the ocean. The resource file method you have contemplated is fit for purpose and I would recommend using it. Resource files integrate with controls, so you will be making the most of them.
If it did trouble you, you could put them in a sliding cache (of 20 minutes, for example), but I do not see anything wrong with your choice of solution.
IMO
Cheers,
Andrew
P.S. Have a read through this to see how you can find and bind values in different resource files to controls and literals, and use them programmatically.
http://msdn.microsoft.com/en-us/magazine/cc163566.aspx
As long as you are aware of the impact of doing so, then yes, storing this data in memory would be fine (as long as you have enough of it). Once you know what is appropriate for the current user, tossing it into memory is fine. You might look at something like MemCached Win32 or Velocity, though, to offload the storage to another app server. Use this even in your local application for the time being; that way, when it is time to push this to another server or grow your app, you have a clear separation of concerns defined at your caching layer. Keep in mind that the more languages you support, the more stuff you are storing in memory, so keep an eye on the amount of data held in memory on your lone app server, as this could become overwhelming in time. Also, make sure that the keys you are using are specific to the language; otherwise you might find that you are serving a menu in German to an English user.
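On that last point, a tiny illustration of language-qualified cache keys (the key scheme is made up):

```csharp
// Include the language code in every cache key so entries for
// different languages can never collide.
static class LocalizedCacheKeys
{
    public static string For(string language, string item) => $"menu:{language}:{item}";
}

// e.g. cache.Set(LocalizedCacheKeys.For("de", "main-menu"), germanMenu);
//      cache.Set(LocalizedCacheKeys.For("en", "main-menu"), englishMenu);
```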

A different approach for anti-virus. Am I going in the right direction?

I'm currently conceiving a system that works like an anti-virus, but also uses white-listing, i.e.
preventing viruses from running by keeping a database of known legitimate programs.
Yes, there is Windows UAC, but many viruses still "work around" it. I'm planning a more reliable system.
My system also has a database of known threats (cryptographic hashes).
Is this approach viable?
What are the possible loopholes in this approach?
I understand that there have been a lot of attempts at this. But I still want to try it out.
I'm planning to use C# and .NET for a prototype; maybe I'll move on to C++ for performance later.
Update:
Thank you all for your time and thoughts.
I decided to do some more research in this area before actually designing something,
especially the zero-day threat problem, as pointed out below.
What about DLLs used by executables? Do you hash them too? A virus can replace a DLL.
This has been brought up before, and there are products out there which do that. (Faronics Anti-Executable works like this)
There are two main problems with this approach:
A virus can embed itself into any file, not just EXEs. Programs can load DLLs and other bits of code (macros, scripts, etc.), and programs can contain bugs (such as buffer overflows) which can be exploited by malicious documents and other files.
Every time you patch a system or otherwise legitimately modify the software, you also need to update the white list.
There are products like AppSense Application Manager that do this already. It was temporarily pitched as a security product, but they changed tack and focused it on licensing. I think that's because it didn't work too well as a security product.
If you are planning to work with a limited set of applications and you can work with the application developers, you can use a code-signing model. You can find a similar approach in most mobile operating systems. You have to sign all the executable modules, including libraries, and verify that they have a valid signature and have not been modified, using a root certificate.
If you are only planning to white-list applications based on their hash value, you need to make sure your white-listed applications verify any modules they use before loading them. Even if the applications/installation files are digitally signed, that does not guarantee that a library won't be modified later in a malicious way.
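A minimal sketch of that hash-based check (how the white-list itself is populated and protected from tampering is left out; as noted above, every library a program loads would need the same verification):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class WhiteList
{
    // Known-good SHA-256 hashes, assumed to come from a trusted source.
    static readonly HashSet<string> KnownGoodHashes =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase);

    public static bool IsAllowed(string modulePath)
    {
        using var sha = SHA256.Create();
        using var stream = File.OpenRead(modulePath);
        string hash = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
        return KnownGoodHashes.Contains(hash);
    }
}
```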
In reality, it is not even enough to verify only executables and libraries. For example, the Xbox Linux hack utilizes a malicious save file: a specially prepared save file that causes a legitimate and signed application to behave in unexpected ways. And of course it is not possible to white-list a save file based on its hash value.
Another problem with keeping a database is zero-day attacks. You need to be ahead of the curve in creating hash values for new attacks and propagating those updates to your users; otherwise they will be vulnerable to all new attacks. Unless you allow only white-listed applications to be executed, and that would be really restrictive.
IMHO, it is really difficult to build such a system on an open platform. Good luck with it.
