I am going to create an application that will run on the client's computer. The program will allow the client to use the software N times; after that, to use the software again, the client will need to buy X more uses. It would be like buying a license or token (I don't know if they're the same thing or not, since my English isn't that good).
I was thinking about creating a .lic or .txt file (or anything else) which would be encrypted, and when replaced with a new .lic or .txt, etc., it would change the number of times the client would be able to use the software.
The thing is, I don't think that method is very reliable, since, even if the file is encrypted, the client could somehow crack it and understand its contents.
Could anybody help me in figuring out a solution for this?
PS: The software can't be validated via the internet; the client must be able to use the software offline. If that weren't the case, I'd validate the software's usage online and wouldn't be having this problem.
First, I must agree with the comments that simply say this will not be secure. They are correct: it will be easy for a developer to work around. Still, there may be a valid need or desire to stop the other 99% of the population. This is the same battle that DRM faces; there are always those 1%'ers willing to put in the time to decipher what you're doing and work around it. But let's move on to how you would achieve this...
Step 1 - You need a 'counter' to know how many times your application has been run. This, unfortunately, can only be obfuscated from the user, since your application must be able to read the value. Often this obfuscation is achieved by hiding values in several places, both in the registry and on the file system. Occasionally you will find this information 'encrypted' (really, obfuscated by an encryption algorithm) using information available on the host: the BIOS, CPU type, HDD ID, etc.
Ultimately this storage and obfuscation of the execution counter is your 'secret sauce', and the only thing that makes it difficult to reverse is keeping what you are doing a closely guarded secret (as most forms of obfuscation rely on secrecy). Because of this, there isn't much value I could provide by offering you a specific solution; once posted here, it's no longer a secret :)
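That said, a generic illustration of the mechanics is harmless, since it isn't anyone's secret sauce. The sketch below keeps a DPAPI-protected counter in two places (registry and a file) and treats disagreement as tampering; the paths, value name, and entropy bytes are all hypothetical, and it needs a reference to System.Security.

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using Microsoft.Win32;

    static class UsageCounter
    {
        const string RegPath = @"Software\MyApp\State";   // hypothetical registry location
        static readonly string FilePath = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            "MyApp", "state.bin");                         // hypothetical file location
        static readonly byte[] Entropy = { 0x12, 0x34, 0x56, 0x78 };   // extra obfuscation salt

        public static int Read()
        {
            byte[] fileBlob = File.Exists(FilePath) ? File.ReadAllBytes(FilePath) : null;
            byte[] regBlob;
            using (var key = Registry.CurrentUser.CreateSubKey(RegPath))
                regBlob = key.GetValue("c") as byte[];

            int fromFile = Decode(fileBlob);
            int fromReg = Decode(regBlob);
            if (fromFile != fromReg)
                throw new InvalidOperationException("Counter stores disagree - possible tampering.");
            return fromFile;
        }

        public static void Write(int count)
        {
            // DPAPI ties the blob to the current user; the entropy just makes it less obvious.
            byte[] blob = ProtectedData.Protect(BitConverter.GetBytes(count), Entropy,
                                                DataProtectionScope.CurrentUser);
            Directory.CreateDirectory(Path.GetDirectoryName(FilePath));
            File.WriteAllBytes(FilePath, blob);
            using (var key = Registry.CurrentUser.CreateSubKey(RegPath))
                key.SetValue("c", blob, RegistryValueKind.Binary);
        }

        static int Decode(byte[] blob)
        {
            if (blob == null) return 0;
            byte[] raw = ProtectedData.Unprotect(blob, Entropy, DataProtectionScope.CurrentUser);
            return BitConverter.ToInt32(raw, 0);
        }
    }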
Step 2 - Once you've got this counter working, you will need to provide a 'license' to the user. This is actually the easy part, and it's where PKI cryptography can serve you well. What you want is a private key only you control, while your client software has the public key hard-coded somewhere. You then use your private key to digitally sign a license file for a client. When your client loads the license file, it verifies the signature to ensure that the license file was signed by the corresponding private key, which, in theory (since only you have access to that key), means that you authorized the license.
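For concreteness, here is a rough sketch of that sign/verify flow using the .NET RSA APIs. The file names and the XML key format are just for the example; you would generate the key pair once and keep the private half strictly on your side.

    using System.IO;
    using System.Security.Cryptography;

    static class LicenseSigner
    {
        // Runs on *your* machine only: produces a .sig file next to the license file.
        public static void Sign(string licensePath, string privateKeyXml)
        {
            using (var rsa = RSA.Create())
            {
                rsa.FromXmlString(privateKeyXml);
                byte[] signature = rsa.SignData(File.ReadAllBytes(licensePath),
                                                HashAlgorithmName.SHA256,
                                                RSASignaturePadding.Pkcs1);
                File.WriteAllBytes(licensePath + ".sig", signature);
            }
        }

        // Runs inside the client; publicKeyXml is hard-coded into the application.
        public static bool Verify(string licensePath, string publicKeyXml)
        {
            using (var rsa = RSA.Create())
            {
                rsa.FromXmlString(publicKeyXml);
                return rsa.VerifyData(File.ReadAllBytes(licensePath),
                                      File.ReadAllBytes(licensePath + ".sig"),
                                      HashAlgorithmName.SHA256,
                                      RSASignaturePadding.Pkcs1);
            }
        }
    }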
Step 3 - Now you need to provide a way to verify that the counter has not exceeded the licensed number of uses. This should be straightforward.
Problems and Solutions
The most obvious attack on such a solution will be reverse engineering the code. You will need to address this with a .NET obfuscation library or by writing unmanaged code.
The next most likely attack is using a debugger to skip past this verification. There are lots of anti-debugging articles out there. The most complete I've found is titled "An Anti-Reverse Engineering Guide".
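As a taste of what those articles cover, a couple of cheap checks (easily bypassed, but they raise the bar) look like this:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class AntiDebug
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CheckRemoteDebuggerPresent(IntPtr hProcess, ref bool isDebuggerPresent);

        public static bool DebuggerDetected()
        {
            if (Debugger.IsAttached)
                return true;                              // a managed debugger is attached

            bool present = false;                         // check for a native/remote debugger
            CheckRemoteDebuggerPresent(Process.GetCurrentProcess().Handle, ref present);
            return present;
        }
    }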
Another attack that should be considered is modification of your executable. Sign your executable and verify its signature, just as you will for the license, to prevent the code from being edited directly.
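One simple way to apply the same idea (still bypassable, since the check itself can be patched) is to ship a detached RSA signature of the executable and verify it at startup with the same public key used for the license. The .sig companion file here is purely hypothetical:

    using System.IO;
    using System.Reflection;
    using System.Security.Cryptography;

    static class SelfCheck
    {
        public static bool ExecutableLooksUnmodified(string publicKeyXml)
        {
            string exePath = Assembly.GetEntryAssembly().Location;
            string sigPath = exePath + ".sig";            // hypothetical detached signature file

            if (!File.Exists(sigPath))
                return false;

            using (var rsa = RSA.Create())
            {
                rsa.FromXmlString(publicKeyXml);
                return rsa.VerifyData(File.ReadAllBytes(exePath),
                                      File.ReadAllBytes(sigPath),
                                      HashAlgorithmName.SHA256,
                                      RSASignaturePadding.Pkcs1);
            }
        }
    }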
Your storage of the execution counter will be an obvious target; make sure you store it in multiple places, and if any of them have been tampered with you can take appropriate action.
Lastly, all of this will be insufficient to prevent a determined individual from defeating your licensing strategy. Accept that now and implement only as much as you feel is required, based on both the computer competency of your average user and the amount of lost revenue versus the cost of implementation. In other words, say you implement something really simple and basic and expect that 20% of your users could figure it out. Based on your clients, you believe that fewer than a quarter of that 20% would actually circumvent your DRM rather than pay for the license. So you expect to lose out on 5% of your possible revenue; if you make $1 million a year, that means you lose $50k in revenue. Now ask yourself: if I spend X dollars of my time making this harder to circumvent, at what point does it become a negative return? At an expected loss of $50k, you certainly wouldn't want to spend a year working on DRM.
Honestly speaking, I think most applications that employ DRM could do with a great deal less effort. If your application is priced right, people will pay for it. The people that would circumvent your DRM probably wouldn't buy your application anyway, so you haven't really lost anything. If I were you, I'd set aside a fixed amount of time to spend on this problem (a week?) and do only what you can within that limit.
This question arises because when someone wants to use a flat file as a database, most people will say "is a database not an option?" and things like that. This makes me think that most people believe popular database software is reliable at handling data storage.
However, since database engines also write their data stores to files (or allow me to say "flat files"), I am confused as to why most people believe that protection from data loss is almost completely guaranteed in database engines.
I suppose that database software uses features like Windows' CreateFile() function with the FILE_FLAG_WRITE_THROUGH option set; yet Microsoft states in its documentation that "Not all hard disk hardware supports this write-through capability."
Then why would a database engine be more reliable than my C# code that also uses the unmanaged CreateFile() function to write to disk directly, using some algorithm (like this SO way) to prevent damage to data? Especially when writing small files and appending small amounts of data to them at some future time? (Note: I'm not comparing robustness, features, etc., just reliability of data integrity.)
The key to most database systems' integrity is the log file.
As well as updating the various tables/data stores/documents, they also write all operations and associated data to a log file.
In most cases, when the program commits, it waits until all operations are written (really written!) to the log file. If anything happens after that, the database can be rebuilt using the log file data.
Note -- you could get something similar using standard disk I/O and calling flush() at the appropriate points. However, you could never guarantee the state of the file (many I/Os could have taken place before you called flush()), and you could never recover to a point in time, since you have no copy of deleted records or of a previous version of an updated record.
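To make the difference concrete, here is a bare-bones sketch of the write-ahead idea (nothing like a real database log, but it shows the ordering that matters): the operation is forced to the log before the data file is touched, so a crash in between can be repaired by replaying the log.

    using System.IO;
    using System.Text;

    static class TinyLog
    {
        public static void Commit(string logPath, string dataPath, string record)
        {
            // 1. Append the intended change to the log and flush it through to disk.
            using (var log = new FileStream(logPath, FileMode.Append, FileAccess.Write))
            {
                byte[] entry = Encoding.UTF8.GetBytes(record + "\n");
                log.Write(entry, 0, entry.Length);
                log.Flush(true);   // true = flush the OS buffers to the device as well
            }

            // 2. Only now apply the change to the data file. If we crash between
            //    step 1 and step 2, recovery can replay the logged entry.
            File.AppendAllText(dataPath, record + "\n");
        }
    }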
Of course you can write a very secure piece of C# code that handles all possible exceptions and faults, that calculates hash codes and checks them against everything it is going to write to disk, and that manages all the quirks of every operating system it's deployed on with respect to file caching, disk write buffering, and so on.
The question is: why should you?
Admittedly, a DB is not always the right choice if you just want to write data to disk. But if you want to store data consistently and safely, and most importantly, without losing too much of your time in the nitty-gritty details of I/O operations, then you should use some kind of well-established and tested piece of code that someone else wrote and took the time to debug (hint: a database is a good choice).
See?
Also, there are databases, like SQLite, that are perfect for fast, installation-less use inside a program. Use them or not, it's your choice, but I wouldn't spend my time reinventing the wheel if I were you.
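To show how little ceremony that takes, here is a minimal example assuming the Microsoft.Data.Sqlite NuGet package (System.Data.SQLite works much the same way):

    using Microsoft.Data.Sqlite;

    class SqliteExample
    {
        static void Main()
        {
            using (var connection = new SqliteConnection("Data Source=app.db"))
            {
                connection.Open();

                var create = connection.CreateCommand();
                create.CommandText = "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)";
                create.ExecuteNonQuery();

                var insert = connection.CreateCommand();
                insert.CommandText = "INSERT INTO notes (body) VALUES ($body)";
                insert.Parameters.AddWithValue("$body", "hello");
                insert.ExecuteNonQuery();   // journaling, atomicity, and recovery are SQLite's problem, not yours
            }
        }
    }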
I'm working on a C# library project that will process transactions between SQL and QuickBooks Enterprise, keeping both data stores in sync. This is great and all, but the initial sync is going to be a fairly large set of transactions. Once the initial sync is complete, transactions will sync as needed for the remainder of the life of the product.
At this point, I'm fairly familiar with the SDK using QBFC, as well as the various resources and sample code available via the OSR, the ZOMBIE project by Paul Keister (thanks, Paul!), and others. All of these resources have been a huge help. But one thing I haven't come across yet is whether there is a limit, or a substantial (or deadly) performance cost, associated with sending large amounts of data in a single Message Set Request. As I understand it, the database on QuickBooks' end is just a SQL database as well, but I don't want to make any assumptions.
Again, I just need to hit this hard once, so I don't want to engineer a separate solution to do the import. This also affords me an opportunity to test a copy of live data against my library, logs and all.
For what it's worth, this is my first ever post on Stack, so feel free to educate me on posting here if I've steered off course in any way. Thanks.
For what it's worth, I found that in a network environment (as opposed to everything happening on one box) it's better to have a larger MsgSetRequest than a smaller one. Of course everything has its limits, and maybe I just never hit it. I don't remember exactly how big the request set was, but it was big. The performance improvement was easily 10 to 1 or better.
If I were you, I'd build some kind of iteration into my design from the beginning (to iterate through your SQL data set). Start with a big batch size that will do it all at once, and if that breaks, just scale it back until you find something that works.
I know this answer doesn't have the detail you're looking for, but hopefully it will help.
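If it helps to picture the scale-back idea, the loop could be as simple as the sketch below, where buildAndSendMsgSet stands in for whatever builds and sends one QBFC message set per batch of SQL rows (the halving strategy is just one possible choice):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class BatchSync
    {
        public static void Run<T>(IReadOnlyList<T> rows,
                                  Action<IReadOnlyList<T>> buildAndSendMsgSet,
                                  int initialBatchSize = int.MaxValue)
        {
            int batchSize = Math.Max(1, Math.Min(initialBatchSize, rows.Count));
            int index = 0;

            while (index < rows.Count)
            {
                var batch = rows.Skip(index).Take(batchSize).ToList();
                try
                {
                    buildAndSendMsgSet(batch);   // one MsgSetRequest per batch
                    index += batch.Count;
                }
                catch (Exception) when (batchSize > 1)
                {
                    batchSize /= 2;              // too big? halve it and retry the same chunk
                }
            }
        }
    }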
I have a process that will have some important values in memory. I don't want anyone to be able to read the memory of my process and obtain those values. So I tried to create a program that would look at the list of running programs and determine whether any of them were "debuggers", etc. But I realized that someone could just write a quick program to dump the memory of my process. I know several processes on my system have their memory protected. How could I obtain this as well? (PS: I'm using C#)
Any application that runs under a user with sufficient privileges (e.g. local administrator) can call ReadProcessMemory and read your process's memory at will, any time, without being attached to your process's debugging port, and without your process being able to prevent, or even detect, this. And I'm not even going into what a kernel-mode driver can do...
Ultimately, all the available solutions are either snake oil or just a way to obfuscate the problem by raising the bar and making it harder. Some make it really hard, but none make it bullet-proof. In the end, you cannot hide anything from a user who has physical access to the machine and sufficiently elevated privileges.
If you don't want users to read something, simply don't have it on the user's machine. Use a service model where your IP lives on a server and users access it via the internet (i.e. web services).
First of all, there will always be a way to dump the memory image of your program. Your program can only make it harder, never impossible. That said, there may be ways to 'hide' the values. It is generally considered hard to do and not worth the trouble, but there are programs which encrypt those values in memory. However, to be able to use them, they need to decrypt them temporarily and re-encrypt (or discard) them afterwards.
While encryption is easy with the .NET Framework, discarding the decrypted value is not an easy thing to do in C#. In C, you would allocate a chunk of memory to store the decrypted values and clear it (by writing zeros or random data to it) before freeing it. In C#, there is no guarantee that your data won't be copied somewhere (caching, garbage collection), and you won't be able to clear it. However, as eulerfx noted in a comment, in .NET 4.0 SecureString may be used. How safe that is, I don't know.
As you may see, there will always be a short time where the value lies in memory unencrypted, and that is the vulnerability here.
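As an illustration of that pattern on the .NET Framework, ProtectedMemory (in System.Security.dll) can keep a value encrypted between uses; note that the buffer length must be a multiple of 16 bytes, and the plaintext still exists in memory for the short window in which it is used:

    using System;
    using System.Security.Cryptography;

    class InMemorySecret
    {
        readonly byte[] buffer = new byte[32];   // ProtectedMemory requires a multiple of 16 bytes

        public InMemorySecret(byte[] value)      // assumes the value fits in the buffer
        {
            Array.Copy(value, buffer, value.Length);
            ProtectedMemory.Protect(buffer, MemoryProtectionScope.SameProcess);
        }

        public void Use(Action<byte[]> action)
        {
            ProtectedMemory.Unprotect(buffer, MemoryProtectionScope.SameProcess);
            try
            {
                action(buffer);                  // the vulnerable window: plaintext in memory
            }
            finally
            {
                ProtectedMemory.Protect(buffer, MemoryProtectionScope.SameProcess);
            }
        }
    }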
I think the best way to do it is to employ a commercial solution such as the one in this PDF brochure (this is their website). You may be better off going down this route if you really care about protecting the application from sniffing, IP theft, etc., instead of rolling your own solution...
Edit: I would not go down the route of kidding myself that whatever solution I craft up will be tamper-proof, crack-proof, and idiot-proof. Leave that to a company like the Arxan I mentioned (I'm not a sales rep, I'm just giving an example). Sure, it might be costly, but you can sleep better at night knowing it is much harder for a cracker to break than having no solution at all...
A very similar question has also been asked here on SO, in case you are interested, but as we will see, the accepted answer to that question does not always hold (and it never holds for my application's usage pattern).
The performance-determining code consists of the FileStream constructor (to open a file) and a SHA1 hash (the .NET Framework implementation). The code is pretty much a C# version of what was asked in the question linked above.
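For reference, the measured pattern is roughly the following (the real code is in the linked question; this sketch just makes the two phases explicit):

    using System.IO;
    using System.Security.Cryptography;

    static class Hasher
    {
        public static byte[] HashFile(string path)
        {
            using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))  // ~80% of Case 1
            using (var sha1 = SHA1.Create())
            {
                return sha1.ComputeHash(stream);                                        // ~18% of Case 1
            }
        }
    }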
Case 1: The application is started either for the first time or for the Nth time, but with a different target file set. The application is now told to compute the hash values of files that were never accessed before.
~50ms
80% FileStream constructor
18% hash computation
Case 2: The application is now fully terminated and started again, and asked to compute the hash of the same files:
~8ms
90% hash computation
8% FileStream constructor
Problem
My application always hits Case 1. It will never be asked to re-compute a hash on a file that was already visited once.
So my rate-determining step is the FileStream constructor! Is there anything I can do to speed up this use case?
Thank you.
P.S. Stats were gathered using JetBrains profiler.
... but with a different target file set.
That's the key phrase: your app will not be able to take advantage of the file system cache, like it did in the second measurement. The directory info can't come from RAM because it hasn't been read yet; the OS always has to fall back to the disk drive, and that is slow.
Only better hardware can speed it up. 50 ms is about the standard amount of time needed for a spindle drive; 20 ms is about as low as such drives can go. Read-head seek time is the hard mechanical limit. That's easy to beat today: SSDs are widely available and reasonably affordable. The only problem with them is that once you get used to one, you never go back :)
The file system and/or disk controller will cache recently accessed files/sectors.
The rate-determining step is reading the file, not constructing a FileStream object, and it's completely normal that it will be significantly faster on the second run when data is in the cache.
Off-track suggestion, but this is something that I have done a lot, and it made our analyses 30%-70% faster:
Caching
Write another piece of code that will:
iterate over all the files;
compute the hash; and,
store it in another index file.
Now, don't call the FileStream constructor to compute the hash when your application starts. Instead, open the (expectedly much smaller) index file and read the precomputed hash off it.
Further, if these files are log files (or similar) that are freshly created each time before your application starts, add code in the file creator to also update the index file with the hash of the newly created file.
This way your application can always read the hash from the index file only.
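A possible shape for that index file (the format and names are mine, not from the question): one "path|hash" line per file, built by a separate pass and loaded into a dictionary at startup.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Security.Cryptography;

    static class HashIndex
    {
        public static void Build(IEnumerable<string> files, string indexPath)
        {
            using (var writer = new StreamWriter(indexPath))
            using (var sha1 = SHA1.Create())
            {
                foreach (string file in files)
                {
                    using (var stream = File.OpenRead(file))
                        writer.WriteLine(file + "|" + BitConverter.ToString(sha1.ComputeHash(stream)));
                }
            }
        }

        public static Dictionary<string, string> Load(string indexPath)
        {
            var map = new Dictionary<string, string>();
            foreach (string line in File.ReadLines(indexPath))
            {
                string[] parts = line.Split('|');
                map[parts[0]] = parts[1];        // path -> precomputed hash
            }
            return map;
        }
    }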
I concur with @HansPassant's suggestion of using SSDs to make your disk reads faster. This answer and his are complementary; you can implement both to maximize performance.
As stated earlier, the file system has its own caching mechanism which perturbs your measurement.
However, the FileStream constructor performs several tasks which are expensive the first time and require accessing the file system (and therefore data which might not be in the cache). For explanatory purposes, you can take a look at the code and see that the CompatibilitySwitches class is used to detect sub-feature usage. Together with this class, Reflection is heavily used, both directly (to access the current assembly) and indirectly (for CAS-protected sections and security link demands). The Reflection engine has its own cache and requires accessing the file system when that cache is empty.
It feels a little odd that the two measurements are so different. We currently see something similar on our machines, which are equipped with antivirus software configured with real-time protection. In this case, the antivirus software sits in the middle, and the cache is hit or missed the first time depending on the implementation of that software.
The antivirus software might decide to aggressively check certain image files, such as PNGs, due to known decoder vulnerabilities. Such checks introduce additional slowdown, and the time is accounted to the outermost .NET class, i.e. the FileStream class.
Profiling using native symbols and/or kernel debugging should give you more insight.
Based on my experience, what you describe cannot be mitigated, as there are multiple hidden layers outside our control. Depending on your usage, which is not perfectly clear to me right now, you might turn the application into a service, so that subsequent requests can be served faster. Alternatively, you could batch multiple requests into a single call to achieve an amortized, reduced cost.
You should try the native FILE_FLAG_SEQUENTIAL_SCAN flag; you will have to P/Invoke CreateFile in order to get a handle and pass it to the FileStream constructor.
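A sketch of what that P/Invoke looks like (the flag values are the standard Win32 constants; treat the details as untested):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class SequentialOpen
    {
        const uint GENERIC_READ = 0x80000000;
        const uint FILE_SHARE_READ = 0x00000001;
        const uint OPEN_EXISTING = 3;
        const uint FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000;

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFile(string lpFileName, uint dwDesiredAccess,
            uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
            uint dwFlagsAndAttributes, IntPtr hTemplateFile);

        public static FileStream OpenSequential(string path)
        {
            SafeFileHandle handle = CreateFile(path, GENERIC_READ, FILE_SHARE_READ,
                IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, IntPtr.Zero);

            if (handle.IsInvalid)
                throw new IOException("CreateFile failed", Marshal.GetLastWin32Error());

            // The FileStream takes the handle and disposes it along with the stream.
            return new FileStream(handle, FileAccess.Read);
        }
    }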
I have a C# .NET app. This application is tightly coupled to a piece of hardware. Think ATM or drive-up kiosk kind of thing. I want a way for my application to ensure it is being run on our hardware. Our initial plan was to get the serial number of the CPU, OS, HD, or other hardware with WMI, digitally sign that, and ship the signature with the software. The application would then have the public key in it to verify the signature. Is there a better way to do this?
Update 1
We don't want a dongle or a HASP. Nothing external to the system.
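The WMI part of that plan is straightforward; here is a sketch using System.Management (Win32_Processor and Win32_BIOS are just two of the classes you could combine):

    using System.Management;
    using System.Text;

    static class HardwareId
    {
        public static string Get()
        {
            var sb = new StringBuilder();

            using (var cpus = new ManagementObjectSearcher("SELECT ProcessorId FROM Win32_Processor"))
            {
                foreach (ManagementObject cpu in cpus.Get())
                    sb.Append(cpu["ProcessorId"]);
            }

            using (var bios = new ManagementObjectSearcher("SELECT SerialNumber FROM Win32_BIOS"))
            {
                foreach (ManagementObject b in bios.Get())
                    sb.Append(b["SerialNumber"]);
            }

            return sb.ToString();   // this is the value you would sign and ship alongside the app
        }
    }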
Yes, you would have a semi-safe system. It can prevent running on different hardware. It will also prevent some forms of maintenance of that hardware.
It will, as usual, not prevent anyone from decompiling and changing your software.
We do something similar for software licensing by signing an XML file, although ours isn't tied to any hardware. The same concept applies, and it works well.
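For anyone wanting to do the same, the SignedXml class (System.Security.Cryptography.Xml) handles the enveloped-signature plumbing; a condensed sketch of signing on your side and verifying on the client:

    using System.Security.Cryptography;
    using System.Security.Cryptography.Xml;
    using System.Xml;

    static class XmlLicense
    {
        // Your side: sign the license XML with the private key and embed the signature.
        public static void Sign(string path, RSA privateKey)
        {
            var doc = new XmlDocument { PreserveWhitespace = true };
            doc.Load(path);

            var signedXml = new SignedXml(doc) { SigningKey = privateKey };
            var reference = new Reference("");                       // "" = sign the whole document
            reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
            signedXml.AddReference(reference);
            signedXml.ComputeSignature();

            doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
            doc.Save(path);
        }

        // Client side: verify with the public key shipped inside the application.
        public static bool Verify(string path, RSA publicKey)
        {
            var doc = new XmlDocument { PreserveWhitespace = true };
            doc.Load(path);

            var signedXml = new SignedXml(doc);
            var signatureNode = (XmlElement)doc.GetElementsByTagName("Signature")[0];
            signedXml.LoadXml(signatureNode);
            return signedXml.CheckSignature(publicKey);
        }
    }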
You will also need to protect your .NET code using some kind of obfuscation tool; we use {smartassembly}, but there are several others out there.
Keep in mind that no matter what you do, given enough time and resources, someone can bypass it.
That doesn't mean you should not protect your intellectual property, but there is a point where you get diminishing returns and cause more trouble for you and your customers than it's worth.