I'm trying to retrieve all keys matching a pattern with StackExchange.Redis.
Code
KEYS *o*
The project homepage links to Where are KEYS, SCAN, FLUSHDB etc?, which gives full details on how to access this and why it isn't on IDatabase. I should point out that you should avoid KEYS on a production server. The library will automatically try to use SCAN instead if it is available, which is less harmful but should still be treated with some caution. It would be preferable to explicitly store related keys in a set or hash.
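A minimal sketch of the server-side approach, assuming a local Redis instance (the connection string and database number are illustrative):

```csharp
using System;
using System.Linq;
using StackExchange.Redis;

class Program
{
    static void Main()
    {
        // Keys lives on the server API (IServer), not on IDatabase
        var muxer = ConnectionMultiplexer.Connect("localhost:6379");
        var server = muxer.GetServer(muxer.GetEndPoints().First());

        // Uses SCAN under the hood when the server supports it,
        // falling back to KEYS on older servers
        foreach (var key in server.Keys(database: 0, pattern: "*o*"))
            Console.WriteLine(key);
    }
}
```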
Related
I have a very large ASP.Net C# ver 4.8 solution with several projects. The application uses Redis as remote cache. I want to audit the code on regular basis (once a quarter) to get full list of all redis keys. Once I have baseline after initial run, I want to compare and see any recently added keys etc.
Code interacts with Redis through an abstracted layer, CacheManager. Some typical calls look like this:
cacheManager.Set("key1", "value1");
myCache.Get("key1");
cache.Set("key2", "value2");
Where cacheManager, myCache, cache etc. are local instances of CacheManager.
I want to get a full list of CacheManager usage along with keys and values. Should I look into Static Code Analysis options or something else?
You can approach this from two sides. If you want to find all Set calls on your CacheManager, right-click the class name and select Find All References (Ctrl+K, R by default). Then go through that list and check all keys.
If the Redis instance is only used by this application, you could very well open up redis-cli and output all keys to a txt file and analyse it that way. It might also be faster.
redis-cli keys '*' > myKeys.txt
If all your keys are prefixed you can of course adapt the query to keys 'prefix:*' or whatever; the asterisk is the wildcard. (Quote the pattern so your shell doesn't expand it before redis-cli sees it.)
We have a requirement that the value of a particular field in documents of a collection needs to be unique. This collection is partitioned, so I cannot write a trigger or stored procedure to ensure this before each insert (as I want to ensure uniqueness for the whole collection, not just the partition).
I've come across the following links, which mention a UniqueKeyPolicy that can be added to a collection, but I couldn't find any examples or more information in the official documentation.
What's the best way to do this?
Information I found:
https://github.com/Azure/azure-documentdb-dotnet/blob/master/changelog.md - Adds the ability to specify unique indexes for the documents by using UniqueKeyPolicy property on the DocumentCollection.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.uniquekeypolicy.uniquekeys?view=azure-dotnet
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.uniquekey?view=azure-dotnet
Update: After some trying, I was able to do this using the DocumentClient from my .NET code while creating a new (non-partitioned) collection. But is there a way to add this constraint to an existing collection? Also, I couldn't find a way of doing this from the portal. Is there any particular place I should be looking?
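For reference, creating a collection with a unique key through the .NET SDK looks roughly like this (database, collection, and path names here are illustrative; client is an existing DocumentClient):

```csharp
using System.Collections.ObjectModel;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var collection = new DocumentCollection { Id = "users" };
collection.UniqueKeyPolicy = new UniqueKeyPolicy
{
    UniqueKeys = new Collection<UniqueKey>
    {
        // Each UniqueKey is a set of paths that together must be unique
        // (scoped per partition for partitioned collections)
        new UniqueKey { Paths = new Collection<string> { "/email" } }
    }
};

await client.CreateDocumentCollectionAsync(
    UriFactory.CreateDatabaseUri("mydb"), collection);
```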
It appears there is no way to do cross partition unique keys and unique keys can only be defined during collection creation. From the documentation:
Unique keys must be defined when the container is created, and the
unique key is scoped to the partition key. To build on the earlier
example, if you partition based on zip code, you could have the
records from the table duplicated in each partition.
Existing containers cannot be updated to use unique keys.
Cosmos DB's DocumentDB API does not currently offer unique indexes for a property. This feature is listed as "started", on UserVoice. Perhaps there is some preparation work taking place to support unique indexes, but currently there is no official documentation stating they exist, or how to use them if they do exist.
EDIT: It appears this feature has recently rolled out, as evidenced by the update to the .NET SDK release notes. No official documentation had been published at the time; documentation is now published here.
I am working on an existing system that uses NCache. It is a distributed system with large caching requirements, so there is no question that caching is the correct answer, but...
For some reason, in the existing code, all cache keys are hashed before storing in the cache.
My argument is that we should NOT hash the key: the caching library may have some highly optimized way of storing its dictionary, and hashing everything ourselves may actually slow down lookups.
The developer who originally wrote the code has left, and the knowledge of why the keys are hashed has been lost.
Can anyone suggest whether hashing is the correct thing to do, or whether it should be removed?
Okay, so your questions are:
Should we hash the keys before storing?
If you do the hashing yourself, will it slow anything down?
Well, the cache API works on strings as keys. In the background NCache automatically generates hashes against these keys which help it to identify where the object should be stored. And by where I mean in which node.
When you say that your application hashes keys before handing them over to NCache, that is simply an unnecessary step; the NCache API was meant to take this headache from you.
BUT if those hashes are generated because of some internal logic within your application, then that's another case. Please check carefully.
Needless to say, if you do the same work twice there will be some performance cost: the hash strings that you provide will themselves be hashed again into another hash value (an int).
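To make the redundancy concrete, here is a sketch (the SHA-256 helper and the cache/customer variables are illustrative, not NCache specifics; NCache will hash whatever string it receives again internally to pick a node):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static string Sha256Hex(string s)
{
    // App-side hashing, as the existing code presumably does (.NET 5+)
    using var sha = SHA256.Create();
    return Convert.ToHexString(sha.ComputeHash(Encoding.UTF8.GetBytes(s)));
}

// Redundant: the app hashes the key, then NCache hashes the result again,
// and the stored keys become opaque when inspecting the cache
cache.Insert(Sha256Hex("customer:42"), customer);

// Sufficient: pass the natural key and let NCache handle distribution hashing
cache.Insert("customer:42", customer);
```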
Whether you should or shouldn't hash keys depends on your system requirements.
NCache identifies an object by its key, and considers objects with equal keys to be equal. Below is a definition of a hash function from Wikipedia:
A hash function is any function that can be used to map data of
arbitrary size to data of fixed size.
If you stop hashing keys, the cache may behave differently. For example, some objects that NCache considered equal may now be considered distinct, and instead of one cache entry you will get two.
NCache doesn't require you to hash keys. NCache key is just a string that is unique for each object. Relevant excerpt from NCache 4.6 Programmer’s Guide:
NCache uses a “key” and “value” structure for objects. Every object
must have a unique string key associated with it. Every key has an
atomic occurrence in the cache whether it is local or clustered.
Cached keys are case sensitive in nature, and if you try to add
another key with same value, an OperationFailedException is thrown by
the cache.
I need help with this asap (I was just asked to come up with a solution to this for tomorrow morning). The lead developer asked me to come up with a possible already existing solution to the following problem:
We have this C# search result page used publicly by third party websites, but we will be adding access restriction through hash keys passed onto the query string.
To start, the third party website developer will visit a new page we will create to auto generate the hash key based on the state they are in. They will then add the key to their links. When we have a request from one of their links we will try to match the key with what we have in the database in order to allow access to results - we would also like to check extra information like domain/IP address to prevent spoofing and other kinds of attacks.
Another consideration: can we make this key system dynamic in such a way that it changes over time, but third-party developers wouldn't need to come back to us to update what they have - an intermediate key?
The question is what is the best solution for this case scenario? Is there already something similar out there? Is using hash keys in the query string the right/best approach?
Generally in this case, you have an identifier and a shared secret.
The identifier is passed in the query string so you can identify the user. The shared secret is used in a hashing algorithm to provide a checksum of the request. This is commonly sent in the query string (easy on the third party) or as an HTTP header.
Amazon uses this type of digital signing in AWS with HMAC-SHA256. See the MSDN documentation on System.Security.Cryptography.HMACSHA256 and the AWS documentation on Authenticating Requests. A plain MD5 or SHA digest would mechanically work in your case as well, but HMAC is preferred because it is specifically designed for keyed authentication.
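A sketch of signing a request with the framework's HMACSHA256 (the parameter names and the message layout are just one possible convention, not a fixed scheme):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// sharedSecret is distributed to the third party out-of-band
static string Sign(string message, string sharedSecret)
{
    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret));
    var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
    return Convert.ToHexString(hash); // hex signature for the query string
}

// e.g. sign the identifier plus a timestamp to limit replay attacks
var signature = Sign("clientId=abc123&ts=20240101T000000Z", "my-shared-secret");
// request: /search?clientId=abc123&ts=...&sig={signature}
// server side: recompute the signature from the same fields and compare
```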
You can maintain a per-user shared key assuming you have a way of distributing it to your client.
I'm new to C# and ASP.NET and I have to do a project now. It deals with confidential data of a firm's employees, so it needs to be encrypted. I am not sure I could manage with my own encryption algorithm. If I use an existing algorithm, they said I should find a foolproof way to store the key.
To be honest, I don't really understand the term "key" in encryption. I would like someone to brief about it and help me with how I should move forward with this project.
http://en.wikipedia.org/wiki/Key_%28cryptography%29
dunno, but maybe start there?
IMHO:
as already advised, don't cobble up your "own"; use existing algorithms in the framework that have been tested extensively. Whatever weaknesses they may have, they will likely still be better than anything you can cobble up on your own.
understand what needs to be encrypted (which pretty much means it will need to be decrypted at some point) vs. data that needs to be hashed (one-way, e.g. passwords).
decide if you want this to happen on the application side or perhaps, if resources are available to you like SQL server (to store data), on the database side (discuss this with your DBA). You can do both encryption and hashing in SQL server alone.
on the application side, you can think about storing keys in your web.config and subsequently encrypting that section - just like the option to do so for your db connection strings (encrypting the connection section of web.config). This way even your keys aren't in plain text.
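As a sketch of "use the framework" on the application side, here is AES from System.Security.Cryptography (key handling is deliberately simplified here; in practice the key would come from an encrypted web.config section as suggested above, never from source code):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

using (var aes = Aes.Create())
{
    // In a real app, load the key from protected configuration;
    // the IV should be random per message and stored with the ciphertext
    aes.GenerateKey();
    aes.GenerateIV();

    byte[] plain = Encoding.UTF8.GetBytes("confidential employee data");

    using var encryptor = aes.CreateEncryptor();
    byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
    Console.WriteLine(Convert.ToBase64String(cipher));

    // Decryption reverses the process with the same key and IV
    using var decryptor = aes.CreateDecryptor();
    byte[] roundTrip = decryptor.TransformFinalBlock(cipher, 0, cipher.Length);
    Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
}
```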
The first rule of cryptography: never use your own algorithm, unless you are a Ph.D. and several other Ph.D.s are helping you - and even then, use it only after public auditing.
What they mean about storing the key is that it shouldn't be exposed anywhere: if an attacker can get the key, they can decrypt all data in the database (without the key, there is no known practical way to do so). You can store the key in a file outside the website's root folder. That way, either the server itself must be compromised, your app must be compromised (e.g. by making it display the "../../key.txt" file, thus descending below the webroot), or your app must be tricked into encrypting/decrypting the data transparently for the attacker (e.g. via a bug that allows authentication bypass, letting them use your app to talk to the database).
For the last part of the question, see @Haxx's answer :)