We have a requirement that the value of a particular field in documents of a collection needs to be unique. This collection is partitioned, so I cannot write a trigger or stored procedure to ensure this before each insert (I want to ensure uniqueness across the whole collection, not just within a partition).
I've come across the following links, which mention a uniqueKeyPolicy that can be added to a collection, but I couldn't find any examples or further details in the official documentation.
What's the best way to do this?
Information I found:
https://github.com/Azure/azure-documentdb-dotnet/blob/master/changelog.md - Adds the ability to specify unique indexes for the documents by using UniqueKeyPolicy property on the DocumentCollection.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.uniquekeypolicy.uniquekeys?view=azure-dotnet
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.uniquekey?view=azure-dotnet
Update: After some experimenting, I was able to do this using the DocumentClient from my .NET code while creating a new (non-partitioned) collection. But is there a way to add this constraint to an existing collection? Also, I couldn't find a way of doing this from the portal. Is there any particular place I should be looking?
It appears there is no way to enforce cross-partition unique keys, and unique keys can only be defined during collection creation. From the documentation:
Unique keys must be defined when the container is created, and the
unique key is scoped to the partition key. To build on the earlier
example, if you partition based on zip code, you could have the
records from the table duplicated in each partition.
Existing containers cannot be updated to use unique keys.
Cosmos DB's DocumentDB API does not currently offer unique indexes for a property. This feature is listed as "started" on UserVoice. Perhaps some preparation work is taking place to support unique indexes, but currently there is no official documentation stating that they exist, or how to use them if they do.
EDIT: It appears this feature has recently rolled out, as evidenced by the update to the .NET SDK release notes. No official documentation was published at the time, but documentation is now available here.
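For reference, here is a minimal sketch of declaring a unique key at collection-creation time with the .NET SDK (Microsoft.Azure.Documents). The database name, collection name, and the /email path are placeholder assumptions, not values from the question:

```csharp
using System.Collections.ObjectModel;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class UniqueKeyDemo
{
    static async Task CreateCollectionWithUniqueKeyAsync(DocumentClient client)
    {
        var collection = new DocumentCollection
        {
            Id = "myCollection",
            UniqueKeyPolicy = new UniqueKeyPolicy
            {
                UniqueKeys = new Collection<UniqueKey>
                {
                    // Note: uniqueness is enforced per logical partition,
                    // not across the whole collection.
                    new UniqueKey { Paths = new Collection<string> { "/email" } }
                }
            }
        };

        await client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri("myDatabase"),
            collection);
    }
}
```

Attempting the same against an existing collection fails, consistent with the "must be defined when the container is created" restriction quoted above.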
Related
I'm trying to retrieve all keys matching a pattern using StackExchange.Redis.
Code
KEYS *o*
Linked from the project homepage is Where are KEYS, SCAN, FLUSHDB etc?, which gives full details on how to access this and why it isn't on IDatabase. I should point out that you should avoid KEYS on a production server. The library will automatically try to use SCAN instead if it is available, which is less harmful but should still be treated with some caution. It would be preferable to explicitly store related keys in a set or hash.
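As a sketch, the enumeration lives on IServer rather than IDatabase; the connection details below are assumptions for illustration:

```csharp
using System;
using StackExchange.Redis;

class KeyScanDemo
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect("localhost:6379");
        IServer server = muxer.GetServer("localhost", 6379);

        // Keys() uses SCAN when the server supports it, falling back
        // to KEYS otherwise; either way, use sparingly in production.
        foreach (RedisKey key in server.Keys(pattern: "*o*"))
        {
            Console.WriteLine(key);
        }
    }
}
```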
I am writing an app which uses the XRM SDK to get data from MS CRM 2011. For fields of type lookup, I want to get their corresponding definition so I can retrieve all possible ids and values from the related entity.
I cannot seem to find a way to do this. I understand how to get the entity reference for an entity instance value and how to get target entities from metadata, but I cannot find anything to help me get all lookup ids and values for a lookup field. Please help!
Edit:
In the interest of helping others, and after several more days of investigation, I am convinced that what I want to do is not possible server-side using the SDK.
I can get pick list values no problem using a RetrieveAttributeRequest and casting the resulting AttributeMetadata to PicklistAttributeMetadata. You can use the same technique to deal with lookups by casting to LookupAttributeMetadata, but this only gives you one useful property over the base class: Targets. All this does is provide a string array of entity logical names; it gives you no additional detail such as mapped Id/Name properties, or any view details for lookups where queries are applied (such as primary contact lookups, where the contacts listed are for the current organisation).
So, in the end I have had to compromise. I can get the target entity name from Targets and assume the lookup is simple: just pull all records from the entity. The Id column is fixed, so that is fine, and generally you are safe to assume the Name column is available (albeit likely asl_name etc. if custom).
If anyone knows a better way I will gladly, willingly eat humble pie!
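The compromise described above can be sketched as follows; the entity and attribute names ("account", "primarycontactid") are placeholder assumptions, and paging of large result sets is omitted for brevity:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;
using Microsoft.Xrm.Sdk.Query;

class LookupDemo
{
    static void DumpLookupOptions(IOrganizationService service)
    {
        // Ask the metadata service for the lookup attribute's definition.
        var request = new RetrieveAttributeRequest
        {
            EntityLogicalName = "account",
            LogicalName = "primarycontactid",
            RetrieveAsIfPublished = true
        };

        var response = (RetrieveAttributeResponse)service.Execute(request);
        var lookup = (LookupAttributeMetadata)response.AttributeMetadata;

        // Targets is just a string array of target entity logical names.
        foreach (string target in lookup.Targets)
        {
            // Assume a simple lookup: pull every record of the target entity.
            var query = new QueryExpression(target)
            {
                ColumnSet = new ColumnSet(true)
            };
            EntityCollection records = service.RetrieveMultiple(query);
            // records.Entities now holds the candidate ids/values.
        }
    }
}
```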
I believe what you're trying to do is fetch all records of a particular entity after you have retrieved the entity name using the metadata service.
If that is what you are looking to do, I suggest having a look at the following location in the SDK (I am referring to the SDK for CRM 2013): SDK\SampleCode\JS\RESTEndpoint\JQueryRESTDataOperations\JQueryRESTDataOperations\Scripts, for the file JQueryRESTDataOperationsSample.js.
There's a function called RetrieveMultiple which you can use to fetch all records of a particular entity by providing the entity name and any filters (optional). If you need help creating the OData query for the filters, you can download the OData Query Designer to form the query.
I have a table where the primary key is of type Guid. In my MVC application, I want to add a record to this table. I know that Guid.NewGuid() creates a new Guid. This works well, but I have a concern: how does one ensure that the created Guid is unique (does not yet exist in the database)? Is there a way to generate it by comparing already existing values to make sure that the new Guid is unique across the database records?
The entire purpose of the GUID generation technique is that it doesn't need to. The algorithm will generate a globally unique value even though it doesn't have access to all of the other previously generated GUIDs.
In particular, the algorithm is to just generate one big random number. There are so many bits of data in the GUID that the odds of two of them having the same value are infinitesimally small. Small enough that they truly can be ignored.
For a more detailed analysis see Eric Lippert's blog on the subject. (In particular part three.)
Note that, as a consequence of this, using a GUID as a unique identifier will take up quite a bit more space in the database than just using a numeric identifier. Any decent database will have a special column type specifically designed to be a unique identifier that it will populate; such a column will be able to ensure uniqueness while using quite a lot less space than a GUID.
The possibility of generating a duplicate is very low. However, you could enforce a UNIQUE constraint on the database table.
There is very little chance that a new GUID will match one already present in the database.
But if you still want to be sure, create a stored procedure (or similar) that returns true or false for a given GUID. If it returns true, generate again and repeat the process until a unique GUID is achieved.
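That check-then-retry loop might look like the sketch below. To be clear, this is normally unnecessary given how unlikely collisions are, and the table and column names are placeholder assumptions:

```csharp
using System;
using System.Data.SqlClient;

class GuidGenerator
{
    // Generates GUIDs until one is found that is not already in the table.
    static Guid NewUniqueGuid(SqlConnection conn)
    {
        while (true)
        {
            Guid candidate = Guid.NewGuid();
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM MyTable WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", candidate);
                if ((int)cmd.ExecuteScalar() == 0)
                    return candidate; // not present, safe to use
            }
        }
    }
}
```

With a UNIQUE constraint on the column as well, the database guards against the (astronomically unlikely) race where two clients generate the same value.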
This article discusses possible ways CQL 3 could be used for creating composite columns in Cassandra 1.1. They are just ideas. Nothing is official, and the Datastax documentation doesn't cover this (only composite keys).
As I understand it, composite columns are a number of columns that together have only one value.
How do you create them with CQL?
EDIT
I will be using C# to interface into Cassandra. CQL looks straightforward to use, which is why I want to use it.
You've got a couple concepts confused, I think. Quite possibly this is the fault of the Datastax documentation; if you have any good suggestions for making it clearer after you have a better picture, I'll be glad to send them on.
The "composite keys" stuff in the Datastax docs is actually talking about composite Cassandra columns. The reason for the confusion is that rows in CQL 3 do not map directly to storage engine rows (what you work with when you use the thrift interface). "Composite key" in the context of a CQL table just means a primary key which consists of multiple columns, which is implemented by composite columns at the storage layer.
This article is one of the better explanations as to how the mapping happens and why the CQL model is generally easier to think about.
With this sort of use, the first CQL column becomes the storage engine partition key.
As of Cassandra 1.2 (in development), it's also possible to create composite storage engine keys using CQL, by putting extra parentheses in the PRIMARY KEY definition around the CQL columns that will be stored in the partition key (see CASSANDRA-4179), but that's probably going to be the exception, not the rule.
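To illustrate the two cases (the table and column names here are made up, reusing the zip-code example from the quoted documentation):

```cql
-- CQL 3 "composite key": a multi-column PRIMARY KEY. The first column
-- (zip) is the partition key; the remaining columns are implemented as
-- composite storage-engine column names within that partition.
CREATE TABLE records (
    zip text,
    name text,
    value text,
    PRIMARY KEY (zip, name)
);

-- Cassandra 1.2+ (CASSANDRA-4179): the extra parentheses make (zip, name)
-- together a composite storage-engine partition key.
CREATE TABLE records2 (
    zip text,
    name text,
    value text,
    PRIMARY KEY ((zip, name), value)
);
```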
With Cassandra, you store data in rows. Each row has a row key and some number of columns. Each column has a name and a value. Usually the column name and value (and row key, for that matter) are single values (int, long, UTF8, etc), but you can use composite values in row keys, column names and column values. A composite value is just some number of values that have been serialized together in some way.
Over time, a number of language-specific APIs have been developed. These APIs start with the understanding I describe above and provide access to a column family accordingly. Hector, the Java client API, is the one I'm most familiar with, but there are others.
CQL was introduced as a means to use Cassandra tables in an SQL/JDBC fashion. Not all Cassandra capabilities were supported through CQL at first, although CQL is getting more and more functional as time goes on.
I don't doubt your need for composite column names and values (I believe that's what you're asking for). The problem is that CQL has yet to evolve (as I understand it) to that level of native support. Whether or not it ever will, I don't know.
I suggest that you complete the definition of your desired column family schemas, complete with composite values if necessary. Once you've done that, look at the various API's available to access Cassandra column families and choose the one that best supports your desired schema.
You haven't said what language you're using. If you were coding in java then I'd recommend Hector and not CQL.
Are you sure you want to create them with CQL? What is your use case?
Is there some additional logic performed by the Associate() operation?
I want to programmatically copy a lot of data from one Dynamics CRM instance to another one. And I suppose it would be simpler to make plain copies of rows (starting from the root objects in order to avoid breaking constraints).
And furthermore, is it possible to clone systemuser and business units instances (rows), too?
Thank you in advance!
PS: by cloning a row (using OrganizationServiceProxy), I mean:
fetch all attributes of a row (from Dynamics CRM 1)
e = new Entity(); set all attributes (including the Id); then service.Create(e) (on Dynamics CRM 2)
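The two steps above can be sketched as follows. The list of server-managed attributes to skip is an assumption and is almost certainly incomplete:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

class CloneDemo
{
    // Copies one row from a source org to a target org, preserving its Id.
    static Guid CloneRow(IOrganizationService source,
                         IOrganizationService target,
                         string logicalName, Guid id)
    {
        // Step 1: fetch all attributes of the row from CRM 1.
        Entity original = source.Retrieve(logicalName, id, new ColumnSet(true));

        // Step 2: copy attributes onto a fresh entity and create it on CRM 2.
        var copy = new Entity(logicalName) { Id = original.Id };
        foreach (var attr in original.Attributes)
        {
            // Skip attributes the server sets itself; Create would reject
            // or silently ignore them (list is illustrative, not complete).
            if (attr.Key == "createdon" || attr.Key == "modifiedon" ||
                attr.Key == "createdby" || attr.Key == "modifiedby")
                continue;
            copy[attr.Key] = attr.Value;
        }

        return target.Create(copy);
    }
}
```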
Did you consider doing a backup and restore of your CRM database to another server? It might help. In any case, you can add new records to any tables inside the CRM database, but it is at your own risk: using SQL to modify data is on Microsoft's list of unsupported techniques, especially if you are talking about system users and business units.
Also, you can write a simple application which will insert data using the CRM SDK.
Associate can be used to clean up at the end, but you will first want to lay out the order in which entities are copied.
So, for example, you will want to copy Accounts before Contacts. But then, on the Account you may have a primary contact that you will need to go back and Associate. This is no different from going back and updating the account record with the lookup value (after the contacts are inserted).
I would also suggest looking at programmatically exporting the base unmanaged solution and then importing it if need be.