I am using the DynamoDB .NET SDK here. How can I add multiple scan conditions so that the data is filtered based on those conditions?
I am using the below code:
var creds = new BasicAWSCredentials(awsId, awsPassword);
var dynamoClient = new AmazonDynamoDBClient(creds, awsDynamoDbRegion);
var context = new DynamoDBContext(dynamoClient);
List<ScanCondition> conditions = new List<ScanCondition>();
// conditions.Add(new ScanCondition("Id", ScanOperator.Equal, myId));
conditions.Add(new ScanCondition("name", ScanOperator.Equal, myName));
var response = await context.ScanAsync<Data>(conditions).GetRemainingAsync();
return response;
In my code above, if I add two scan conditions it does not work, but it does work with a single condition. I am not sure what I am doing wrong here.
Your code looks OK, with one caveat: scan conditions are for non-key attributes.
I'm going to go out on a limb and assume that Id is the partition key (or perhaps the sort key) of your table. If that's true, that's why you can't use it in a scan condition. You can add multiple scan conditions, but they must all be for non-key attributes.
In order to specify key conditions, you must use a Query operation, not a Scan.
Assuming your table only has a partition key and no sort key, the example below should work. If the table does have a sort key, your query must include a condition on it as well, so the example would need to be modified slightly (see the sketch after the code).
var creds = new BasicAWSCredentials(awsId, awsPassword);
var dynamoClient = new AmazonDynamoDBClient(creds, awsDynamoDbRegion);
var context = new DynamoDBContext(dynamoClient);
var opConfig = new DynamoDBOperationConfig();
opConfig.QueryFilter = new List<ScanCondition>();
opConfig.QueryFilter.Add(new ScanCondition("name", ScanOperator.Equal, myName));
var response = await context.QueryAsync<Data>(myId, opConfig).GetRemainingAsync();
return response;
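If the table does have a sort key, a minimal sketch of the modified query might look like the following. This is an assumption on my part: it supposes a hypothetical numeric sort key (someTimestamp is a placeholder value) and uses the QueryAsync overload that takes a sort key condition, while the non-key filter still goes through QueryFilter:
// Hypothetical sketch: same setup as above, plus a sort key condition.
var opConfig = new DynamoDBOperationConfig();
opConfig.QueryFilter = new List<ScanCondition>();
opConfig.QueryFilter.Add(new ScanCondition("name", ScanOperator.Equal, myName));
// sort key condition: only items whose sort key is greater than someTimestamp
var response = await context.QueryAsync<Data>(myId, QueryOperator.GreaterThan, new object[] { someTimestamp }, opConfig).GetRemainingAsync();
return response;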
I use this code to create a new iteration on VSTS/TFS programmatically:
var tfs = new TfsTeamProjectCollection(uri, tfsCredential);
var service = tfs.GetService<ICommonStructureService>();
var iterationRoot = service.GetNodeFromPath("\\TeamProjectName\\Iteration");
var iteration = service.CreateNode("Sprint 1", iterationRoot.Uri);
Now I want to remove an iteration but there is no corresponding method on ICommonStructureService. Oddly there is a method named GetDeletedNodesXml().
I got it! I'm assuming here that you have some method to retrieve an iteration.
var tfs = new TfsTeamProjectCollection(uri, tfsCredential);
var service = tfs.GetService<ICommonStructureService>();
// TODO var iteration = GetIteration();
var projectInfo = service.GetProjectFromName(projectName);
var nodes = service.ListStructures(projectInfo.Uri);
service.DeleteBranches(iteration.Id, nodes[0].Uri);
The key is to pass in Ids, not paths. TFS expects artifact URLs, which are represented as Ids. The second parameter of DeleteBranches is the artifact URL of the iteration root, which you get by calling ListStructures on the ICommonStructureService and taking the first element (which is kind of nasty IMHO, but I don't know a better way).
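For the GetIteration() placeholder above, one option (my assumption, not something TFS mandates) is to resolve the iteration node by its full path using the same GetNodeFromPath call used when the iteration was created:
// Hypothetical: resolve the iteration node by its full path.
var iterationNode = service.GetNodeFromPath("\\TeamProjectName\\Iteration\\Sprint 1");
// iterationNode.Uri is the artifact URL ("Id") that identifies the node to delete.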
I'm using the RetrieveEntityRequest to get an entity's attributes' metadata:
RetrieveEntityRequest entityRequest = new RetrieveEntityRequest
{
EntityFilters = EntityFilters.Attributes,
LogicalName = joinedEntityName.Value,
};
RetrieveEntityResponse joinedEntityMetadata = (RetrieveEntityResponse)_service.Execute(entityRequest);
Now, consider I need to execute this request for multiple entities. Is it possible to do this in one execution (maybe not with RetrieveEntityRequest), instead of one request for each entity?
You can't do it with RetrieveEntityRequest. However, you can do a RetrieveMetadataChangesRequest to get what you want. It's misleadingly named for your purposes, but if you don't provide a ClientVersionStamp property, it will simply retrieve everything you've specified in the Query property.
Here's a simple example where you'd retrieve the metadata for account and contact, and only retrieve the LogicalName and DisplayName properties:
var customFilterExpression = new[]
{
new MetadataConditionExpression("LogicalName", MetadataConditionOperator.Equals, "account"),
new MetadataConditionExpression("LogicalName", MetadataConditionOperator.Equals, "contact")
};
var customFilter = new MetadataFilterExpression(LogicalOperator.Or);
customFilter.Conditions.AddRange(customFilterExpression);
var entityProperties = new MetadataPropertiesExpression
{
AllProperties = false
};
entityProperties.PropertyNames.AddRange("LogicalName", "DisplayName");
var request = new RetrieveMetadataChangesRequest
{
Query = new EntityQueryExpression
{
Properties = entityProperties,
Criteria = customFilter,
}
};
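To actually run the request (assuming the same _service instance used in the question), you would execute it and read the returned metadata, roughly like this:
var response = (RetrieveMetadataChangesResponse)_service.Execute(request);
foreach (var entityMetadata in response.EntityMetadata)
{
    // Only LogicalName and DisplayName were requested above; other properties will be null.
    Console.WriteLine(entityMetadata.LogicalName);
}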
This method also has the benefit of retrieving only the specific properties you need, which makes the request faster and the payload smaller. It is designed primarily for mobile scenarios, where you want to retrieve only the metadata you need and only what has changed since you last retrieved it, but it works nicely in a lot of other scenarios.
You have to use RetrieveAllEntitiesRequest. Sample below:
RetrieveAllEntitiesRequest retrieveAllEntityRequest = new RetrieveAllEntitiesRequest
{
RetrieveAsIfPublished = true,
EntityFilters = EntityFilters.Attributes
};
RetrieveAllEntitiesResponse retrieveAllEntityResponse = (RetrieveAllEntitiesResponse)serviceProxy.Execute(retrieveAllEntityRequest);
The CRM SDK only offers an all-at-once or a one-by-one approach.
You have to keep your list of entities ready and issue a RetrieveEntityRequest for each item, as in the sketch below.
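A minimal sketch of the one-by-one approach, assuming you already have the list of logical names (the list below is illustrative):
var logicalNames = new[] { "account", "contact" }; // hypothetical list
foreach (var logicalName in logicalNames)
{
    var entityRequest = new RetrieveEntityRequest
    {
        EntityFilters = EntityFilters.Attributes,
        LogicalName = logicalName
    };
    var entityResponse = (RetrieveEntityResponse)_service.Execute(entityRequest);
    // entityResponse.EntityMetadata.Attributes contains the attribute metadata for this entity
}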
I'm trying to build a prediction module implementing a Hidden Markov Model type DBN in Bayes Server 7 with C#. I managed to create the network structure, but I'm not sure if it's correct because the documentation and examples are not very comprehensive, and I also don't fully understand how the prediction is meant to be done in code after training is complete.
Here is how my network creation and training code looks:
var Feature1 = new Variable("Feature1", VariableValueType.Continuous);
var Feature2 = new Variable("Feature2", VariableValueType.Continuous);
var Feature3 = new Variable("Feature3", VariableValueType.Continuous);
var nodeFeatures = new Node("Features", new Variable[] { Feature1, Feature2, Feature3 });
nodeFeatures.TemporalType = TemporalType.Temporal;
var nodeHypothesis = new Node(new Variable("Hypothesis", new string[] { "state1", "state2", "state3" }));
nodeHypothesis.TemporalType = TemporalType.Temporal;
// create network and add nodes
var network = new Network();
network.Nodes.Add(nodeHypothesis);
network.Nodes.Add(nodeFeatures);
// link the Hypothesis node to the Features node within each time slice
network.Links.Add(new Link(nodeHypothesis, nodeFeatures));
// add temporal links of orders 1 through 5. These link the Hypothesis node to itself in later time slices
for (int order = 1; order <= 5; order++)
{
network.Links.Add(new Link(nodeHypothesis, nodeHypothesis, order));
}
var temporalDataReaderCommand = new DataTableDataReaderCommand(evidenceDataTable);
var temporalReaderOptions = new TemporalReaderOptions("CaseId", "Index", TimeValueType.Value);
// here we map variables to database columns
// in this case the variables and database columns have the same name
var temporalVariableReferences = new VariableReference[]
{
new VariableReference(Feature1, ColumnValueType.Value, Feature1.Name),
new VariableReference(Feature2, ColumnValueType.Value, Feature2.Name),
new VariableReference(Feature3, ColumnValueType.Value, Feature3.Name)
};
var evidenceReaderCommand = new EvidenceReaderCommand(
temporalDataReaderCommand,
temporalVariableReferences,
temporalReaderOptions);
// We will use the RelevanceTree algorithm here, as it is optimized for parameter learning
var learning = new ParameterLearning(network, new RelevanceTreeInferenceFactory());
var learningOptions = new ParameterLearningOptions();
// Run the learning algorithm
var result = learning.Learn(evidenceReaderCommand, learningOptions);
And this is my attempt at prediction:
// we will now perform some queries on the network
var inference = new RelevanceTreeInference(network);
var queryOptions = new RelevanceTreeQueryOptions();
var queryOutput = new RelevanceTreeQueryOutput();
int time = 0;
// query a probability variable
var queryHypothesis = new Table(nodeHypothesis, time);
inference.QueryDistributions.Add(queryHypothesis);
double[] inputRow = GetInput();
// set some temporal evidence
inference.Evidence.Set(Feature1, inputRow[0], time);
inference.Evidence.Set(Feature2, inputRow[1], time);
inference.Evidence.Set(Feature3, inputRow[2], time);
inference.Query(queryOptions, queryOutput);
int hypothesizedClassId;
var probability = queryHypothesis.GetMaxValue(out hypothesizedClassId);
Console.WriteLine("hypothesizedClassId = {0}, score = {1}", hypothesizedClassId, probability);
Here I'm not even sure how to "unroll" the network properly to get the prediction, or what value to assign to the variable "time". If someone can shed some light on how this toolkit works, I would greatly appreciate it. Thanks.
The code looks fine except for the network structure, which should look something like this for an HMM (the only change to your code is the links):
var Feature1 = new Variable("Feature1", VariableValueType.Continuous);
var Feature2 = new Variable("Feature2", VariableValueType.Continuous);
var Feature3 = new Variable("Feature3", VariableValueType.Continuous);
var nodeFeatures = new Node("Features", new Variable[] { Feature1, Feature2, Feature3 });
nodeFeatures.TemporalType = TemporalType.Temporal;
var nodeHypothesis = new Node(new Variable("Hypothesis", new string[] { "state1", "state2", "state3" }));
nodeHypothesis.TemporalType = TemporalType.Temporal;
// create network and add nodes
var network = new Network();
network.Nodes.Add(nodeHypothesis);
network.Nodes.Add(nodeFeatures);
// link the Hypothesis node to the Features node within each time slice
network.Links.Add(new Link(nodeHypothesis, nodeFeatures));
// An HMM also has an order 1 link on the latent node
network.Links.Add(new Link(nodeHypothesis, nodeHypothesis, 1));
It is also worth noting the following:
You can add multiple distributions to 'inference.QueryDistributions' and query them all at once
While it is perfectly valid to set evidence manually and then query, see EvidenceReader, DataReader and either DatabaseDataReader or DataTableDataReader if you want to execute the query over multiple records.
Check out the TimeSeriesMode on ParameterLearningOptions
If you want the 'most probable explanation', set queryOptions.Propagation = PropagationMethod.Max; (an extension of the Viterbi algorithm for HMMs)
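As a rough sketch combining these points (it reuses only the calls already shown in the question, so treat it as illustrative rather than canonical):
var queryOptions = new RelevanceTreeQueryOptions();
var queryOutput = new RelevanceTreeQueryOutput();
// request the most probable explanation (extension of the Viterbi algorithm for HMMs)
queryOptions.Propagation = PropagationMethod.Max;
// query the hypothesis at more than one time slice in a single pass
var queryAtT0 = new Table(nodeHypothesis, 0);
var queryAtT1 = new Table(nodeHypothesis, 1);
inference.QueryDistributions.Add(queryAtT0);
inference.QueryDistributions.Add(queryAtT1);
inference.Query(queryOptions, queryOutput);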
Check out the following link:
https://www.bayesserver.com/docs/modeling/time-series-model-types
A Hidden Markov model (as a Bayesian network) has a discrete latent variable and a number of child nodes. In Bayes Server you can combine multiple variables in a child node, much like a standard HMM. In Bayes Server you can also mix and match discrete/continuous nodes, handle missing data, and add additional structure (e.g. a mixture of HMMs, and many other exotic models).
Regarding prediction, once you have built the structure from the link above, there is a DBN prediction example at https://www.bayesserver.com/code/
(Note that you can predict an individual variable in the future (even if you have missing data), you can predict multiple variables (joint probability) in the future, you can predict how anomalous the time series is (log-likelihood) and for discrete (sequence) predictions you can predict the most probable sequence.)
If it is not clear, ping Bayes Server Support and they will add an example for you.
I can query a single document from the Azure DocumentDB like this:
var response = await client.ReadDocumentAsync( documentUri );
If the document does not exist, this will throw a DocumentClientException. In my program I have a situation where the document may or may not exist. Is there any way to query for the document without using try-catch and without doing two round trips to the server, first to query for the document and second to retrieve the document should it exist?
Sadly there is no other way: either you handle the exception or you make two calls. If you pick the second path, here is a performance-minded way of checking for document existence:
public bool ExistsDocument(string id)
{
var client = new DocumentClient(DatabaseUri, DatabaseKey);
var collectionUri = UriFactory.CreateDocumentCollectionUri("dbName", "collectioName");
var query = client.CreateDocumentQuery<Microsoft.Azure.Documents.Document>(collectionUri, new FeedOptions() { MaxItemCount = 1 });
return query.Where(x => x.Id == id).Select(x=>x.Id).AsEnumerable().Any(); //using Linq
}
The client should be shared among all your DB-accessing methods, but I created it here to keep the example self-contained.
The new FeedOptions() { MaxItemCount = 1 } makes sure the query is optimized for one result (we don't really need more).
The Select(x => x.Id) makes sure no other data is returned; if you don't specify it and the document exists, the query will return all of its info.
You're specifically querying for a given document, and ReadDocumentAsync will throw that DocumentClientException when it can't find the specific document (returning a 404 in the status code). This is documented here. By catching the exception (and seeing that it's a 404), you wouldn't need two round trips.
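If you do stay with ReadDocumentAsync, a minimal sketch of the single-round-trip approach (treating a 404 as "not found" and rethrowing anything else) could look like this:
try
{
    var response = await client.ReadDocumentAsync(documentUri);
    return response.Resource; // the Document, when it exists
}
catch (DocumentClientException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
{
    return null; // the document does not exist
}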
To get around dealing with this exception, you'd need to make a query instead of a discrete read, by using CreateDocumentQuery(). Then, you'll simply get a result set you can enumerate through (even if that result set is empty). For example:
var collLink = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);
var querySpec = new SqlQuerySpec { QueryText = "<query text>" };
var itr = client.CreateDocumentQuery(collLink, querySpec).AsDocumentQuery();
var response = await itr.ExecuteNextAsync<Document>();
foreach (var doc in response.AsEnumerable())
{
// ...
}
With this approach, you'll simply get an empty result set instead of an exception. In your specific case, where you'll be adding a WHERE clause to query a specific document by its id, you'll get either zero results or one result.
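For example, the query for a single document by id might look roughly like this (documentId is a placeholder for the id you are looking for):
var querySpec = new SqlQuerySpec(
    "SELECT * FROM c WHERE c.id = @id",
    new SqlParameterCollection { new SqlParameter("@id", documentId) });
var itr = client.CreateDocumentQuery(collLink, querySpec).AsDocumentQuery();
var page = await itr.ExecuteNextAsync<Document>();
var doc = page.AsEnumerable().FirstOrDefault(); // null when the document does not exist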
With the Cosmos DB SDK v3 it is possible. You can check whether an item exists in a container, and get it, by using Container.ReadItemStreamAsync(string id, PartitionKey key) and checking response.StatusCode:
using var response = await container.ReadItemStreamAsync(id, new PartitionKey(key));
if (response.StatusCode == HttpStatusCode.NotFound)
{
return null;
}
if (!response.IsSuccessStatusCode)
{
throw new Exception(response.ErrorMessage);
}
using var streamReader = new StreamReader(response.Content);
var content = await streamReader.ReadToEndAsync();
var item = JsonConvert.DeserializeObject(content, stateType); // stateType is the System.Type of your item class
This approach has a drawback, however. You need to deserialize the item by hand.
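If you need this in several places, one way to keep the manual deserialization in a single spot is a small generic helper around the same stream approach; this is just a sketch, with illustrative names:
public static async Task<T> ReadItemOrDefaultAsync<T>(Container container, string id, string partitionKey)
    where T : class
{
    using var response = await container.ReadItemStreamAsync(id, new PartitionKey(partitionKey));
    if (response.StatusCode == HttpStatusCode.NotFound)
    {
        return null; // item does not exist
    }
    if (!response.IsSuccessStatusCode)
    {
        throw new Exception(response.ErrorMessage);
    }
    using var streamReader = new StreamReader(response.Content);
    var content = await streamReader.ReadToEndAsync();
    return JsonConvert.DeserializeObject<T>(content);
}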
var filter = new BsonDocument("filename", filename);
var list = await col.Find(filter).ToListAsync();
Here is my code; I can't figure out the proper syntax to perform the task I want to do.
Not sure what you mean. I guess you want the returned documents to contain no _id field. To do that, you need a projection.
Try
// With the 2.x driver, build a projection that excludes _id and apply it with Project:
var projection = Builders<YourClass>.Projection.Exclude(p => p.Id); // or: new BsonDocument("_id", false)
var list = await col.Find(filter).Project(projection).ToListAsync();