How to assign an ORC field to an OBR field in nHapi - C#

I am using nHapi v2.4 to build an HL7 message.
I would like to copy the Ordering Provider from the ORC segment to the Ordering Provider field in the OBR segment.
I am trying to do it in the following way, but it doesn't work:
// I have created the objects in the following way.
ORM_O01 _ormMessage = new ORM_O01();
ORM_O01_ORDER order = _ormMessage.AddORDER();
var obrSegment = order.ORDER_DETAIL.OBR;
var orcSegment = order.ORC;
// Here, set the Ordering Provider field in the ORC segment.
// Now, set the Ordering Provider in OBR.
foreach (var orcOrderingProvider in orcSegment.GetOrderingProvider())
{
    var obrOrderingProvider = obrSegment.GetOrderingProvider(obrSegment.OrderingProviderRepetitionsUsed);
    obrOrderingProvider = orcOrderingProvider;
}
Is there any simple way to copy the whole field from one field to another?
Thank you.
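A possible approach, as an untested sketch: nHapi is a port of the Java HAPI library, which provides a DeepCopy utility for copying one field and all of its components into another. Assuming nHapi exposes it as NHapi.Base.Util.DeepCopy with a Copy(IType, IType) method, the repetitions could be copied like this:
// Untested sketch: DeepCopy and its Copy(IType, IType) signature are
// assumptions based on the Java HAPI utility that nHapi was ported from.
using NHapi.Base.Util;

for (int i = 0; i < orcSegment.OrderingProviderRepetitionsUsed; i++)
{
    var source = orcSegment.GetOrderingProvider(i);
    // GetOrderingProvider(i) creates the OBR repetition if it doesn't exist yet.
    var target = obrSegment.GetOrderingProvider(i);
    DeepCopy.Copy(source, target); // copies every component of the XCN field
}
The plain assignment in the question only rebinds the local variable; it never writes anything into the OBR segment, which is why copying component data explicitly is needed.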

Related

Is there a way to use VarVector to represent raw data in ML.NET K-means clustering

I would like to use ML.NET K-means clustering on some 'raw' vectors which I've generated in-memory by processing another dataset. I would like to be able to select the length of the vectors at runtime. All vectors within a given model will have the same length, but that length may vary from model to model as I try out different clustering approaches.
I use the following code:
public class MyVector
{
    [VectorType]
    public float[] Values;
}
void Train()
{
    var vectorSize = GetVectorSizeFromUser();
    var vectors = /* ... process dataset to create an array of MyVectors, each with 'vectorSize' values ... */;
    var mlContext = new MLContext();
    string featuresColumnName = "Features";
    var pipeline = mlContext
        .Transforms
        .Concatenate(featuresColumnName, nameof(MyVector.Values))
        .Append(mlContext.Clustering.Trainers.KMeans(featuresColumnName, numberOfClusters: 3));
    var trainingData = mlContext.Data.LoadFromEnumerable(vectors);
    Console.WriteLine("Training...");
    var model = pipeline.Fit(trainingData);
}
The problem is that when I try to run the training, I get this exception:
Schema mismatch for feature column 'Features': expected Vector<Single>, got VarVector<Single> (Parameter 'inputSchema')
I can avoid this for any given value of vectorSize (say 20) by using [VectorType(20)], but the key point here is that I don't want to rely on a specific compile-time value. Is there a recipe that allows dynamically sized data to be used for this kind of training?
I can imagine various nasty workarounds involving dynamically constructing dataviews with dummy columns but was hoping there would be a simpler approach.
Thanks to Jon for finding the link to this issue, which contains the required information. The trick is to override the SchemaDefinition at run time:
public class MyVector
{
    // It isn't required to specify the type here, since we override it in our custom schema definition.
    public float[] Values;
}
void Train()
{
    var vectorSize = GetVectorSizeFromUser();
    var vectors = /* ... process dataset to create an array of MyVectors, each with 'vectorSize' values ... */;
    var mlContext = new MLContext();
    string featuresColumnName = "Features";
    var pipeline = mlContext
        .Transforms
        .Concatenate(featuresColumnName, nameof(MyVector.Values))
        .Append(mlContext.Clustering.Trainers.KMeans(featuresColumnName, numberOfClusters: 3));
    // Create a custom schema definition that overrides the type for the Values field...
    var schemaDef = SchemaDefinition.Create(typeof(MyVector));
    schemaDef[nameof(MyVector.Values)].ColumnType
        = new VectorDataViewType(NumberDataViewType.Single, vectorSize);
    // Use that schema definition when creating the training dataview.
    var trainingData = mlContext.Data.LoadFromEnumerable(vectors, schemaDef);
    Console.WriteLine("Training...");
    var model = pipeline.Fit(trainingData);
    // Note that the schema definition must also be supplied when creating the prediction engine...
    var predictor = mlContext
        .Model
        .CreatePredictionEngine<MyVector, ClusterPrediction>(model,
            inputSchemaDefinition: schemaDef);
    // Now we can use the engine to predict which cluster a vector belongs to...
    var prediction = predictor.Predict(/* ...some MyVector... */);
}
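For reference, the ClusterPrediction class used by the prediction engine above isn't shown in the post; a minimal definition using ML.NET's standard KMeans output column names would be:
// Minimal output type for the prediction engine above. "PredictedLabel" and
// "Score" are the standard output column names of ML.NET's KMeans trainer.
public class ClusterPrediction
{
    [ColumnName("PredictedLabel")]
    public uint PredictedClusterId;
    [ColumnName("Score")]
    public float[] Distances;
}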

Removing iteration in VSTS/TFS using Microsoft.TeamFoundation.Client

I use this code to create a new iteration in VSTS/TFS programmatically:
var tfs = new TfsTeamProjectCollection(uri, tfsCredential);
var service = tfs.GetService<ICommonStructureService>();
var iterationRoot = service.GetNodeFromPath("\\TeamProjectName\\Iteration");
var iteration = service.CreateNode("Sprint 1", iterationRoot.Uri);
Now I want to remove an iteration, but there is no corresponding method on ICommonStructureService. Oddly, there is a method named GetDeletedNodesXml().
I got it! I'm assuming here that you have some method to retrieve an iteration.
var tfs = new TfsTeamProjectCollection(uri, tfsCredential);
var service = tfs.GetService<ICommonStructureService>();
// TODO var iteration = GetIteration();
var projectInfo = service.GetProjectFromName(projectName);
var nodes = service.ListStructures(projectInfo.Uri);
service.DeleteBranches(iteration.Id, nodes[0].Uri);
The key is to pass in Ids and not paths. TFS wants artifact URLs, which are represented as Ids. The second parameter of DeleteBranches is the artifact URL of the iteration root, which is obtained by calling ListStructures on the ICommonStructureService and taking the first element there (which is kind of nasty IMHO, but I don't know a better way).

How do I push List or Map type to DynamoDB with the .NET SDK?

I'm trying to push some data into a DynamoDB table, and I'm having trouble making the .NET SDK detect that I want a List or Map type rather than the Number/String Set types.
var doc = new Document();
doc["Game ID"] = "SW Proto";
doc["Run ID"] = 666;
doc["Profiler Column"] = stats.Key.ToString();
//doc["Stats Data"] = stats.Value as List<string>;
// Works:
doc["Stats Data"] = new List<string> { "2.45", "2.35", "2.5" };
// Fails:
doc["Stats Data"] = new List<string> { "2.45", "2.45", "2.45" };
It fails because the data is not unique, which the Set type requires.
How does one force the data to serialize as a List or Map?
To store a list (L) instead of a string set (SS), you need to use a different conversion schema. This blog post discusses the different conversion schemas and how they can be used.
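As a sketch of what that looks like in code (untested; the client setup and table name are placeholders), loading the table with the V2 conversion schema makes a List<string> serialize as a DynamoDB List (L), which allows duplicates:
// Sketch: DynamoDBEntryConversion.V2 maps List<string> to the DynamoDB
// List (L) type instead of a String Set (SS), so duplicates are allowed.
var client = new AmazonDynamoDBClient();
var table = Table.LoadTable(client, "StatsTable", DynamoDBEntryConversion.V2); // "StatsTable" is a placeholder

var doc = new Document();
doc["Game ID"] = "SW Proto";
doc["Stats Data"] = new List<string> { "2.45", "2.45", "2.45" }; // now serializes as L
table.PutItem(doc);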

MongoDB: update only specific fields

I am trying to update a document in a (typed) MongoDB collection with the C# driver. When handling data of that particular collection of type MongoCollection<User>, I tend to avoid retrieving sensitive data from the collection (salt, password hash, etc.).
Now I am trying to update a User instance. However, since I never retrieved the sensitive data in the first place, I guess those fields would be default(byte[]) in the retrieved model instance (as far as I can tell) before I apply my modifications and submit the new data to the collection.
Am I overlooking something trivial in the MongoDB C# driver that would let me use MongoCollection<T>.Save(T item) without updating specific properties such as User.PasswordHash or User.PasswordSalt? Should I retrieve the full record first, update the "safe" properties there, and write it back? Or is there a fancy option to exclude certain fields from the update?
Thanks in advance
Save(someValue) is for the case where you want the resulting record to be or become the full object (someValue) you passed in.
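To illustrate the difference, a minimal sketch (assuming a usersCollection of type MongoCollection<User> from the legacy 1.x driver, and that User has an Id property):
// Save replaces the whole stored document with this object; fields left at
// their defaults in memory overwrite whatever was stored before.
usersCollection.Save(user);
// An update command touches only the fields it names and leaves the rest alone.
usersCollection.Update(Query.EQ("_id", user.Id), Update.Set("LastLogin", DateTime.UtcNow));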
You can use the FindAndModify method instead:
var query = Query.EQ("_id","123");
var sortBy = SortBy.Null;
var update = Update.Inc("LoginCount", 1).Set("LastLogin", DateTime.UtcNow); // some update; you can chain a series of update commands here
usersCollection.FindAndModify(query, sortBy, update); // usersCollection is your MongoCollection<User> instance
Using FindAndModify you can specify exactly which fields in an existing record to change and leave the rest alone.
You can see an example here.
The only thing you need from the existing record would be its _id, the 2 secret fields need not be loaded or ever mapped back into your POCO object.
It's possible to add more criteria in the Where statement, like this:
var db = ReferenceTreeDb.Database;
var packageCol = db.GetCollection<Package>("dotnetpackage");
var filter = Builders<Package>.Filter.Where(_ => _.packageName == packageItem.PackageName.ToLower() && _.isLatestVersion);
var update = Builders<Package>.Update.Set(_ => _.isLatestVersion, false);
var options = new FindOneAndUpdateOptions<Package>();
packageCol.FindOneAndUpdate(filter, update, options);
Had the same problem, and since I wanted one generic method for all types and didn't want to create my own implementation using Reflection, I ended up with the following generic solution (simplified to show it all in one method):
async Task Update(string id, T item)
{
    var serializerSettings = new JsonSerializerSettings()
    {
        NullValueHandling = NullValueHandling.Ignore,
        DefaultValueHandling = DefaultValueHandling.Ignore
    };
    var bson = new BsonDocument { { "$set", BsonDocument.Parse(JsonConvert.SerializeObject(item, serializerSettings)) } };
    await database.GetCollection<T>(collectionName).UpdateOneAsync(Builders<T>.Filter.Eq("Id", id), bson);
}
Notes:
Make sure all fields that must not be updated are left at their default values.
If you need to set a field to its default value, you need to either use DefaultValueHandling.Include or write a custom update method for that case.
When performance matters, write custom update methods using Builders<T>.Update (a sketch follows below).
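For that last case, a hand-written targeted update might look like this (a sketch; the User fields Id and Email are illustrative assumptions, and LoginCount is borrowed from the earlier answer):
// Sketch: only the fields named here are written; PasswordHash and
// PasswordSalt are never mentioned, so MongoDB leaves them untouched.
var filter = Builders<User>.Filter.Eq(u => u.Id, id);
var update = Builders<User>.Update
    .Set(u => u.Email, item.Email)
    .Inc(u => u.LoginCount, 1);
await database.GetCollection<User>(collectionName).UpdateOneAsync(filter, update);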
P.S.: This obviously should be provided by the MongoDB .NET driver, but I couldn't find it anywhere in the docs; maybe I just looked in the wrong place.
There are many ways to update a value in MongoDB.
Below is one of the simplest ways I chose to update a field value in a MongoDB collection.
public string UpdateData()
{
    string data = string.Empty;
    string param = "{ $set: { name: 'Developerrr New' } }";
    string filter = "{ 'name' : 'Developerrr ' }";
    try
    {
        // Get the connection values from the web.config file.
        var connectionString = ConfigurationManager.AppSettings["connectionString"];
        var databaseName = ConfigurationManager.AppSettings["database"];
        var tableName = ConfigurationManager.AppSettings["table"];
        // Connect to MongoDB.
        var client = new MongoClient(connectionString);
        var database = client.GetDatabase(databaseName);
        var dataCollection = database.GetCollection<BsonDocument>(tableName);
        // Convert the filter and the update value to BsonDocuments.
        BsonDocument filterDoc = BsonDocument.Parse(filter);
        BsonDocument updateDoc = BsonDocument.Parse(param);
        // Update the value using the UpdateOne method.
        dataCollection.UpdateOne(filterDoc, updateDoc);
        data = "Success";
    }
    catch (Exception err)
    {
        data = "Failed - " + err;
    }
    return data;
}
Hoping this will help you :)

Using Accord.Net's Codification Object to Codify second data set

I am trying to figure out how to use the Accord.NET Framework to make a Bayesian prediction using the machine learning NaiveBayes class. I have followed the example code listed in the documentation and have been able to create the model from the example.
What I can't figure out is how to make a prediction based on that model.
The way the Accord.NET framework works is that it translates a table of strings into a numeric symbolic representation of those strings using a class called Codification. Here is how I create the inputs and outputs from a DataTable to train the model (90% of this code is straight from the example):
var dt = new DataTable("Categorizer");
dt.Columns.Add("Word");
dt.Columns.Add("Category");
foreach (string category in categories)
{
    rep.LoadTrainingDataForCategory(category, dt);
}
var codebook = new Codification(dt);
DataTable symbols = codebook.Apply(dt);
double[][] inputs = symbols.ToArray("Word");
int[] outputs = symbols.ToIntArray("Category").GetColumn(0);
IUnivariateDistribution[] priors = {new GeneralDiscreteDistribution(codebook["Word"].Symbols)};
int inputCount = 1;
int classCount = codebook["Category"].Symbols;
var target = new NaiveBayes<IUnivariateDistribution>(classCount, inputCount, priors);
target.Estimate(inputs, outputs);
And this all works successfully. Now, I have new input that I want to test against the trained data model I just built. So I try to do this:
var testDt = new DataTable("Test Data");
testDt.Columns.Add("Word");
foreach (string token in tokens)
{
    testDt.Rows.Add(token);
}
DataTable testDataSymbols = codebook.Apply(testDt);
double[] testData = testDataSymbols.ToArray("Word").GetColumn(0);
double logLikelihood = 0;
double[] responses;
int cat = target.Compute(testData, out logLikelihood, out responses);
Notice that I am using the same codebook object that I used when I built the model. I want the data to be codified using the same codebook as the original model; otherwise the same word might be encoded with two completely different values (the word "bob" might correspond to the number 23 in the original model and to the number 43 in the new one; no way that would work).
However, I am getting a NullReferenceException error on this line:
DataTable testDataSymbols = codebook.Apply(testDt);
Here is the error:
System.NullReferenceException: Object reference not set to an instance of an object.
at Accord.Statistics.Filters.Codification.ProcessFilter(DataTable data)
at Accord.Statistics.Filters.BaseFilter`1.Apply(DataTable data)
at Agent.Business.BayesianClassifier.Categorize(String[] categories, String testText)
The objects I am passing in are all non-null, so this must be something happening deeper in the code, but I am not sure what.
Thanks for any help. And if anyone knows of an example where a prediction is actually made from the bayesian example for Accord.Net, I would be much obliged if you shared it.
Sorry about the lack of documentation on the final part. In order to obtain the same integer codification for a new word, you could use the Translate method of the codebook:
// Compute the result for a sunny, cool, humid and windy day:
double[] input = codebook.Translate("Sunny", "Cool", "High", "Strong").ToDouble();
int answer = target.Compute(input);
string result = codebook.Translate("PlayTennis", answer); // result should be "no"
That said, it should also have been possible to call codebook.Apply to apply the same transformation to a new dataset. If you feel this is a bug, would you mind filing a bug report in the issue tracker?
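Adapting that to the single-column model in the question might look like this (a sketch, assuming the Translate(column, value) and Translate(column, codeword) overloads shown above):
// Sketch: run each test token through the same codebook, then decode the
// predicted class index back into its category name.
foreach (string token in tokens)
{
    double[] input = { codebook.Translate("Word", token) };
    int answer = target.Compute(input);
    string category = codebook.Translate("Category", answer);
    Console.WriteLine(token + " -> " + category);
}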
