I have a many-to-many relationship between photos and tags: A photo can have multiple tags and several photos can share the same tags.
I have a loop that scans the photos in a directory and then adds them to NHibernate. Some tags are added to the photos during that process, e.g. a 2009 tag when the photo was taken in 2009.
The Tag class implements Equals and GetHashCode and uses the Name property as the only signature property. Both Photo and Tag have surrogate keys and are versioned.
I have some code similar to the following:
public void Import() {
    ...
    foreach (var fileName in fileNames) {
        var photo = new Photo { FileName = fileName };
        AddDefaultTags(_session, photo, fileName);
        _session.Save(photo);
    }
    ...
}
private void AddDefaultTags(…) {
    ...
    var tag = _session.CreateCriteria(typeof(Tag))
        .Add(Restrictions.Eq("Name", year.ToString()))
        .UniqueResult<Tag>();
    if (tag != null) {
        photo.AddTag(tag);
    } else {
        tag = new Tag { Name = year.ToString() };
        _session.Save(tag);
        photo.AddTag(tag);
    }
}
My problem is when the tag does not exist, e.g. for the first photo of a new year. The AddDefaultTags method checks whether the tag exists in the database and, if not, creates it and adds it to NHibernate. That works great when adding a single photo, but when importing multiple photos from the new year within the same unit of work it fails: the tag still does not exist in the database, so it is created again. When the unit of work completes, it fails because NHibernate tries to insert two rows with the same name into the Tags table...
My question is how to make sure that NHibernate only tries to create a single tag in the database in the above situation. Do I need to maintain a list of newly added tags myself or can I set up the mapping in such a way that it works?
You need to call _session.Flush() if your criteria query should not return stale data.
Alternatively, you should be able to get this working by setting _session.FlushMode to Auto.
With FlushMode.Auto, the session is automatically flushed before the criteria query is executed.
EDIT: Important! From the code you've shown, it does not look like you're using a transaction for your unit of work. I would recommend wrapping your unit of work in a transaction - that is required for FlushMode.Auto to work if you're using NHibernate 2.0+!
Read further here: NHibernate ISession Flush: Where and when to use it, and why?
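A minimal sketch of both points together, assuming the Import method from the question:

_session.FlushMode = FlushMode.Auto;
using (var tx = _session.BeginTransaction())
{
    // With FlushMode.Auto and an active transaction, NHibernate flushes the
    // pending tag insert before the next criteria query, so the query sees it.
    Import();
    tx.Commit();
}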
If you want the new tag to be in the database each time you check for it, you need to commit the transaction after you save it.
Another approach would be to read the existing tags into a collection before you process the photos.
Then, as you said, you would search that local collection and add new tags as needed. When you are done with the folder you can commit the session.
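A minimal sketch of that approach, reusing the Tag class and _session from the question (LoadTags and GetOrCreateTag are hypothetical helper names):

private IDictionary<string, Tag> _tagsByName;

private void LoadTags()
{
    // Read all existing tags once, before processing the photos.
    _tagsByName = new Dictionary<string, Tag>();
    foreach (Tag t in _session.CreateCriteria(typeof(Tag)).List<Tag>())
        _tagsByName[t.Name] = t;
}

private Tag GetOrCreateTag(string name)
{
    Tag tag;
    if (!_tagsByName.TryGetValue(name, out tag))
    {
        tag = new Tag { Name = name };
        _session.Save(tag);
        _tagsByName[name] = tag; // newly created tags are reused within this unit of work
    }
    return tag;
}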
You should post your mappings, as I may not have interpreted your question correctly.
This is the typical "lock something that is not there" problem. I have faced it several times already and still do not have a simple solution for it.
These are the options I know of so far:
Optimistic: have a unique constraint on the name and let one of the sessions throw on commit. Then you try again. You have to make sure that you don't end up in an infinite loop when another error occurs.
Pessimistic: when you add a new Tag, you lock the whole Tag table using T-SQL.
.NET locking: you synchronize the threads using .NET locks. This only works if your parallel transactions run in the same process.
Create tags using their own session (see the example below).
Example:
public static Tag CreateTag(string name)
{
    try
    {
        using (ISession session = factory.OpenSession())
        {
            session.BeginTransaction();
            Tag existingTag = session.CreateCriteria(typeof(Tag))
                .Add(Restrictions.Eq("Name", name))
                .UniqueResult<Tag>();
            if (existingTag != null)
            {
                return existingTag;
            }
            Tag newTag = new Tag(name);
            session.Save(newTag);
            session.Transaction.Commit();
            return newTag;
        }
    }
    // catch the unique constraint exception you get
    catch (WhatEverException)
    {
        // try again
        return CreateTag(name);
    }
}
This looks simple, but has some problems. You always get a tag that either already existed or was created (and committed immediately). But the tag you get comes from another session, so it is detached from your main session. You need to attach it to your session, either via cascades (which you probably don't want) or by reattaching it explicitly.
Creating tags is no longer coupled to your main transaction. That was the goal, but it also means that rolling back your transaction leaves all created tags in the database. In other words: creating tags is not part of your transaction anymore.
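A minimal sketch of the reattach step, reusing names from the question (ISession.Lock with LockMode.None reassociates an unmodified detached instance):

// The tag comes back detached from CreateTag's private session.
Tag tag = CreateTag(year.ToString());
_session.Lock(tag, LockMode.None); // attach it to the main session without forcing an update
photo.AddTag(tag);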
I am creating a web API. I need something like this:
When I update a document in MongoDB, I do not want to update one field (createdAt). I know that I could fetch the old value of that field and manually put it into the updated object, but that requires one more unnecessary request to the database, and I do not want that. My method is here:
public async Task<bool> UpdateAsync(Customer updatedCustomer)
{
    var result = await _mongoService.Customers.ReplaceOneAsync(c => c.Id == updatedCustomer.Id, updatedCustomer);
    return result.IsModifiedCountAvailable && result.ModifiedCount > 0;
}
Is there any way to exclude one property of my Customer class (createdAt) and leave it the same every time? By the way, please do not recommend setting all the properties one by one with the "Set" method. Thank you.
I'm not sure if there is a way other than setting the properties one by one, but researching the following may be helpful or suggest something new.
In MongoDB you can use an attribute such as [BsonIgnore], but that will ignore the property every time.
One alternative would be to load the document you wish to modify, followed by calling BsonDocument.Merge with overwriteExistingElements set to true in order to merge your changes.
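A minimal sketch of that approach, assuming the creation date is stored under the element name "createdAt" (ToBsonDocument, BsonDocument.Merge, and BsonSerializer come from the MongoDB.Bson namespaces):

using MongoDB.Bson;
using MongoDB.Bson.Serialization;
using MongoDB.Driver;

public async Task<bool> UpdateAsync(Customer updatedCustomer)
{
    // The extra read the question hoped to avoid, but it keeps createdAt intact.
    var existing = await _mongoService.Customers
        .Find(c => c.Id == updatedCustomer.Id)
        .FirstOrDefaultAsync();
    if (existing == null) return false;

    // Drop createdAt from the incoming values so the stored value survives the merge.
    var updatedDoc = updatedCustomer.ToBsonDocument();
    updatedDoc.Remove("createdAt");
    var merged = existing.ToBsonDocument().Merge(updatedDoc, overwriteExistingElements: true);

    var result = await _mongoService.Customers.ReplaceOneAsync(
        c => c.Id == updatedCustomer.Id,
        BsonSerializer.Deserialize<Customer>(merged));
    return result.IsModifiedCountAvailable && result.ModifiedCount > 0;
}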
I get this error when I try to add a package line:
Error: Another process has added the "SOPackagedetail" record. Your changes will be lost.
My C# code is this:
protected virtual void creationColis()
{
    SOShipment ship = Base.CurrentDocument.Select();
    SOPackageDetailEx colis = new SOPackageDetailEx();
    colis.BoxID = "COLIS";
    colis.PackageType = "M";
    colis.ShipmentNbr = ship.ShipmentNbr;
    SOShipmentEntry graph = PXGraph.CreateInstance<SOShipmentEntry>();
    graph.Packages.Insert(colis); // insert the record
    graph.Packages.Update(colis);
    graph.Actions.PressSave();
    graph.Clear();
}
Do you know what I need to change, please?
Thanks so much
Xavier
Your question needs more context. For starters, where does your code reside? Given that you reference Base.CurrentDocument.Select, I'm going to assume you are extending SOShipmentEntry to add your code.
In this case, you would just use the Base.Packages view rather than initializing your own instance of SOShipmentEntry, which is what your example is trying to do with graph.Packages. Regardless, there are 2 parts here that need to be addressed.
Packages is not the primary view of SOShipmentEntry. When you create an instance of a graph, you must tell the graph what record is needed in the primary view. In your example where you create a new instance of a graph, you might do something like this:
graph.Document.Current = graph.Document.Search<SOShipment.shipmentNbr>(myShipmentNbr);
If you are working on a graph extension of SOShipmentEntry, then you probably don't need to create a new instance of the graph. Just make sure graph.Document.Current isn't null before you add your package record - see bullet 2.
Once you have a shipment selected, you can then insert your package information. However, the way you have done it here effectively tries to add a random package to a null shipment (by the structure of the views) while forcing the record to attach to the right shipment by sheer brute force. The views don't like to work that way.
A better way to add your package once you have a current shipment (Document) is like this:
// Find the current shipment (from the primary view Document)
SOShipment ship = Base.Document.Current;
if (ship?.ShipmentNbr != null)
{
    // Insert a new record into the Packages view of the current shipment and return it into colis
    SOPackageDetailEx colis = Base.Packages.Insert();
    // Set the custom values
    colis.BoxID = "COLIS";
    colis.PackageType = "M";
    // Update the Packages cache with the modified fields
    Base.Packages.Update(colis);
    // If more fields need to be updated after those changes were applied, instead do this...
    colis = Base.Packages.Update(colis);
    colis.FieldA = ValueA;
    colis.FieldB = ValueB;
    Base.Packages.Update(colis);
    // If a save is needed, now is the time
    Base.Save.Press();
}
Notice that I didn't assign ShipmentNbr. That is because the DAC has that field defined to pull the ShipmentNbr from SOShipment through these 2 attributes.
[PXParent(typeof(FK.Shipment))]
[PXDBDefault(typeof(SOShipment.shipmentNbr))]
This means that when the record is created, Acumatica should look up the parent SOShipment record via the key and do a DBDefault on the field to assign it the SOShipment.ShipmentNbr value (from the parent). Important side note: PXDefault and PXDBDefault are NOT interchangeable. We use PXDefault a lot, but off the top of my head I can't think of a use for PXDBDefault outside of defaulting from a database value like this specific usage.
My situation is like this:
A "Code" field from the source tree, needs to be mapped to a "Code" field in the destination tree. The "Code" field in the destination tree has 2 parent nodes. For the destination schema to validate, the same code must not occur more than once in the scope of the 2nd parent node. Here's an image of the hiearchy:
So within the scope of "PurchaseInformation", no same "Code" may occur. A looping functoid loops on "GoodsDescription". I've tried to create an inline C# script to handle it, but it doesn't take the scope into account. See code below:
public System.Collections.Generic.List<string> duplicateList = new System.Collections.Generic.List<string>();

// Returns true the first time a code is seen (so it gets mapped),
// false for every later duplicate.
public bool IsDuplicate(string code)
{
    if (duplicateList.Contains(code))
    {
        return false;
    }
    else
    {
        duplicateList.Add(code);
        return true;
    }
}
My problem is the global List that is created. It does not reset for each loop, and I'm unsure how to implement that. My question is: how can I make sure no duplicate codes are mapped within the scope of the "PurchaseInformation" record in the destination tree?
Without seeing the whole process, it's difficult to say what the best solution might be... but...
Instead of trying to reset the collection (there are reasons this is difficult), you might try a list of lists instead.
Presuming SimplifiedInvoice is an ID or something similar, you can use a Dictionary of Lists to track the unique Code values seen per invoice, as sketched below.
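A minimal sketch of that idea for the inline script, assuming the map passes in the scoping value (e.g. the invoice number from the second parent node) alongside each code; the names are illustrative:

public System.Collections.Generic.Dictionary<string, System.Collections.Generic.List<string>> codesByScope =
    new System.Collections.Generic.Dictionary<string, System.Collections.Generic.List<string>>();

// Returns true the first time a code appears within the given scope.
public bool IsFirstInScope(string scopeId, string code)
{
    System.Collections.Generic.List<string> codes;
    if (!codesByScope.TryGetValue(scopeId, out codes))
    {
        codes = new System.Collections.Generic.List<string>();
        codesByScope.Add(scopeId, codes);
    }
    if (codes.Contains(code))
    {
        return false;
    }
    codes.Add(code);
    return true;
}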
I am trying to write two Tag helpers that are used to localize strings in razor views. The purpose of the parent tag is to gather all of the keys that are requested by the child tags and get them from the database in one batch and cache them.
Then the child tags will use the cached versions, this way I am hoping to lower the load on the database. I am looking for something like this:
<parent-tag>
    <child-tag key="Hello" />
    some HTML here
    <child-tag key="Hi!" />
</parent-tag>
I want to be able to get a list of objects in the Parent tag's Invoke method.
I have also tried storing data in TagHelperContext to communicate with other tag helpers, but this will not work either, because I have to call output.GetChildContentAsync() inside the Parent's Invoke method, which defeats the whole purpose of the caching.
@Encrypt0r You could keep an in-memory static cache that maps a tag helper's identity to its localized keys. This would involve using context.Items in your parent tag helper and executing GetChildContentAsync one time; all subsequent times the keys would be served from the cache (assuming the key values are not dynamic).
Think of it this way:
// Inside your TagHelper
if (_cache.TryGetValue(context.UniqueId, out var localizationKeys))
{
... You already have the keys, do whatever
}
else
{
var myStatefulObject = new SomeStatefulObject();
context.Items[typeof(SomeStatefulObject)] = myStatefulObject;
await output.GetChildContentAsync();
_cache[context.UniqueId] = new LocalizationKeyStuff(myStatefulObject);
}
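On the child side, a minimal sketch under the assumption that SomeStatefulObject exposes a Keys collection (a hypothetical detail):

// Inside the child TagHelper: register the requested key with the parent's
// shared state instead of querying the database directly.
var state = context.Items[typeof(SomeStatefulObject)] as SomeStatefulObject;
if (state != null)
{
    state.Keys.Add(Key); // Key is the child tag's "key" attribute
}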
I'm building a repository with caching using Spring.NET. Can I update/add/delete one item in the cached list without having to rebuild the whole list?
Looking at the documentation and the example project from their site, they always clear the cache whenever they update/add/delete one item. As long as you only read an object or the list of objects the caching works well, but it feels wasteful to rebuild the whole cache just because I changed one item.
Example:
// Cache per item and a list of items
[CacheResult("DefaultCache", "'AllMovies'", TimeToLive = "2m")]
[CacheResultItems("DefaultCache", "'Movie-' + ID")]
public IEnumerable<Movie> FindAll()
{
    return movies.Values;
}

// Update or add an item, invalidating the list of objects
[InvalidateCache("DefaultCache", Keys = "'AllMovies'")]
public void Save([CacheParameter("DefaultCache", "'Movie-' + ID")] Movie movie)
{
    if (this.movies.ContainsKey(movie.ID))
    {
        this.movies[movie.ID] = movie;
    }
    else
    {
        this.movies.Add(movie.ID, movie);
    }
}
Having mutable things stored in the cache seems to me a fountain of horrible side effects, and that is what you would need if you wanted to add/remove entries in a cached list.
The implementation of CacheResultAdvice and InvalidateCacheAdvice allows you to store and invalidate an object (key) -> object (value) combination. You could add another layer and retrieve the movies one by one, but I think that would be a case of premature optimization (with the opposite effect).
CacheResultAdvice
InvalidateCacheAdvice
Edit:
By the way, if you use a mature ORM and want to avoid hitting the DB server, look for integrated level-2 caching: http://www.klopfenstein.net/lorenz.aspx/using-syscache-as-secondary-cache-in-nhibernate