db4o does not delete the record - c#

Good day! I'm trying db4o and have run into this problem: I can't delete records:
using (IObjectServer server = Db4oClientServer.OpenServer(HttpContext.Current.Server.MapPath("~/transfers.data"), 0))
{
    using (IObjectContainer client = server.OpenClient())
    {
        var keyValuePair = (from KeyValuePair<DateTime, Transfer> d in client
                            where d.Key < DateTime.Now.AddHours(-3)
                            select d);
        client.Delete(keyValuePair.First());
        client.Commit();
    }
}
After this code runs, the number of objects (KeyValuePair<DateTime, Transfer>) in the database is unchanged.

This will not work! The reason is that a KeyValuePair is a value type, which means it has no identity. However, db4o manages objects by their identity. C# happily boxes any value type into an object, but that's useless for db4o, since it won't find any object with the given identity in the database.
You've run into an annoying corner case between .NET and db4o behavior. Basically there is no nice workaround for this, especially since db4o doesn't have an API to delete an object by its internal id =(.
For the future: don't store KeyValuePairs (or any struct) by themselves, only as part of another object (and use db4o 8.1, which contains a fix for a bug that prevented structs from ever being deleted). That avoids this issue.
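For illustration, a minimal sketch of such a wrapper (the class and property names here are just an example, not part of the original code):
// Hypothetical wrapper: as a reference type it has an identity,
// so db4o can find and delete the stored object again.
public class TransferEntry
{
    public DateTime Created { get; set; }
    public Transfer Transfer { get; set; }
}

// Query and delete TransferEntry objects instead of raw KeyValuePairs
// (assumes the db4o LINQ extensions are in scope, as in the question).
var expired = (from TransferEntry e in client
               where e.Created < DateTime.Now.AddHours(-3)
               select e).FirstOrDefault();
if (expired != null)
{
    client.Delete(expired);
    client.Commit();
}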


Incomprehensible problem between MongoDB and Oracle (dynamic assembly)

I've got a problem similar to Oracle DataAccess related: "The invoked member is not supported in a dynamic assembly."
I'm getting the same Oracle exception ("the invoked member is not supported in a dynamic assembly").
However, I don't think the Oracle version is the problem, since the program worked fine before and the Oracle version is the same. It's related to the support of a custom Oracle data type (SdoGeometry).
[OracleCustomTypeMappingAttribute("MDSYS.SDO_GEOMETRY")]
public class SdoGeometry : OracleCustomTypeBase<SdoGeometry> { ... }
Note: it worked well until now.
The program reads some data from an Oracle database, computes results, and stores them in a MongoDB database. Since some recent development on the MongoDB part (what's the connection? I'm getting there), I'm getting the exception in some cases.
So, the program works as follows:
1. If there is data in MongoDB, the program checks it
2. The program selects data in Oracle and prepares it
3. It stores data in MongoDB
The operation that (sometimes) fails is in step 2: it adds the column type names to the output data, like this:
private void ReadTypes(Dictionary<string, string> types, DbDataReader reader)
{
    if (types.Count == 0)
    {
        int fieldCount = reader.FieldCount;
        for (int i = 0; i < fieldCount; i++)
        {
            string fieldName = reader.GetName(i);
            try
            {
                string fieldType = reader.GetFieldType(i).Name;
                types[fieldName] = fieldType;
            }
            catch (Exception e)
            {
                // "the invoked member is not supported in a dynamic assembly"
                // only for the SdoGeometry type
            }
        }
    }
}
OracleDataReader.GetFieldType(i) fails for the SdoGeometry type (our custom data type), but only when step 1 has executed (i.e. some MongoDB operations have run). I've identified the responsible operation; it's:
mongoEvents = mongoEvents_.Find(e => e.Identifier.Table.Equals(table)).ToList();
(by moving an if-true-return block from line to line - on a part of the application that I did not show - I identified that it was this operation that produced the error).
This operation extracts from Mongo the data already archived for the current Oracle table. It's a MongoDB API call (IMongoCollection.Find). So:
If I comment out this line and return an empty list (or a list with a manually inserted object), there is no more exception. Well...
But what is strange is this:
//I've replaced the previous statement.
//It's working, no mongo data is returned but this is independent of step 2,
//which in any case retrieves data from Oracle database.
//(MongoDataEvent is one of our classes which defines the structure of archived data)
mongoEvents = new List<MongoEvent.MongoDataEvent>();
Okay, but if instead of that I add this statement after the previous one:
mongoEvents = mongoEvents_.Find(e => e.Identifier.Table.Equals(table)).ToList();
mongoEvents = new List<MongoEvent.MongoDataEvent>();
Okay, it's useless, but when the list is emptied after the Find method has run, the exception appears again (not while calling Find, but later while calling GetFieldType), even though the list is empty.
So... I don't have any idea what's going on. Any ideas? Thanks.
I found a solution!
I simply added, as the first statement of my program, a dummy Oracle query that selects an SdoGeometry-typed column from an arbitrarily chosen table.
I think this forces the SdoGeometry type to load.
So, as I understand it, the bug happened when MongoDB data was queried before any Oracle SdoGeometry data had been loaded; there was some problem loading the type into the assembly (?).
Still, this solution works perfectly! Every time I try it, it works, and as soon as I comment out the query that forces SdoGeometry to load, the error occurs again. Additionally, the MongoDB data looks correct.
I don't understand all the details, but it works!
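For illustration, such a warm-up query might look roughly like this (the connection string, table and column names are placeholders, not taken from the original program):
// Hypothetical warm-up: select a single SDO_GEOMETRY value so that ODP.NET
// loads and registers the custom SdoGeometry mapping before any MongoDB code runs.
using (var connection = new OracleConnection(connectionString))
using (var command = new OracleCommand(
    "SELECT GEOMETRY_COLUMN FROM SOME_TABLE WHERE ROWNUM = 1", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        if (reader.Read())
        {
            var dummy = reader.GetValue(0); // materializes the SdoGeometry instance
        }
    }
}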

Update only the changes that are different

I have an entity set employee_table. I'm getting the data from an Excel sheet, which I have loaded into memory, and the user clicks Save to save the changes to the db. That's all fine the first time, when the records are inserted; no issue with that.
But how can I update only the changes that were made? Meaning: say I have 10 rows and 5 columns, the 7th row was modified, and within it only the 3rd column changed; I need to update only that change and keep the existing values of the other columns.
I could check if (myexistingItem.Name != dbItem.Name) { //update } for every property, but that's very tedious and not efficient, and I'm sure there is a better way to handle this.
Here is what I've got so far.
var excelData = SessionWrapper.GetSession_Model().DataModel.OrderBy(x => x.LocalName).ToList();
var dbData = context.employee_master.OrderBy(x => x.localname).ToList();
var dbEntity = new employee_master();

if (dbData.Count > 0)
{
    //update
    foreach (var dbItem in dbData)
    {
        foreach (var xlData in excelData)
        {
            if (dbItem.customer == xlData.Customer)
            {
                dbEntity.customer = xlData.Customer;
            }
            //...check the rest of the props....
            db.Entry(dbEntity).State = EntityState.Modified;
            db.employee_master.Add(dbEntity);
        }
    }
    //save
    db.SaveChanges();
}
else
{
    //insert
}
You can make this checking more generic with reflection.
Using this answer to get the value by property name.
public static object GetPropValue(object src, string propName)
{
    return src.GetType().GetProperty(propName).GetValue(src, null);
}
Using this answer to set the value by property name.
public static void SetPropertyValue(object obj, string propName, object value)
{
    obj.GetType().GetProperty(propName).SetValue(obj, value, null);
}
And this answer to list all properties
public static void CopyIfDifferent(object target, object source)
{
    foreach (var prop in target.GetType().GetProperties())
    {
        var targetValue = GetPropValue(target, prop.Name);
        var sourceValue = GetPropValue(source, prop.Name);
        // object.Equals handles nulls on either side without throwing.
        if (!Equals(targetValue, sourceValue))
        {
            SetPropertyValue(target, prop.Name, sourceValue);
        }
    }
}
Note: if you need to exclude some properties, you can implement that very easily by passing a list of property names to the method and checking in the if whether each property should be skipped.
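A hypothetical usage inside the matching loop (this assumes the Excel row type and employee_master expose identically named properties, which may not be true of the posted code):
// Instead of the manual per-property checks:
if (dbItem.customer == xlData.Customer)
{
    CopyIfDifferent(dbItem, xlData);
}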
Update:
I am updating this answer to provide a little more context as to why I suggested not going with a hand-rolled reflection-based solution for now; I also want to clarify that there is nothing wrong with such a solution per se, once you have identified that it fits the bill.
First of all, I assume from the code that this is a work in progress and therefore not complete. In that case, I feel that the code doesn't need more complexity before it's done and a hand-rolled reflection-based approach is more code for you to write, test, debug and maintain.
For example, right now you seem to have a situation where there is a simple 1:1 copy from the data in the Excel sheet to the data in the employee_master object. In that case reflection seems like a no-brainer, because it saves you loads of boring manual property assignments.
But what happens when HR (or whoever uses this app) come back to you with the requirement: If Field X is blank on the Excel sheet, then copy the value "Blank" to the target field, unless it's Friday, in which case copy the value "N.A".
Now a generalised solution has to accommodate custom business logic and could start to get burdensome. I have been in this situation and, unless you are very careful, it tends to turn into a mess in the long run.
I just wanted to point this out - and recommend at least looking at Automapper, because this already provides one very proven way to solve your issue.
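For reference, a rough AutoMapper sketch (the ExcelRow type name and the mapping configuration are assumptions, not something from the question):
// Requires the AutoMapper NuGet package (using AutoMapper;).
// One-time configuration; ExcelRow stands in for whatever type
// SessionWrapper.GetSession_Model().DataModel actually contains.
var config = new MapperConfiguration(cfg => cfg.CreateMap<ExcelRow, employee_master>());
var mapper = config.CreateMapper();

// Copies all configured members from the Excel row onto the tracked entity.
mapper.Map(xlData, dbEntity);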
In terms of efficiency concerns, they are only mentioned because the question mentioned them, and I wanted to point out that there are greater inefficiencies at play in the code as posted as opposed to the inefficiency of manually typing 40+ property assignments, or indeed the concern of only updating changed fields.
Why not rewrite the loop:
foreach (var xlData in excelData)
{
    //find the existing record in the database data:
    var existing = dbData.FirstOrDefault(d => d.customer == xlData.Customer);
    if (existing != null)
    {
        //it already exists in the database, update it
        //see below for notes on this.
    }
    else
    {
        //it doesn't exist, create an employee_master and add it to the context
        //or perform validation to see if the insert can be done, etc.
    }
    //and commit:
    context.SaveChanges();
}
This lets you avoid the initial if(dbData.Count>0) because you will always insert any row from the excel sheet that doesn't have a matching entry in dbData, so you don't need a separate block of code for first-time insertion.
It's also more efficient than the current loop because right now you are iterating every object in dbData for every object in xlData; that means if you have 1,000 items in each you have a million iterations...
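A small sketch of lookup-based matching (assuming customer values are unique; otherwise a ToLookup would be needed):
// Index the database rows once so each Excel row is matched in O(1)
// instead of scanning dbData for every xlData item.
var dbByCustomer = dbData.ToDictionary(d => d.customer);

foreach (var xlData in excelData)
{
    if (dbByCustomer.TryGetValue(xlData.Customer, out var existing))
    {
        // update 'existing' from xlData
    }
    else
    {
        // create a new employee_master and add it to the context
    }
}
context.SaveChanges();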
Notes on the update process and efficiency in general
(Note: I know the question wasn't directly about efficiency, but since you mentioned it in the context of copying properties, I just wanted to give some food for thought)
Unless you are building a system that has to do this operation for multiple entities, I'd caution against adding more complexity to your code by building a reflection-based property copier.
If you consider the number of properties you have to copy (i.e. the number of foo.x = bar.x type statements), and then consider the code required for a robust, fully tested and provably efficient reflection-based property copier (i.e. with a built-in cache so you don't constantly re-reflect type properties, a mechanism to specify exceptions, handling for edge cases where for whatever unknown reason you discover that for random column X the value "null" is to be treated a little differently in some cases, etc.), you may well find that the former is actually significantly less work :)
Bear in mind that even the fastest reflection-based solution will always still be slower than a good old fashioned foo.x = bar.x assignment.
By all means if you have to do this operation for 10 or 20 separate entities, consider the general case, otherwise my advice would be, manually write the property copy assignments, get it right and then think about generalising - or look at Automapper, for example.
In terms of only updating field that have changed - I am not sure you even need to. If the record exists in the database, and the user has just presented a copy of that record which they claim to be the "correct" version of that object, then just copy all the values they gave and save them to the database.
The reason I say this is because in all likelihood, the efficiency of only sending for example 4 modified fields versus 25 or whatever, pales into insignificance next to the overhead of the actual round-trip to the database itself; I'd be surprised if you were able to observe a meaningful performance increase in these kinds of operations by not sending all columns - unless of course all columns are NVARCHAR(MAX) or something :)
If concurrency is an issue (i.e. other users might be modifying the same data) then include a ROWVERSION type column in the database table, map it in Entity Framework, and handle the concurrency issues if and when they arise.
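For example, with a rowversion column the EF mapping can be as simple as the following sketch (the property name is an assumption):
// Requires using System.ComponentModel.DataAnnotations;
public class employee_master
{
    // ...existing mapped columns...

    // Maps to a ROWVERSION/TIMESTAMP column; EF includes it in the WHERE clause of
    // UPDATEs and throws DbUpdateConcurrencyException if the row changed since it was read.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}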

Updating coordinated dictionaries in C#

I have two dictionaries that are acting as a cache, let's call them d1 and d2, with d2 being a subset of d1.
When I refresh the dictionaries with new entries of d1, I would like the entries in d2 to be refreshed as well. It can be done the obvious way:
foreach (var update in updates)
{
    d1[update.id] = update;
    if (d2.ContainsKey(update.id))
        d2[update.id] = update;
}
But since I'd like to cache other subsets (d3, d4, d5, etc.), this may get unwieldy. So I added a "CopyFrom" method to my object, which lets me keep the existing reference by simply copying properties onto the object being updated.
foreach (var update in updates)
{
    d1[update.id].CopyFrom(update);
}
In this way, any other dictionaries that have a reference to the entry won't lose it when d1 gets updated.
I'd just like to know if I'm missing anything here? I'm just getting back into C# after a break, and my grasp on the obvious may be shaky :).
Why not have a CacheItem class that contains the payload (the current values in your dictionaries)? Then, for each key in your dictionaries, store a CacheItem containing what you're currently storing. If you store the same CacheItem object in multiple dictionaries, you only need to modify the payload of a CacheItem, and all the dictionaries containing it are updated.
foreach (var update in updates)
{
    if (d1.ContainsKey(update.id))
    {
        var cacheItem = d1[update.id];
        cacheItem.Payload = update;
    }
    else
    {
        d1[update.id] = new CacheItem(update);
    }
}
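A minimal CacheItem along those lines might look like this (the payload type is whatever your dictionaries currently store; Update is just a placeholder name):
// Every dictionary holds a reference to the same CacheItem instance,
// so replacing its Payload is immediately visible to all of them.
public class CacheItem
{
    public CacheItem(Update payload)
    {
        Payload = payload;
    }

    public Update Payload { get; set; }
}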
My answer assumes that your design of having multiple dictionaries, some being subsets of the main one is based on your requirements, and is a sound way to address them. It seems a little unusual to me.
Well, if the value of the dictionary is a reference type and you're only modifying it, you wouldn't need to do anything. If you are assigning a new reference, or the value is a value type, I'd change the subsets to be collections of the id type, holding the ids that belong to each subset, and whenever you need the value you'd still go to d1 for it.
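A sketch of that idea, keeping the subsets as sets of ids and always reading the value from d1 (the Update type name is a placeholder, and the id is assumed to be an int):
// d1 holds the actual values; the "subsets" only remember which ids belong to them.
var d1 = new Dictionary<int, Update>();
var d2Ids = new HashSet<int>();

// A refresh only touches d1; the subsets stay valid because they hold ids, not values.
foreach (var update in updates)
{
    d1[update.id] = update;
}

// Reading a subset entry always goes through d1.
foreach (var id in d2Ids)
{
    var value = d1[id];
    // use value...
}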
As for all types of cache keep in mind that a cache with a bad policy is another name for a memory leak.
Further: Caching Implies Policy.

How to display the content of asp.net cache?

We have an ASP.NET MVC web application which uses the HttpRuntime.Cache object to keep some static lookup values. We want to be able to monitor what's being cached in real time, to allow us to pinpoint possible caching issues.
Since this object isn't strongly typed when reading from it, we need to dynamically cast each entry to its concrete type.
Most of the cached items are of type IEnumerable<T>, where T can be any class we use in our project, or new classes that may be added as the project goes on.
Can someone give me a pointer on how to do this?
Thank you very much.
Use ASP.NET MVC itself.
public ActionResult Index()
{
    return View(HttpRuntime.Cache);
}
and for the view
Html.DisplayForModel()
You will want to use a custom object template (basically take the MVC template and turn off the depth restriction).
http://bradwilson.typepad.com/blog/2009/10/aspnet-mvc-2-templates-part-3-default-templates.html
On the object template you will want to alter
else if (ViewData.TemplateInfo.TemplateDepth > 1) { %>
<%= ViewData.ModelMetadata.SimpleDisplayText %>
And change the > 1 to either be a higher number like 5-10 or just completely remove this depth check (I'd probably start with a 5 and go from there).
You could try to JSON-serialize it using the JavaScriptSerializer class. That way you don't need to cast to the original type, as the Serialize method can take any object and output it in a human-readable JSON format. It might choke on some complex types; if that happens you can also try Newtonsoft's Json.NET.
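A quick sketch of dumping the cache that way (the method name is arbitrary; JavaScriptSerializer lives in System.Web.Extensions):
using System.Collections;
using System.Collections.Generic;
using System.Web;
using System.Web.Script.Serialization;

public string DumpCache()
{
    var serializer = new JavaScriptSerializer();
    var snapshot = new Dictionary<string, object>();

    // HttpRuntime.Cache enumerates as DictionaryEntry (key + cached value).
    foreach (DictionaryEntry entry in HttpRuntime.Cache)
    {
        snapshot[entry.Key.ToString()] = entry.Value;
    }

    // May throw on complex graphs (circular references etc.); Json.NET is more forgiving.
    return serializer.Serialize(snapshot);
}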
It's important to highlight that the cache lives in the App Domain of the MVC web app, so it wouldn't be accessible externally; you'd need to either implement some in-app monitoring or some inter-App-Domain communication to let an external app request the cache data from your MVC App Domain.
Well, I think what you are asking for is a way of determining what the type parameters of a generic type are at runtime - in your example the situation is complicated because you are after an interface, not an object instance.
Nevertheless this is still pretty straightforward; the following example should at least point you in the right direction:
object obj = new List<string>();
Type type = obj.GetType();
Type enumerable = type.GetInterfaces()
    .FirstOrDefault(t => t.IsGenericType && t.GetGenericTypeDefinition() == typeof(IEnumerable<>));
if (enumerable != null)
{
    Type listType = enumerable.GetGenericArguments()[0];
    if (listType == typeof(string))
    {
        IEnumerable<string> e = obj as IEnumerable<string>;
    }
}
But I can't really see how this helps you solve your underlying problem of monitoring your cache?
In the past, when attempting to monitor the performance of caches, I've found creating my own simple Perfmon counters very helpful for monitoring purposes - as a basic example, start with a "# Entries" counter (which you increment whenever an item is added to the cache and decrement whenever an item is removed from the cache), and add counters that you think would be useful as you go - a cache hit counter and a cache miss counter are normally pretty useful too.
You can also have your Perfmon counter break down caching information by having many instances of your counters, one for each type being cached (or in your case more likely the generic IEnumerable type being cached) - just as the "Process" perfmon counter group has an instance for each process on your system, you would have an instance for each type in the cache (plus you should also add a "_Total" instance or similar).
Recording cache operations with Perfmon counters allows you to monitor your cache performance in a fair amount of detail, with very little runtime performance overhead.
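A minimal sketch of registering and updating such a counter (the category and counter names are made up; creating the category needs appropriate permissions and is usually done by an installer):
using System.Diagnostics;

// One-time setup of the counter category.
if (!PerformanceCounterCategory.Exists("MyAppCache"))
{
    var counters = new CounterCreationDataCollection
    {
        new CounterCreationData("# Entries", "Items currently cached",
                                PerformanceCounterType.NumberOfItems32)
    };
    PerformanceCounterCategory.Create("MyAppCache", "Cache statistics",
        PerformanceCounterCategoryType.SingleInstance, counters);
}

// At runtime: increment on add, decrement on remove.
var entriesCounter = new PerformanceCounter("MyAppCache", "# Entries", readOnly: false);
entriesCounter.Increment();
entriesCounter.Decrement();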

Persist List<int> through App Shutdowns

Short Version
I have a list of ints that I need to persist through application shutdown. Not forever, but, you get the idea, I can't have the list disappear before it is dealt with. The method for dealing with it will remove entries from the list.
What are my options? XML?
Background
We have a WinForms app that uses local SQL Express DBs that participate in merge replication with a central server. This will be difficult to explain, but we also have (kind of) an I-Series 400 server that a small portion of data gets written to as well. For various reasons the I-Series is not available through replication, and as such all "writes" to it need to be done while it is available.
My first thought to solve this was to simply have a List object that stores the PKs that need to be updated. Then, after a successful sync, a method would check that list and call UpdateISeries() once for each PK in the list. I am pretty sure this would work, except in a case where they shut down inappropriately or lost power, etc. So, does anyone have better ideas on how to solve this? An XML file maybe, though I have never done that? I worry about actually creating a table in SQL Express because of replication... maybe unfounded, but...
For reference, UpdateISeries(int PersonID) is an existing method in a DLL that is used internally. Rewriting it as a potential solution to this issue really isn't viable at this time.
Sounds like you need to serialize and deserialize some objects.
See these .NET topics to find out more.
From the linked page:
Serialization is the process of converting the state of an object into a form that can be persisted or transported. The complement of serialization is deserialization, which converts a stream into an object. Together, these processes allow data to be easily stored and transferred.
If it is not important for the on-disk format to be human readable, and you want it to be as small as possible, look at binary serialization.
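If readability does matter, a List<int> also round-trips easily with XmlSerializer, along these lines (the file path and the pendingIds variable are placeholders):
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

var serializer = new XmlSerializer(typeof(List<int>));

// Write the pending ids out before shutdown (or after each change)...
using (var stream = File.Create("pending-ids.xml"))
{
    serializer.Serialize(stream, pendingIds);
}

// ...and read them back on the next start.
List<int> restored;
using (var stream = File.OpenRead("pending-ids.xml"))
{
    restored = (List<int>)serializer.Deserialize(stream);
}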
Using the serialization mechanism is probably the way to go. Here is an example using the BinaryFormatter.
// Requires System.IO and System.Runtime.Serialization.Formatters.Binary.
public void Serialize(List<int> list, string filePath)
{
    using (Stream stream = File.OpenWrite(filePath))
    {
        var formatter = new BinaryFormatter();
        formatter.Serialize(stream, list);
    }
}

public List<int> Deserialize(string filePath)
{
    using (Stream stream = File.OpenRead(filePath))
    {
        var formatter = new BinaryFormatter();
        return (List<int>)formatter.Deserialize(stream);
    }
}
If you already have and interact with a SQL database, use that, to get simpler code with fewer dependencies. Replication can be configured to ignore additional tables (even if you have to place them in another schema). This way, you can avoid a number of potential data corruption problems.
