C# - Updating a row using LINQ

I am in the process of improving a console app, and at the moment I can't get it to update existing rows; it just creates a new row containing the newer information.
class Program
{
    // Get info for all drives that are ready
    List<DriveInfo> driveList = DriveInfo.GetDrives().Where(x => x.IsReady).ToList();
    Server server = new Server(); // Create the server object
    ServerDrive serverDrives = new ServerDrive();

    public static void Main()
    {
        Program c = new Program();
        c.RealDriveInfo();
        c.WriteInToDB();
    }

    public void RealDriveInfo()
    {
        // Insert information for one server
        server.ServerID = 0; // (PK) ID auto-assigned by SQL
        server.ServerName = System.Environment.MachineName;

        // Inserts the ServerDrives information.
        for (int i = 0; i < driveList.Count; i++)
        {
            // All information used in dbo.ServerDrives
            serverDrives.DriveLetter = driveList[i].Name;
            serverDrives.TotalSpace = driveList[i].TotalSize;
            serverDrives.DriveLabel = driveList[i].VolumeLabel;
            serverDrives.FreeSpace = driveList[i].TotalFreeSpace;
            serverDrives.DriveType = driveList[i].DriveFormat;
            server.ServerDrives.Add(serverDrives);
        }
    }

    public void WriteInToDB()
    {
        // Add the information to a SQL database using LINQ.
        DataClasses1DataContext db = new DataClasses1DataContext(@"sqlserver");
        db.Servers.InsertOnSubmit(server);
        db.SubmitChanges();
    }
}
What I would like is for the information gathered by the RealDriveInfo() method to update the currently stored rows: run the method, write its data over the existing entries, and only insert a new entry when one does not already exist, instead of inserting new rows every time there is newer information.
At the moment it runs the method, gathers the relevant data, then enters it as a new row in both tables.
Any help would be appreciated :)

It's creating a new db entry each time because you are making a new server object each time, then calling InsertOnSubmit() - which inserts (creates) a new record.
I'm not entirely sure what you are trying to do, but a db update would involve selecting an existing record, modifying it, then attaching it back to the data context and calling SubmitChanges().
This article on Updating Entities (LINQ to SQL) might help.
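For the poster's scenario, a minimal select-modify-submit sketch might look like the following (assuming ServerName identifies the machine's row and the property names from the question; illustrative only, not a drop-in fix):
using (var db = new DataClasses1DataContext(@"sqlserver"))
{
    var existing = db.Servers.SingleOrDefault(s => s.ServerName == server.ServerName);
    if (existing == null)
    {
        // No row for this machine yet: insert the whole object graph.
        db.Servers.InsertOnSubmit(server);
    }
    else
    {
        // Row exists: refresh each drive's figures in place.
        foreach (var drive in server.ServerDrives)
        {
            var dbDrive = existing.ServerDrives
                .SingleOrDefault(d => d.DriveLetter == drive.DriveLetter);
            if (dbDrive == null)
                existing.ServerDrives.Add(drive); // a new drive appeared
            else
            {
                dbDrive.TotalSpace = drive.TotalSpace;
                dbDrive.FreeSpace = drive.FreeSpace;
                dbDrive.DriveLabel = drive.DriveLabel;
                dbDrive.DriveType = drive.DriveType;
            }
        }
    }
    db.SubmitChanges(); // one call persists inserts and updates alike
}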

The problem is that you are trying to achieve update functionality with a tool that is designed to provide object-oriented querying. LINQ allows for updating existing records, but you have to use it in the proper way.
The proper way is to fetch the data you want to update from the DB, modify it, and then flush it back to the DB. So, assuming there is a table named Servers in your data context, here's an abstract example:
DataClasses1DataContext db = new DataClasses1DataContext(@"sqlserver");
// Extract all servers with ID > 1000 using a lambda expression
var servers = db.Servers.Where(srv => srv.ID > 1000);
foreach (var server in servers)
{
    server.Memory *= 2; // let's feed them up with memory
}
db.SubmitChanges(); // SubmitChanges lives on the DataContext, not on the table
Another way to achieve this is to create an entity and then attach it to the DataContext using the Table.Attach method, but that's quite a slippery slope, so I wouldn't recommend taking it until your LINQ skills have improved.
For a detailed description, see the documentation on SubmitChanges and on lambda expressions.
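For completeness, here is a minimal sketch of the Attach approach mentioned above (the entity values are hypothetical; attaching as modified generally requires a version/timestamp column, or UpdateCheck.Never on the members, to avoid concurrency errors):
// The detached entity already carries the new values.
var changed = new Server { ServerID = 42, ServerName = "WEB01-RENAMED" }; // hypothetical values
using (var db = new DataClasses1DataContext(@"sqlserver"))
{
    db.Servers.Attach(changed, true); // 'true' = attach as modified
    db.SubmitChanges();
}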

I understand what is being asked, and I do not have an easy answer.
For example, you have a form of values; several of the values are changed, maybe some calculated, or the form can contain a new record.
You create a record for the values:
var myrecord = new MyRecord();
Then fill in myrecord, doing whatever validation/calculations you want before you even touch the database itself.
// GetIDForRecordOrZeroIfANewRecord either returns an existing ID or zero if this is a new record.
myrecord.id = GetIDForRecordOrZeroIfANewRecord(uniqueName);
myrecord.value1 = txtValue1.Text;
myrecord.value2 = (DateTime)dtDate.Value;
and so on through the fields.
You now have a record; if id is zero you can add it as a new record. But if id refers to an existing record, you seem to have no choice with LINQ except to write a function that copies each value from myrecord into the fetched entity, something like:
var thisRecord = (from n in mydatacontext.MyTable
                  where n.id == myrecord.id
                  select n).Single();
thisRecord.value1 = myrecord.value1;
thisRecord.value2 = myrecord.value2;
and so on through all fields.
I do it, but it seems long-winded when I already have all of the information ready in myrecord. A simple function along the lines of
mydatacontext.MyTable.Update(myrecord);
would be ideal. Similar, in fact, to what I do with stored SQL functions in other databases; it simplifies transferring a record that is an update rather than new.
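LINQ to SQL has no such Update method, but a small generic helper can approximate one. Here's a minimal sketch under assumptions (the caller supplies a key predicate and a field-copying delegate; all names here are hypothetical, not library API):
using System;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

static class TableExtensions
{
    // Hypothetical upsert helper: update the tracked row if the predicate
    // finds one, otherwise insert the new record, then submit once.
    public static void InsertOrUpdate<T>(this DataContext db, T record,
        Expression<Func<T, bool>> match, Action<T, T> copyValues) where T : class
    {
        Table<T> table = db.GetTable<T>();
        T existing = table.FirstOrDefault(match); // translated to a WHERE clause
        if (existing == null)
            table.InsertOnSubmit(record);         // new row
        else
            copyValues(record, existing);         // overwrite the tracked entity's fields
        db.SubmitChanges();
    }
}
Usage would then be one call, with the copy delegate written once per table:
mydatacontext.InsertOrUpdate(myrecord,
    n => n.id == myrecord.id,
    (src, dst) => { dst.value1 = src.value1; dst.value2 = src.value2; });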

Related

RavenDB does not store my new documents

This is my code for storing a new list of my data as documents:
using (var session = DocumentStoreHolder.Store.OpenSession())
{
    // This list is used to put in the database -- careful!
    List<ClassA> List_01 = new List<ClassA>();

    // This list is from the database, used to compare with the new incoming data.
    List<ClassA> List_02 = session.Query<ClassA>().ToList();

    // This uses Except to remove the duplicates between the new data
    // and the current list of matches in the database. The returned data
    // is the new matches that do not yet exist in the DB.
    List_01 = List_02.Except(List_00, new IdComparer()).ToList();

    foreach (ClassA data in List_01)
    {
        // Why does it not store data in the DB?!
        session.Store(data);
    }
    session.SaveChanges();
}
After I compare the two lists, the problem is that the session does not seem to store my data at all. I don't know what is wrong in my code. Any ideas?
In your code, you're calling .Store(_match). Did you mean to call .Store(data)?
Also, note that your objects will only be in the database after you call .SaveChanges. Calling .Store assigns the ID and tells Raven you want to store them, but it won't take effect until .SaveChanges is called.
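In other words, the session is a unit of work; nothing is sent to the server until SaveChanges. A minimal sketch of the expected flow (Name is a hypothetical property):
using (var session = DocumentStoreHolder.Store.OpenSession())
{
    var doc = new ClassA { Name = "example" }; // hypothetical property
    session.Store(doc);    // assigns the ID and registers the entity; no server call yet
    session.SaveChanges(); // one round-trip that actually persists the document
}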

Inserting many rows with Entity Framework is extremely slow

I'm using Entity Framework to build a database. There are two models, Workers and Skills, and each Worker has zero or more Skills. I initially read this data into memory from a CSV file and store it in a dictionary called allWorkers. Next, I write the data to the database as such:
// Populate database
using (var db = new SolverDbContext())
{
    // Add all distinct skills to database
    db.Skills.AddRange(allSkills
        .Distinct(StringComparer.InvariantCultureIgnoreCase)
        .Select(s => new Skill
        {
            Reference = s
        }));
    db.SaveChanges(); // Very quick

    var dbSkills = db.Skills.ToDictionary(k => k.Reference, v => v);

    // Add all workers to database
    var workforce = allWorkers.Values
        .Select(i => new Worker
        {
            Reference = i.EMPLOYEE_REF,
            Skills = i.GetSkills().Select(s => dbSkills[s]).ToArray(),
            DefaultRegion = "wa",
            DefaultEfficiency = i.TECH_EFFICIENCY
        });

    db.Workers.AddRange(workforce);
    db.SaveChanges(); // This call takes 00:05:00.0482197
}
The last db.SaveChanges(); takes over five minutes to execute, which I feel is far too long. I ran SQL Server Profiler as the call is executing, and basically what I found was thousands of calls to:
INSERT [dbo].[SkillWorkers]([Skill_SkillId], [Worker_WorkerId])
VALUES (@0, @1)
There are 16,027 rows being added to SkillWorkers, which is a fair amount of data but not huge by any means. Is there any way to optimize this code so it doesn't take 5min to run?
Update: I've looked at other possible duplicates, such as this one, but I don't think they apply. First, I'm not bulk adding anything in a loop: I make a single call to db.SaveChanges() after every row has been added to db.Workers, which should be the fastest way to bulk insert. Second, I've set db.Configuration.AutoDetectChangesEnabled to false; the SaveChanges() call now takes 00:05:11.2273888 (in other words, about the same). I don't think this really matters, since every row is new and there are no changes to detect.
I think what I'm looking for is a way to issue a single INSERT statement containing all 16,000 skill rows.
One easy method is by using the EntityFramework.BulkInsert extension.
You can then do:
// Add all workers to database
var workforce = allWorkers.Values
    .Select(i => new Worker
    {
        Reference = i.EMPLOYEE_REF,
        Skills = i.GetSkills().Select(s => dbSkills[s]).ToArray(),
        DefaultRegion = "wa",
        DefaultEfficiency = i.TECH_EFFICIENCY
    });

db.BulkInsert(workforce);
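If pulling in a third-party package is not an option, plain ADO.NET's SqlBulkCopy can stream the join-table rows in one operation. A rough sketch, assuming the table and column names from the profiler output and a skillWorkerPairs source you would build from the dictionaries (both hypothetical here):
using System.Data;
using System.Data.SqlClient;

// Build an in-memory table matching dbo.SkillWorkers, then stream it in one shot.
var table = new DataTable();
table.Columns.Add("Skill_SkillId", typeof(int));
table.Columns.Add("Worker_WorkerId", typeof(int));
foreach (var pair in skillWorkerPairs) // hypothetical (SkillId, WorkerId) pairs
    table.Rows.Add(pair.SkillId, pair.WorkerId);

using (var bulk = new SqlBulkCopy(connectionString)) // connectionString: your DB connection
{
    bulk.DestinationTableName = "dbo.SkillWorkers";
    bulk.WriteToServer(table); // one streamed operation instead of 16,027 INSERTs
}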

LINQ to SharePoint 2010 getting error "All new entities within an object graph must be added/attached before changes are submitted."

I've been having a problem for some time, and I've exhausted all means of figuring this out for myself.
I have 2 lists in an MS SharePoint 2010 environment that hold personal physician data for a medical group...nothing special, just mainly text fields and a few lookup choice fields.
I am trying to write a program that will migrate the data over from List A to List B. I am using LINQ to SharePoint to accomplish this. Everything compiles just fine, but when it runs and hits the SubmitChanges() method, I get a runtime error that states:
"All new entities within an object graph must be added/attached before changes are submitted."
This issue must be outside my realm of C# knowledge, because I simply cannot find the solution. The problem DEFINITELY stems from the fact that some of the columns are of type "Lookup": when I create a new "Physician" entity in my LINQ query, if I comment out the fields that deal with the lookup columns, everything runs perfectly.
With the lookup columns included, if I debug and hit breakpoints before the SubmitChanges() method, I can inspect the new "Physician" entities created from the old list, and the fields, including the data from the lookup columns, look good; the data is in there the way I want it to be. It just flakes out whenever it tries to actually update the new list with the new entities.
I have tried several ways of working around this error, all to no avail. In particular, I have tried creating a brand-new EntityList and calling the Attach() method after each new "Physician" entity is created, but that just sent me around in circles, chasing other errors such as "ID cannot be null" and "Cannot insert entities that have been deleted".
I am no farther now than when I first got this error and any help that anyone can offer would certainly be appreciated.
Here is my code:
using (ProviderDataContext ctx = new ProviderDataContext("http://dev"))
{
    SPSite sitecollection = new SPSite("http://dev");
    SPWeb web = sitecollection.OpenWeb();
    SPList theOldList = web.Lists.TryGetList("OldList_Physicians");

    // Create new Physician entities.
    foreach (SPListItem l in theOldList.Items)
    {
        PhysiciansItem p = new PhysiciansItem()
        {
            FirstName = (String)l["First Name"],
            Title = (String)l["Last Name"],
            MiddleInitial = (String)l["Middle Init"],
            ProviderNumber = Convert.ToInt32(l["Provider No"]),
            Gender = ConvertGender(l),
            UndergraduateSchool = (String)l["UG_School"],
            MedicalSchool = (String)l["Med_School"],
            Residency = (String)l["Residency"],
            Fellowship = (String)l["Fellowship"],
            Internship = (String)l["Internship"],
            PhysicianType = ConvertToPhysiciantype(l),
            Specialty = ConvertSpecialties(l),
            InsurancesAccepted = ConvertInsurance(l),
        };
        ctx.Physicians.InsertOnSubmit(p);
    }
    ctx.SubmitChanges(); // this is where it flakes out
}
}
// These are conversion functions that I wrote to convert the data from the old list to the new lookup columns.
private Gender ConvertGender(SPListItem l)
{
    Gender g = new Gender();
    if ((String)l["Sex"] == "M")
    {
        g = Gender.M;
    }
    else g = Gender.F;
    return g;
}

// Process and convert the 'Physician Type', namely the distinction between MD (Medical Doctor) and
// DO (Doctor of Osteopathic Medicine). State regulations require this information to be attached
// to a physician's profile.
private ProviderTypesItem ConvertToPhysiciantype(SPListItem l)
{
    ProviderTypesItem p = new ProviderTypesItem();
    p.Title = (String)l["Provider_Title:Title"];
    p.Intials = (String)l["Provider_Title"];
    return p;
}

// Process and convert current Specialty and SubSpecialty data into the single multi-choice lookup column.
private EntitySet<Item> ConvertSpecialties(SPListItem l)
{
    EntitySet<Item> theEntityList = new EntitySet<Item>();
    Item i = new Item();
    i.Title = (String)l["Provider Specialty"];
    theEntityList.Add(i);
    if ((String)l["Provider SubSpecialty"] != null)
    {
        Item theSubSpecialty = new Item();
        theSubSpecialty.Title = (String)l["Provider SubSpecialty"];
        theEntityList.Add(theSubSpecialty);
    }
    return theEntityList;
}

// Process and add insurance accepted.
// Note this is a conversion from 3 boolean columns in the SP environment to a multi-select-enabled
// checkbox list.
private EntitySet<Item> ConvertInsurance(SPListItem l)
{
    EntitySet<Item> theEntityList = new EntitySet<Item>();
    if ((bool)l["TennCare"] == true)
    {
        Item TenncareItem = new Item();
        TenncareItem.Title = "TennCare";
        theEntityList.Add(TenncareItem);
    }
    if ((bool)l["Medicare"] == true)
    {
        Item MedicareItem = new Item();
        MedicareItem.Title = "Medicare";
        theEntityList.Add(MedicareItem);
    }
    if ((bool)l["Commercial"] == true)
    {
        Item CommercialItem = new Item();
        CommercialItem.Title = "Commercial";
        theEntityList.Add(CommercialItem);
    }
    return theEntityList;
}
}
So this may not be the answer you're looking for, but it's what has worked for me in the past. I've found updating lookup fields using LINQ to SharePoint to be quite frustrating: it frequently doesn't work, or doesn't work efficiently (forcing me to query an item by ID just to set the lookup value).
You can set up the entity so that it has an int property for the lookup ID (for each lookup field) and a string property for the lookup value. If, when you generate the entities using SPMetal, you don't generate the list that is being looked up, it will do this on its own. What I like to do is (using your entity as an example):
1. Generate the entity for just that one list (Physicians) in some temporary folder.
2. Pull out the properties for lookup ID and value (there are also private backing fields that need to come along for the ride) for each of the lookups, or just the ones I'm interested in.
3. Create a partial class file for Physicians in my actual project, so that regenerating the entire SPMetal file normally (without restricting it to just that list) doesn't overwrite my changes.
4. Paste the lookup ID and value properties into this partial Physicians class.
Now you will have 3 properties for each lookup field. For example, for PhysicianType there will be:
PhysicianType, which is the one that is currently there. This is great when querying data, as you can perform joins and such very easily.
PhysicianTypeId, which can occasionally be useful for queries when you only need the ID, as it makes them a bit simpler, but mostly I use it whenever setting the value. To set a lookup field you only need to set the ID. This is easy, and it has a good track record of actually working (correctly) in my experience.
PhysicianTypeValue, which can be useful when performing queries if you just need the lookup value as a string (meaning it will be the raw value, rather than something already parsed if it's a multivalued field, a user field, etc.). Sometimes I'd rather parse it myself, or just see what the underlying value is during development. Even if you don't use it and use the first property instead, I often bring it along for the ride, since I'm already doing most of the work to bring the PhysicianTypeId field over.
It seems a bit hacky, and contrary to the general design of LINQ to SharePoint. I agree, but it also has the advantage of actually working, and it isn't all that hard once you get the rhythm down and learn exactly what needs to be copied over to move the properties from one file to another.
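As a rough illustration of what ends up in the partial class, a lookup-ID property looks roughly like the sketch below. The names and the exact attribute arguments here are assumptions; copy the real ones from your SPMetal-generated file rather than typing them by hand:
// Partial class alongside the SPMetal-generated Physicians entity.
public partial class PhysiciansItem
{
    private System.Nullable<int> _physicianTypeId;

    // Lookup-ID companion to the PhysicianType property; setting the ID
    // is enough to point the lookup at an existing ProviderTypes item.
    [Microsoft.SharePoint.Linq.Column(Name = "PhysicianType", Storage = "_physicianTypeId",
        FieldType = "Lookup", IsLookupId = true)]
    public System.Nullable<int> PhysicianTypeId
    {
        get { return _physicianTypeId; }
        set { _physicianTypeId = value; }
    }
}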

Finalise SQLite 3 statement

I'm developing a Metro app using the Windows 8 Release Preview and C# (VS 2012). I'm new to SQLite; I integrated SQLite 3.7.13 into my app and it is working fine. Observe my code below:
var dbPath = Path.Combine(Windows.Storage.ApplicationData.Current.LocalFolder.Path, "Test.db");
using (var db = new SQLite.SQLiteConnection(dbPath))
{
    var data = db.Table<tablename>().Where(t => t.uploaded_bool == false && t.Sid == 26);
    try
    {
        int iDataCount = data.Count();
        int id;
        if (iDataCount > 0)
        {
            for (int i = 0; i < iDataCount; i++)
            {
                Elements = data.ElementAt(i);
                id = Elements.id;
                /*
                    Doing some code
                */
            }
            int i = db.Delete<tablename>(new tablename() { Sid = 26 });
        }
    }
    catch (Exception ex)
    {
    }
}
where "Sid" is column in my database and with number "26" i will get n number of rows
So, using a for loop i need to do some code and after the for loop I need to delete records of Sid(26) in database, So at this line
int i = db.Delete<tablename>(new tablename() { Sid = 26 });
I'm getting unable to close due to unfinalised statements exception, So my question is how to finalise the statement in sqlite3,Apparently SQLite3 has a finalize method for destroying previous DB calls but I am not sure how to implement this. Please help me.
Under the covers sqlite-net does some amazing things in an attempt to manage queries and connections for you.
For example, the line
var data = db.Table<tablename>().Where(...)
does not actually establish a connection or execute anything against the database. Instead, it creates an instance of a class called TableQuery, which is enumerable.
When you call
int iDataCount = data.Count();
TableQuery actually executes
GenerateCommand("count(*)").ExecuteScalar<int>();
When you call
Elements = data.ElementAt(i);
TableQuery actually calls
return Skip(index).Take(1).First();
Take(1).First() eventually calls GetEnumerator, which compiles a SQLite command, executes it with LIMIT 1, and serializes the result back into your data class.
So, basically, every time you call data.ElementAt you are executing another query. This is different from standard .NET enumerations where you are just accessing an element in a collection or array.
I believe this is the root of your problem. I would recommend that instead of getting the count and using a for (i, ...) loop, you simply do foreach (tablename tn in data). This will cause all records to be fetched at once instead of record by record in the loop. That alone may be enough to close the query and allow you to delete from the table during the loop. If not, I recommend you create a collection and add each Sid to it during the loop; then, after the loop, go back and remove the Sids in another pass. Worst-case scenario, you could close the connection between loops.
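A minimal sketch of that reshaped loop, under the question's assumptions (dbPath and the tablename class from the code above). Note that db.Execute deletes by Sid directly, whereas db.Delete<T> matches on the primary key only:
using (var db = new SQLite.SQLiteConnection(dbPath))
{
    // Materialize the query up front so no statement is left open.
    var rows = db.Table<tablename>()
                 .Where(t => t.uploaded_bool == false && t.Sid == 26)
                 .ToList();

    foreach (tablename row in rows)
    {
        // ... "Doing some code" with row.id here ...
    }

    // The earlier statement is finished, so the delete can proceed.
    db.Execute("delete from tablename where Sid = ?", 26);
}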
Hope that helps.

Optimize query on unknown database

A third-party application creates one database per project. All the databases have the same tables and structure; new projects may be added at any time, so I can't use any fixed EF schema.
What I do now is:
private IEnumerable<Respondent> getListRespondentWithStatuts(string db)
{
    return query("select * from " + db + ".dbo.respondent");
}

private List<Respondent> query(string sqlQuery)
{
    using (var sqlConx = new SqlConnection(Settings.Default.ConnectionString))
    {
        sqlConx.Open();
        var cmd = new SqlCommand(sqlQuery, sqlConx);
        return transformReaderIntoRespondentList(cmd.ExecuteReader());
    }
}

private List<Respondent> transformReaderIntoRespondentList(SqlDataReader sqlDataReader)
{
    var listeDesRépondants = new List<Respondent>();
    while (sqlDataReader.Read())
    {
        var respondent = new Respondent
        {
            CodeRépondant = (string)sqlDataReader["ResRespondent"],
            IsActive = (bool?)sqlDataReader["ResActive"],
            CodeRésultat = (string)sqlDataReader["ResCodeResult"],
            Téléphone = (string)sqlDataReader["Resphone"],
            IsUnContactFinal = (bool?)sqlDataReader["ResCompleted"]
        };
        listeDesRépondants.Add(respondent);
    }
    return listeDesRépondants;
}
This works fine, but it is deadly slow (20,000 records per minute). Do you have any hints on what strategy would be faster? For info, what is really slow is the transformReaderIntoRespondentList method.
Thanks!!
Generally speaking, SELECT * FROM is bad practice, and it could also be forcing you to pull back more data than is actually required. The transform operates on only a few columns; are more columns than required being returned? Consider replacing it with:
private IEnumerable<Respondent> getListRespondentWithStatuts(string db)
{
    return query("select ResRespondent, ResActive, ResCodeResult, Resphone, ResCompleted from "
        + db + ".dbo.respondent");
}
Also, guard against SQL injection attacks; concatenating strings into SQL queries is very dangerous.
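Since the database name is an identifier, it cannot be passed as a SqlParameter; one option (a sketch, not a complete defense) is to validate it against a strict whitelist pattern before concatenating:
using System.Text.RegularExpressions;

// Reject anything that is not a plain identifier before it reaches the query text.
private static string SafeDbName(string db)
{
    if (!Regex.IsMatch(db, @"^[A-Za-z_][A-Za-z0-9_]*$"))
        throw new ArgumentException("Invalid database name", "db");
    return "[" + db + "]"; // bracket-quote the validated identifier
}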
When pulling data from a DataReader, I find that ordinal (non-named) lookups work best:
var respondent = new Respondent
{
    CodeRépondant = sqlDataReader.GetString(0),
    IsActive = sqlDataReader.IsDBNull(1) ? (Boolean?)null : sqlDataReader.GetBoolean(1),
    CodeRésultat = sqlDataReader.GetString(2),
    Téléphone = sqlDataReader.GetString(3),
    IsUnContactFinal = sqlDataReader.IsDBNull(4) ? (Boolean?)null : sqlDataReader.GetBoolean(4)
};
I have not explicitly tested the performance difference in a long while, but it used to make a notable difference: the ordinal accessors avoid a name-based lookup and also avoid boxing/unboxing the values.
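If hard-coded ordinals feel brittle, a middle ground is to resolve each ordinal once with GetOrdinal before the read loop. A small sketch, assuming the column names from the question:
// Resolve ordinals once, then use the fast typed accessors inside the loop.
int ordRespondent = sqlDataReader.GetOrdinal("ResRespondent");
int ordActive = sqlDataReader.GetOrdinal("ResActive");

while (sqlDataReader.Read())
{
    string code = sqlDataReader.GetString(ordRespondent);
    bool? isActive = sqlDataReader.IsDBNull(ordActive)
        ? (bool?)null
        : sqlDataReader.GetBoolean(ordActive);
    // ... build the Respondent as before ...
}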
Other than that, without more info it is hard to say... do you need all 20,000 records?
UPDATE
Ran a simple local test case with 300,000 records and reduced the time to load all the data by almost 50%. I imagine these results will vary depending on the type of data being retrieved, but it still makes a notable difference in overall execution time. That being said, in my environment we are talking about a drop from 650 ms to just over 300 ms.
NOTE
If respondent is a view, what is likely "really slow" is the database building up the result set. Although the data reader starts processing information as soon as records are available, the ultimate bottleneck will be the database itself and/or network latency. Beyond the above optimizations, there is not much you can do in your code unless you can index the view/table to optimize the query and/or reduce the information required.
