I need to develop software to monitor a value from a pressure transducer using a PLC and store the values in a database. The problem is I need to read the values every 20 ms. I'm using the code below to save the data using Entity Framework and SQL Server, and a text box to check whether the timer can keep up, so I can compare its output with the SQL table.
Records made with the text box:
26/06/2017 - 10:46:35.236
26/06/2017 - 10:46:35.256
26/06/2017 - 10:46:35.276
26/06/2017 - 10:46:35.296
private void mmTimer_Tick(object sender, System.EventArgs e)
{
    counter++;
    lblCounter.Text = counter.ToString();
    txtDT.AppendText(DateTime.Now.ToString("dd/MM/yyyy - HH:mm:ss.fff") + "\n");

    using (DatabaseContext db = new DatabaseContext())
    {
        // Rebinds the grid; note this reloads the entire table on every tick
        storeDataBindingSource.DataSource = db.StoreDataList.ToList();
        StoreData objStoreData = storeDataBindingSource.Current as StoreData;

        var _StoreData = new StoreData
        {
            DateTime = DateTime.Now.ToString("dd/MM/yyyy - HH:mm:ss.fff")
        };
        db.StoreDataList.Add(_StoreData);
        db.SaveChanges();
    }
}
But when I look at the SQL table, the timestamps don't keep the same 20 ms spacing between inserts, probably because of the amount of work being done on every tick. Maybe I should use a buffer and insert everything at once.
Any suggestions? Thanks in advance.
Definitely buffer readings. As a further optimization you can bypass SaveChanges() (which performs row-by-row inserts) and use a TVP or SqlBulkCopy to insert batches into SQL Server.
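A minimal sketch of that combination, assuming the destination table is dbo.StoreData with Timestamp and Pressure columns, and that ReadPressure() and _connectionString stand in for the PLC read and the real connection string (all of these names are placeholders):
// Sketch only: table/column names, _connectionString and ReadPressure() are assumptions.
// Requires: using System.Data; using System.Data.SqlClient;
private DataTable _buffer = NewBufferTable();
private const int FlushThreshold = 500; // ~10 s of samples at 20 ms

private static DataTable NewBufferTable()
{
    var table = new DataTable();
    table.Columns.Add("Timestamp", typeof(DateTime));
    table.Columns.Add("Pressure", typeof(double));
    return table;
}

private void mmTimer_Tick(object sender, System.EventArgs e)
{
    _buffer.Rows.Add(DateTime.Now, ReadPressure());
    if (_buffer.Rows.Count >= FlushThreshold)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.StoreData" })
        {
            bulk.ColumnMappings.Add("Timestamp", "Timestamp");
            bulk.ColumnMappings.Add("Pressure", "Pressure");
            conn.Open();
            bulk.WriteToServer(_buffer); // one round trip for the whole batch
        }
        _buffer = NewBufferTable();
    }
}
Each tick now only appends a row in memory; the single bulk insert every few hundred samples replaces a few hundred SaveChanges() round trips. Flushing on a background thread would additionally keep the insert from delaying the timer.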
I have an ASP.NET MVC project using Dapper to read data from a database and I need to export to Excel.
Dapper is fast! ExecuteReader takes only 35 seconds.
But list.Add(InStock); takes far too much time: over 1020 seconds!
Do you have any idea why this is?
public List<InStock> GetList(string stSeId, string edSeId, string stSeDay, string edSeDay, string qDate)
{
    List<InStock> list = new List<InStock>();
    using (var conn = _connection.GetConnection())
    {
        try
        {
            conn.Open();
            //******************Only 35 seconds*****
            using (IDataReader reader = conn.ExecuteReader(fileHelper.GetScriptFromFile("GetInStock"),
                new { STSeId = stSeId, EDSeId = edSeId, STSeDay = stSeDay, EDSeDay = edSeDay, qDate = qDate }))
            {
                //******************Over 1020 seconds**********
                // Look up the column ordinals once instead of once per row
                int colA = reader.GetOrdinal("ColA");
                int colB = reader.GetOrdinal("ColB");
                int colC = reader.GetOrdinal("ColC");
                while (reader.Read())
                {
                    var inStock = new InStock();
                    inStock.ColA = reader.GetString(colA);
                    inStock.ColB = reader.GetString(colB);
                    inStock.ColC = reader.GetString(colC);
                    list.Add(inStock);
                }
                //*********************************************
            }
            return list;
        }
        catch (Exception)
        {
            throw; // rethrow as-is; "throw err;" would reset the stack trace
        }
    }
}
It's the database.
From Retrieve data using a DataReader,
The DataReader is a good choice when you're retrieving large amounts of data because the data is not cached in memory.
The key clue for your performance concern is "because the data is not cached in memory". While strictly an implementation detail, each call to Read() fetches new data from the database, while the List<InStock>.Add() call merely appends the already-built InStock to an in-memory list.
There are orders of magnitude of difference in processing time between disk access (even SSDs) and RAM, and orders of magnitude again between network requests and disk access. There's no conceivable way that anything other than the database access is the cause of most of your run time.
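A quick way to convince yourself is to time the two phases separately and drop the Add() entirely; a sketch against the same query (names taken from the question), where the drain loop still takes roughly the full time because Read() is doing the network work:
var sw = System.Diagnostics.Stopwatch.StartNew();
using (var reader = conn.ExecuteReader(fileHelper.GetScriptFromFile("GetInStock"),
    new { STSeId = stSeId, EDSeId = edSeId, STSeDay = stSeDay, EDSeDay = edSeDay, qDate = qDate }))
{
    Console.WriteLine("ExecuteReader: " + sw.Elapsed); // only issues the command

    sw.Restart();
    int rows = 0;
    while (reader.Read()) // each Read() may pull more rows over the network
        rows++;           // no List.Add at all, yet this loop dominates the time
    Console.WriteLine("Drain " + rows + " rows: " + sw.Elapsed);
}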
--
As a side note, you're going to exceed the maximum number of rows in an Excel worksheet (1,048,576 in current versions).
I have to record the coordinate data (a vector of x, y, z) of an eye-tracking system and save it for later evaluation. The whole eye-tracking system is integrated inside a head-mounted display, and the software runs on Unity.
After some research, I figured out that saving the data in a CSV file would probably be the easiest way. This is what I have so far:
void Update()
{
    string filePath = @"C:\Data.csv";
    string delimiter = ",";

    Vector3 leftGazeDirection = smiInstance.smi_GetLeftGazeDirection();
    Vector3 rightGazeDirection = smiInstance.smi_GetRightGazeDirection();

    float[][] output = new float[][] {
        new float[] { leftGazeDirection.x },
        new float[] { leftGazeDirection.y },
        new float[] { leftGazeDirection.z },
        new float[] { rightGazeDirection.x },
        new float[] { rightGazeDirection.y },
        new float[] { rightGazeDirection.z } };

    int length = output.GetLength(0);
    StringBuilder sb = new StringBuilder();
    for (int index = 0; index < length; index++)
        sb.AppendLine(string.Join(delimiter, output[index]));

    File.WriteAllText(filePath, sb.ToString());
}
What this gives me is a CSV file with the vector of the latest gaze direction. What I need is a record of all the gaze directions made in one session. Is it possible to get something like this?
Can I somehow modify my code to achieve this, or should I try something completely different?
Since I'm a newbie to Unity and programming in general, I just lack the vocabulary and don't know what to search for to solve my problem.
I would be very thankful if somebody could help me. :)
Welcome to StackOverflow. A good question, and well set out.
Presumably after you save this data away, you want to do something with it. Your life is going to be a lot easier if you create a database to store your data. There are tons of tutorials on this sort of thing, and since you are already writing in C#, it should not be too hard for you.
I would create a SQL Server database, either the Express or Developer edition; both are free for you.
I would steer away from Entity Framework or similar at this stage, and just use the basic SqlClient to connect and write to your database.
Once you start using this, adding something like a Session column to separate one session from the next becomes easy, and all sorts of analysis you might want to do on the data will also become much easier.
Hope this helps, and good luck with your project.
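A minimal sketch of that suggestion with the basic SqlClient, assuming a Gaze table with a Session column plus the six direction components; the table, column names, and connectionString are all placeholders:
// Sketch only: table/column names and connectionString are assumptions.
// Requires: using System.Data.SqlClient;
void SaveSample(int sessionId, Vector3 left, Vector3 right)
{
    const string sql =
        "INSERT INTO Gaze (Session, LX, LY, LZ, RX, RY, RZ) " +
        "VALUES (@session, @lx, @ly, @lz, @rx, @ry, @rz)";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@session", sessionId);
        cmd.Parameters.AddWithValue("@lx", left.x);
        cmd.Parameters.AddWithValue("@ly", left.y);
        cmd.Parameters.AddWithValue("@lz", left.z);
        cmd.Parameters.AddWithValue("@rx", right.x);
        cmd.Parameters.AddWithValue("@ry", right.y);
        cmd.Parameters.AddWithValue("@rz", right.z);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
Selecting one session later is then just WHERE Session = @session. Opening a connection and inserting a single row every frame would be too slow in practice, so you would queue samples and write them in batches, but the shape of the code stays the same.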
Yes, you can have the data for one entire session. Look no further than File.AppendAllText. In this scenario I'm assuming that you have 6 values for the two gaze directions. In that case you don't need a jagged array; it just wastes allocated memory.
Instead we can save all 6 values on each iteration of your loop:
string filePath = @"C:\Data.csv";
string delimiter = ",";

void Start()
{
    if (File.Exists(filePath))
        File.Delete(filePath);
}

void Update()
{
    Vector3 leftGazeDirection = smiInstance.smi_GetLeftGazeDirection();
    Vector3 rightGazeDirection = smiInstance.smi_GetRightGazeDirection();

    float[] output = new float[] {
        leftGazeDirection.x,
        leftGazeDirection.y,
        leftGazeDirection.z,
        rightGazeDirection.x,
        rightGazeDirection.y,
        rightGazeDirection.z };

    // One line per frame: six comma-separated values
    StringBuilder sb = new StringBuilder();
    sb.AppendLine(string.Join(delimiter, output));

    if (!File.Exists(filePath))
        File.WriteAllText(filePath, sb.ToString());
    else
        File.AppendAllText(filePath, sb.ToString());
}
AppendAllText will keep appending to the file until the end of execution. Note that this solution has a downside: since we delete the file at the start of each session, you will need to keep track of sessions manually.
If you want a separate file for each session rather than overriding the same file, include a date and time stamp when creating each file. Instead of deleting the old file for each new session, we then create a new file per session and write into it. Only the Start method needs to change, to append the timestamp to the file name; the rest of the Update loop stays the same.
string filePath = @"C:\Data";
string delimiter = ",";

void Start()
{
    // MM = months, mm = minutes, HH = 24-hour clock
    filePath = filePath + DateTime.Now.ToString("yyyy-MM-dd-HH-mm-ss") + ".csv";
}
I am developing a test application using C# and .NET. I am a programmer (embedded) but I am not very familiar with this environment.
I have made an application that collects sensor data from an Arduino and uses the FileHelpers API to populate a list of data for export to CSV.
// Initializing Log generation
FileHelperEngine<logItems> engine = new FileHelperEngine<logItems>();
List<logItems> logItems = new List<logItems>();
And the log save event
private void stopButton_Click(object sender, EventArgs e)
{
    tmrGUI.Enabled = false; // Stop sampling
    double maxValue = pressureTable.Max();
    PeakValueIndicator.Text = maxValue.ToString("0.00");
    engine.HeaderText = "Sample,Pressure,Time";
    engine.WriteFile(string.Concat(DateTime.Now.ToString("yyyyMMdd_HHmmss"), "_", TestNameControl.Text, ".csv"), logItems);
    logGenerator();
    TestNameControl.Text = string.Empty;
    startButton.Enabled = false;
    stopButton.Enabled = false;
}
The application works; however, if I run the data collection more than once without closing the program between runs, it appends the new sensor data to the end of the previous data list.
Is there a way to reset the list, either by using the FileHelpers API or by resetting the list in regular C#?
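A minimal sketch of the plain-C# route, assuming a start button handler symmetric to the stopButton_Click above (the handler name is a guess): clear the list whenever a new run begins.
private void startButton_Click(object sender, EventArgs e)
{
    logItems.Clear();      // List<T>.Clear() drops the samples from the previous run
    tmrGUI.Enabled = true; // resume sampling
    stopButton.Enabled = true;
}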
Please note that I am attempting to use stored procedures to insert and update table records. When I add the stored procedure to the database and update the *.edmx file, it sees the stored procedure I've added; however, I previously did the following with a different project:
var db = new MyMarinaEntities();
db.prBoatInsert(registrationNumber, manufacturer, modelYear, length, customerID);
Now, however, when I type "db." IntelliSense doesn't see the insert stored procedures. Also, when I search in the InlandMarinaEntities.Designer.cs file, it doesn't contain any
public int prBoatUpdate(Nullable<global::System.Int32> boatID, global::System.String registrationNumber, global::System.String manufacturer, Nullable<global::System.Int32> modelYear, Nullable<global::System.Int32> length, Nullable<global::System.Int32> customerID)
function. Does anyone have any idea as to why it is not adding prBoatUpdate to the *.Designer.cs file?
Alternatively, I understand that Entity Framework can generate Insert, Update and Delete operations for each table; however, when I generate the *.edmx file, I don't see any of these operations added, and I don't see any option to add them when going through the wizard that generates the *.edmx file. What am I missing? Please note that I am using Visual Studio 2010 and SQL Server 2008. TIA.
Please note that I worked out how to add and update database items using the Entity Framework auto-generated functions instead of using stored procedures. Here is how you load an object from the database:
private void LoadBoat(int boatID)
{
    using (var db = new MyMarinaEntities())
    {
        var boat = db.Boats.SingleOrDefault(b => b.BoatID == boatID);
        this.chkIsRental.Checked = boat.IsRental;
        this.chkInactive.Checked = boat.Inactive;
        this.txtLength.Text = boat.Length.ToString();
        this.txtManufacturer.Text = boat.Manufacturer;
        // ...
    }
}
Here is how to save changes to the boat:
protected void btnSave_click(object sender, EventArgs args)
{
    using (var dm = new MyMarinaEntities())
    {
        var boat = new Boat();
        boat.IsRental = this.chkIsRental.Checked;
        boat.Inactive = this.chkInactive.Checked;
        boat.Length = int.Parse(this.txtLength.Text);
        boat.Manufacturer = this.txtManufacturer.Text;
        // ...
        if (boatID.Value == "")
            dm.AddObject("Boats", boat); // new boat: attach it to the context
        dm.SaveChanges();
    }
}
So Entity Framework not only saves you from writing lots of Object Relational Mapping (ORM) code, it also saves you from writing SQL for stored procedures or commands.
My query seems to stall every so many passes through the loop.
status_text.Text = "Check existing records...";
status_text.Refresh();

using (StreamReader reader = new StreamReader(df_text_filename))
{
    using (StreamWriter writer = new StreamWriter(df_text_filename + "_temp"))
    {
        while ((product = reader.ReadLine()) != null)
        {
            if (product != _aff_svc.DFHeaderProd)
            {
                df_product = _product_factory.GetProductData(_vsi, product);
            }
            status_text.Text = "Checking for existing record of vendor record ID " + df_product.SKU;
            status_text.Refresh();
            if (_pctlr.GetBySKU(df_product.SKU) != null)
            {
                continue;
            }
            writer.WriteLine(product);
            Application.DoEvents();
        }
        writer.Close();
    }
    reader.Close();
}

System.IO.File.Delete(df_text_filename);
System.IO.File.Move(df_text_filename + "_temp", df_text_filename);
The code quickly runs through GetBySKU about ten times, pauses for about a second, then quickly does another ten records. This occurs throughout my processes, not just with this particular query.
It also occurs whether or not the Application.DoEvents() fires.
The other problem is that it is not consistent. It can behave like this for a few hours and then, all of a sudden, zip through the loop as intended.
My SQL server is running on the same machine as the program.
I looked into dedicating resources to the server to mitigate this behavior, but have found nothing.
It looks as if your program is parsing a text file for product info and, as it parses, executing a couple of SQL queries per line inside the while loop. It's almost always a bad idea to make SQL round trips inside a loop.
Instead, I would parse the file, gather all the product IDs, close the file, and then make one call to SQL passing one or more TVPs (table-valued parameters) to a sproc, returning all the data you need from that sproc, possibly as multiple result sets.
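A sketch of the TVP call, assuming a user-defined table type dbo.SkuList with a single Sku column and a sproc dbo.GetExistingSkus that joins it against the product table; all of these names, and connectionString, are placeholders:
// Sketch only: dbo.SkuList, dbo.GetExistingSkus, and connectionString are assumptions.
var skuTable = new DataTable();
skuTable.Columns.Add("Sku", typeof(string));
foreach (var sku in allSkusFromFile)
    skuTable.Rows.Add(sku);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetExistingSkus", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@Skus", skuTable);
    p.SqlDbType = SqlDbType.Structured; // marks the parameter as a TVP
    p.TypeName = "dbo.SkuList";

    conn.Open();
    var existing = new HashSet<string>();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            existing.Add(reader.GetString(0)); // SKUs already in the database
    // Filter the file once, in memory, against 'existing'.
}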
EDIT:
You mentioned in the comments that the file is very large, with lots of processing. You could consider batching the SQL work into groups of, say, 100.
Also, if your SQL isn't tuned, it will continually slow down as more data is written. There's not enough info in the question to analyze the indexes, query plans, etc., but have a look at that as the data set grows.
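For the batching idea, a sketch using Enumerable.Chunk (available in .NET 6+; on older frameworks a simple index loop does the same job), where SkuExistsBatch is a hypothetical wrapper around the TVP call sketched above:
// Sketch only: SkuExistsBatch wraps the TVP round trip shown earlier.
foreach (string[] batch in allSkusFromFile.Chunk(100))
{
    HashSet<string> existing = SkuExistsBatch(batch); // one round trip per 100 SKUs
    foreach (string sku in batch)
        if (!existing.Contains(sku))
            writer.WriteLine(sku); // keep only records not yet in the database
}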
I will work on a batch solution later; in the meantime, this works much faster than the previous code. No pauses at all.
List<Product> _prod_list = ProductDataFactory.GetProductListByVendor(vendor_name);

if (_prod_list.Count > 0)
{
    using (StreamReader reader = new StreamReader(df_text_filename))
    {
        using (StreamWriter writer = new StreamWriter(df_text_filename + "_temp"))
        {
            while ((product = reader.ReadLine()) != null)
            {
                if (product != _aff_svc.DFHeaderProd)
                {
                    df_product = _product_factory.GetProductData(_vsi, product);
                }
                if (_prod_list.Find(o => o.SKU == df_product.SKU) != null)
                {
                    continue;
                }
                writer.WriteLine(product);
            }
            writer.Close();
        }
        reader.Close();
    }

    System.IO.File.Delete(df_text_filename);
    System.IO.File.Move(df_text_filename + "_temp", df_text_filename);
}
It just pulls a list of the vendor's product objects and, if there are any, queries it for existing records; if not, it skips the whole process, of course. No need to hit the db in the loop either.
Thanks.
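As a further tweak (not part of the original answer): List<T>.Find rescans the whole list for every line, so for large product lists a HashSet<string> of SKUs makes each lookup O(1). A minimal sketch:
// Build the lookup once, before the read loop (requires using System.Linq)
var knownSkus = new HashSet<string>(_prod_list.Select(o => o.SKU));

// Inside the loop, replace the List.Find call with:
if (knownSkus.Contains(df_product.SKU))
{
    continue;
}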