Pause in an ASP.NET Chart Series - c#

The problem: I've got an ASP.NET 4.0 chart object which dynamically generates a series of latency data for a modem in the field. The overall page does a lot more and the chart also displays some other things, but that's outside the scope of this question. The series is generated from a SQL query against a database, and what I'd like is for the chart to literally show a gap when the remote goes offline. I'm displaying the last hour's worth of data from when the modem was in the network, but suppose the last time it was in the network was under an hour ago and it just went offline; I want to see its state as it went offline.
The conditions: Essentially, what if a modem was in the network, went offline, came back in, went offline, and came back again, all in under an hour? Ideally there would be gaps in the chart series. At the moment, the SQL query handles this by ignoring both NULLs and latency values under 300ms, because anything lower is technically impossible over satellite.
The question: If I were to tweak my query to not throw out the 0's, is there a way I could get it to show gaps in my graph? Can the charts support this? Would it instead require multiple series (which would be a bear to implement)? Is there another way I'm not thinking of?
The caveat: I can post code if need be, but it's not strictly necessary since this is more of a conceptual / is it possible kind of question.
I've been working on this project for weeks and am nearly done, so I've had to come back and address this issue. So few people have a good grasp of chart controls that I figured this would be the best place to ask, since tweaking the query and chart settings and a few hours of Googling haven't turned up anything.
Thanks so much in advance for your help.
Update: It turns out there is relevant code. I had completely forgotten there were different ways to dynamically generate chart series. It's true that it could be point-by-point. If I were doing it that way, it would certainly be easier. Instead, my series handler looks like this:
protected Boolean Chart_A_Line(string query, string seriesName, int queryType, string connectionString)
{
    Boolean chartEmpty = true;
    using (SqlConnection con = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(query, con);
        switch (queryType)
        {
            case 0:
                command.Parameters.AddWithValue("@RemoteId", remote);
                break;
            case 1:
                command.Parameters.AddWithValue("@IRID", inrouteId);
                break;
        }
        command.Parameters.AddWithValue("@NMS", nms);

        con.Open();
        using (SqlDataReader chartReader = command.ExecuteReader())
        {
            if (chartReader.HasRows)
            {
                chartEmpty = false;
                // Bind the reader's Time/Data columns straight onto the series.
                LatencyCNChart.Series[seriesName].Points.DataBindXY(chartReader, "Time", chartReader, "Data");
            }
        }
        return chartEmpty;
    }
}

I'm not too sure how you're adding data to your chart, but when you add a new point to a data series, you can set its IsEmpty property to true (as explained here).
You can also use DataManipulator.InsertEmptyPoints().
As long as you know that every 0 value in your data really does mean a gap, you can make the appropriate call for each 0 value to put a gap in the graph.
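For the handler above, a minimal sketch of that idea might look like the following. It loops over the rows itself instead of calling DataBindXY, and it assumes the same LatencyCNChart, seriesName and chartReader names as in your handler, plus numeric "Data" values; treat it as an illustration rather than drop-in code.
while (chartReader.Read())
{
    DateTime time = (DateTime)chartReader["Time"];
    double latency = Convert.ToDouble(chartReader["Data"]);

    int index = LatencyCNChart.Series[seriesName].Points.AddXY(time, latency);

    // Treat 0 (modem offline) as a gap rather than a real measurement.
    if (latency == 0)
    {
        LatencyCNChart.Series[seriesName].Points[index].IsEmpty = true;
    }
}

// Alternatively, keep DataBindXY and let the chart insert gaps for missing timestamps:
// LatencyCNChart.DataManipulator.InsertEmptyPoints(1, IntervalType.Minutes, seriesName);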

C#: Get Feedback from SqlConnection Object?

I am currently facing an issue where it takes quite a while to process information from a server, and I would like to provide active feedback for the user to know what is going on while the application appears to be just sitting around.
A little about the application: it allows the user to pull all databases from a specified server, along with all the content within those databases. This can sometimes take quite a while, since some of our databases reach 2TB in size. Each server can contain hundreds of databases, so if I try to load a server with 100 databases and 30% of them are over 100GB, it takes a good couple of minutes before the application is responsive again.
Currently I just have a simple loading message that says: "Please wait, this could take a while...". However, in my opinion, this is not really sufficient for something that can take a few minutes.
So I am wondering: is there a way to track the progress of the SqlConnection object as it executes the specified query? If so, what kind of details would I be able to provide, and are there any readily available resources to look over to better understand the solution?
I am hopeful that there is a way to do this without having to recreate the SqlConnection object altogether.
Thank you all for your help!
EDIT
Also, as another note: I am NOT looking for handouts of code here. I am looking for resources that will help me in this situation, if any are available. I have been looking into this issue for a few days already, and I figure this is the best place to ask for help at this point.
Extra
A more thorough explanation: I am wondering if there is a way to provide the user with names of databases, tables, views, etc that are currently being received.
For example:
Loading Table X From Database Y On Server Z
Loading Table A From Database B On Server Z
The SqlConnection has an InfoMessage event to which you can attach a handler.
You can also set the FireInfoMessageEventOnUserErrors property to true so that errors are routed through the event as well, instead of being thrown.
Now you can do something like:
private void OnInfoMessage(object sender, SqlInfoMessageEventArgs e)
{
    for (var index = 0; index < e.Errors.Count; index++)
    {
        var message = e.Errors[index];
        // Use the message object (e.g. message.Message) to report progress.
    }
}
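Wiring the handler up would look roughly like this (a sketch only; the connection string and the dbo.LoadDatabases procedure name are placeholders of mine, not anything from the question):
using (var connection = new SqlConnection(connectionString))
{
    // Route informational messages (severity 10 and below) to the handler above.
    connection.InfoMessage += OnInfoMessage;
    // Optionally also surface errors through the event instead of as exceptions.
    connection.FireInfoMessageEventOnUserErrors = true;

    connection.Open();
    using (var command = new SqlCommand("EXEC dbo.LoadDatabases", connection))
    {
        // Messages raised with RAISERROR ... WITH NOWAIT arrive while this is still running.
        command.ExecuteNonQuery();
    }
}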
Note that you should only treat messages with a severity (SqlError.Class) lower than 11 as informational; everything above that is a 'real' error.
Depending on what command you are using, the server sometimes already generates such info messages on its own (for example during backup verification).
You can also use this approach to report progress from a stored procedure or a query.
Pseudocode (I'm not a SQL guy):
RAISERROR('START', 1, 1) WITH NOWAIT; -- fires your OnInfoMessage handler
WHILE <some condition>
BEGIN
    -- execute something
    RAISERROR('Done something', 1, 1) WITH NOWAIT; -- fires your OnInfoMessage handler again
    -- etc.
END
Have fun with the parsing, because remember: such messages can also be generated by the server itself, so it is probably a good idea to start your own, relevant "errors" with a static prefix that cannot occur under normal circumstances.
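For example, the handler could ignore everything that does not start with an agreed-upon marker (the "##PROGRESS##" string below is just an arbitrary choice of mine):
private void OnInfoMessage(object sender, SqlInfoMessageEventArgs e)
{
    const string marker = "##PROGRESS##";
    foreach (SqlError error in e.Errors)
    {
        // Only treat low-severity messages carrying our marker as progress updates.
        if (error.Class < 11 && error.Message.StartsWith(marker))
        {
            var progressText = error.Message.Substring(marker.Length);
            // Update the UI with progressText, e.g. "Loading Table X From Database Y On Server Z".
        }
    }
}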

C# SQL query blocks server memory

I'm still a bit of a newbie and I have been assigned the task of maintaining previously written code.
I have a web app that simulates SQL Management Studio, limiting delete options for example, so basic users don't wreck our servers.
We have a function that takes a query or queries, and it works fine, but our server RAM gets blown up by complex queries. Maybe it's not that much data, but it's casting XML and doing other things in SQL that I still don't fully understand.
This is the actual function:
public DataSet ExecuteMultipleQueries(string queries)
{
    var results = new DataSet();
    using (var myConnection = new SqlConnection(_connectionString))
    {
        myConnection.Open();
        var sqlCommand = myConnection.CreateCommand();
        sqlCommand.Transaction = myConnection.BeginTransaction(IsolationLevel.ReadUncommitted);
        sqlCommand.CommandTimeout = AppSettings.SqlTimeout;
        sqlCommand.CommandText = queries.Trim();
        var dataAdapter = new SqlDataAdapter { SelectCommand = sqlCommand };
        dataAdapter.Fill(results);
        return results;
    }
}
I'm a bit lost; I've read many different answers, but either I don't understand them properly or they don't solve my problem.
I know I could use LINQ to SQL or Entity Framework. I tried them, but I really don't know how to use them with an "unknown" query. I could research them more, so if you think they would help me reach a solution, by all means say so and I will try to learn them.
So to the point:
When debugging, the function seems to stop at dataAdapter.Fill(results); that is the point where the server tries to answer the query, consumes all its RAM, and blocks. How can I solve this? I thought maybe I could make SQL return a certain amount of data, store it in a collection, then continue returning data, and keep going until there is nothing left to return, but I really don't know how to detect whether there is any data left to return from SQL.
I also thought about reading and storing in two different threads, but I don't know how data read on one thread can be handed to another thread asynchronously (and even less whether that would solve the issue).
So, yes, I don't have anything clear at all, so any guidance or tip would be highly appreciated.
Thanks in advance and sorry for the long post.
You can use pagination to fetch only part of the data.
Your code would then look something like this:
dataAdapter.Fill(results, 0, pageSize, "Results");
pageSize can be any size you want (100 or 250, for example), and the last argument is the name given to the DataTable created inside the DataSet.
You can get more information in this msdn article.
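If you go that route, a rough sketch of pulling the data page by page could look like this. The "Results" table name and the page size are just illustrative, and note that the full query still runs on the server; paging only limits how many rows the adapter buffers per call.
const int pageSize = 250;   // tune to taste
int startRecord = 0;
int fetched;
do
{
    // Fill returns how many rows were added in this call.
    fetched = dataAdapter.Fill(results, startRecord, pageSize, "Results");
    startRecord += fetched;
    // Process or flush the rows fetched so far before pulling the next page.
} while (fetched == pageSize);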
In order to investigate, try the following:
1. Start SQL Profiler (it is usually installed along with SSMS and can be started from Management Studio's Tools menu).
2. Make sure you set some filters (at least the NT username or the database you are profiling), so you catch queries as specific to you as possible.
3. Include starting events so you can see when your query starts (e.g. RPC:Starting).
4. Start your application.
5. Start the profiler trace before issuing the query (i.e. before filling the adapter).
6. Issue the query; you should see it start in the profiler.
7. Stop the profiler so it does not catch other queries (it puts overhead on SQL Server).
8. Stop the application (no reason to mess with the server until the analysis is done).
9. Take the query into SQL Management Studio. I expect a SELECT that returns a lot of data. Do not run it as-is, but add a TOP to limit its results, e.g. SELECT TOP 1000 <some columns> FROM ....
If the TOPed select runs slowly, you are returning too much data. This may be due to returning some large fields such as N/VARCHAR(MAX) or VARBINARY(MAX). One possible solution is to exclude these fields from the initial SELECT and lazy-load them as needed.
Check these steps and come back with your actual query, if needed.

How to keep all date selection in calendar?

I've written some code that reads dates from a database and puts them into a Calendar control. It all works, but if the user selects a date in the Calendar after the population, all of the dates selected through code disappear. Is there another way to flag the dates that were previously added from the database?
public void setFixturesControl()
{
    Database.m_dbConnection.Open();
    string query = "select * from date";
    SQLiteCommand input = new SQLiteCommand(query, Database.m_dbConnection);
    using (SQLiteDataReader reader = input.ExecuteReader())
    {
        while (reader.Read())
        {
            // Only select dates belonging to the league currently chosen in the UI.
            if (MainWindow.AppWindow.League.SelectedItem.ToString().Equals(reader["caption"].ToString()))
            {
                MainWindow.AppWindow.Calendar.SelectedDates.Add(DateTime.Parse(reader["data"].ToString()));
            }
        }
    }
    Database.m_dbConnection.Close();
}
Is there a way to prevent the dates from disappearing after a user click? Or something that allows me to recognize the dates added by the database population? Thanks.
After a long search on the net and a lot of reading of the Calendar class documentation, I came to two conclusions:
- To get what I want, I have to take advantage of a custom control.
- I could try to act via code by creating a delegate, but that is not really a simple thing.
I went with the first option; in particular, the post by David Veeneman addresses exactly my problem.
The final result is exactly what I want.
Everything is documented in that post, so it would be pointless to copy and paste it here. Thanks to that member, who at least for me managed to work around a shortcoming in Microsoft's control. Unfortunately, I am increasingly convinced that WPF is still immature and needs a few more years to mature. I hope my answer helps; if anyone manages to solve the problem more elegantly, please post that as well. Thank you.

Why are 700k database rows taking over 15 seconds to load into memory?

Using C# and .NET 3.5, with either an ADO connection or an OLEDB connection, filling a DataTable or DataSet with 700k rows of 3 columns each takes over 15 seconds.
Executing the actual select statement on the DB takes less than a second. The DB is on a different machine from the one querying it and processing the data. (Perhaps this adds time?)
The data looks like this:
public class Node
{
    public DateTime Timestamp;
    public float Value;
    public string NodeName;
}
Doing it with a SqlDataReader and calling reader.Read(), then manually putting the data in a new instance of the above class, adding it to a List<Node> also takes over 15 seconds.
Code looks like this:
List<Node> data = new List<Node>();
while (reader.Read())
{
    Node n = new Node();
    n.Timestamp = (DateTime)reader["Timestamp"];
    n.Value = (float)reader["Value"];
    n.NodeName = (string)reader["NodeName"];
    data.Add(n);
}
I measured this using the Stopwatch class, in release mode with optimization turned on in the project properties.
I get that it has to iterate each record, but I would have expected any machine today to be able to iterate 700k records in a few seconds, but not more.
What could be the reasons this takes over 15 seconds? Am I unreasonable to expect that this should be much faster?
EDIT Doing SqlDataReader.Read() by itself also takes over 15 seconds.
I think the problem lies in the container you're using. The List<> is being dynamically resized a lot. Try the following procedure instead (sketched in code below):
1. Run a query with COUNT to get the number of records (select only a single column).
2. Create the list with that capacity: List<Node> data = new List<Node>(count from above);
3. Run the normal query.
4. Fill the List<> as above.
This will prevent the list from constantly resizing.
Alternatively, to see whether this really is the problem, replace List<> with LinkedList<>, which doesn't have the resizing issue that List<> does.
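A rough sketch of steps 1 and 2 (the query text and the connection variable are illustrative, not taken from the question):
// 1. Get the row count first so the list can be allocated once.
int count;
using (var countCommand = new SqlCommand("SELECT COUNT(*) FROM Nodes", connection))
{
    count = (int)countCommand.ExecuteScalar();
}

// 2. Pre-allocate so List<T> never has to grow (and copy) while reading.
List<Node> data = new List<Node>(count);

// 3. Run the normal query and fill the list exactly as before.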
It is probably the network speed between the database and the machine you are executing the code on.
Another thing that happens in your loop is that the values from your query are unboxed. It might be worth trying the GetDateTime, GetFloat, and GetString methods, since you have so many records.
List<Node> data = new List<Node>();
while (reader.Read())
{
    Node n = new Node();
    n.Timestamp = reader.GetDateTime(0); // TODO: Check column numbers
    n.Value = reader.GetFloat(1);
    n.NodeName = reader.GetString(2);
    data.Add(n);
}
No conversions are performed in these methods. As the documentation's Remarks section puts it: "No conversions are performed; therefore, the data retrieved must already be a string, or an exception is generated."
I'm reading a lot of guesses, which could be right, but they are still guesses.
If you run it under the debugger and manually pause it a few times, and each time display the stack, you will be using the random pausing method.
It will tell you exactly what's taking the time and why - no guesswork.
If you want to use a profiler, you need one that samples on wall-clock time.
Otherwise you will have to choose between a) a sampler that gives you line-level inclusive percent, with no visibility into IO time, or b) an instrumenter, that only gives you function-level inclusive percent.
Neither one tells you why the time is being spent, only how much.
Whatever you choose, ignore the temptation to look at self time, which is misleading at best in any app that spends all its time in subfunctions, and it totally ignores IO.
If it is not a code issue, then I suspect it has to do with your query plan.
Make sure you are setting the right options before executing the query, and that they are in the same state in .NET and in MSSQL.
One interesting option that has been found to cause a performance hit before is ARITHABORT being enabled in SQL and off in .NET.
Try adding SET ARITHABORT ON before your query in the command.
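In code, that might be as simple as the following sketch (illustrative only; command is whatever SqlCommand you use to run the query):
// Prepend the option so it is set for the same session/batch as the query.
command.CommandText = "SET ARITHABORT ON; " + command.CommandText;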
Refer to: Slow performance of SqlDataReader

The riddle of the working broken query

I was going through some old code that was written in years past by another developer at my organization. Whilst trying to improve this code, I discovered that the query it uses had a very bad problem.
OdbcDataAdapter financialAidDocsQuery =
    new OdbcDataAdapter(
        @"SELECT a.RRRAREQ_TREQ_CODE,
                 b.RTVTREQ_SHORT_DESC,
                 a.RRRAREQ_TRST_DESC,
                 RRRAREQ_STAT_DATE,
                 RRRAREQ_EST_DATE,
                 a.RRRAREQ_SAT_IND,
                 a.RRRAREQ_SBGI_CODE,
                 b.RTVTREQ_PERK_MPN_FLAG,
                 b.RTVTREQ_PCKG_IND,
                 a.RRRAREQ_MEMO_IND,
                 a.RRRAREQ_TRK_LTR_IND,
                 a.RRRAREQ_DISB_IND,
                 a.RRRAREQ_FUND_CODE,
                 a.RRRAREQ_SYS_IND
          FROM FAISMGR.RRRAREQ a, FAISMGR.RTVTREQ b
          WHERE a.RRRAREQ_TREQ_CODE = b.RTVTREQ_CODE
            and a.RRRAREQ_PIDM = :PIDM
            AND a.RRRAREQ_AIDY_CODE = :AidYear ",
        this.bannerOracle);

financialAidDocsQuery.SelectCommand.Parameters.Add(":PIDM", OdbcType.Int, 32).Value = this.pidm;
financialAidDocsQuery.SelectCommand.Parameters.Add(":AidYear", OdbcType.Int, 32).Value = this.aidYear;

DataTable financialAidDocsResults = new DataTable();
financialAidDocsQuery.Fill(financialAidDocsResults);

FADocsGridView.DataSource = financialAidDocsResults;
FADocsGridView.DataBind();
The problem is that the column a.RRRAREQ_TRST_DESC does not exist. A fact you learn very quickly when running it in Oracle SQL Developer.
The strange thing?
This code works.
The gridview binds successfully. (It doesn't try to bind to that field.) And it's been in production for years.
So, my question is...why? I've never seen a bad query work. I've never seen Oracle allow it or a data provider hack around it.
Does anyone have any idea what's going on here?
Hmmm...A few things to check:
1. Does this code actually run? It may seem silly to suggest this, but there may be a newer file that replaced this one.
2. Is an exception being squelched by your code? (Anyone who would name columns like that is definitely capable of squelching those pesky exceptions.)
3. Is the exception being squelched by 3rd-party code? (Not as likely, but sometimes 3rd-party code prefers to use annoying error codes instead of exceptions.)
Past those suggestions, I'm not sure.
EDIT:
Revisiting the 2nd point, if you are working in ASP.NET, check that there is no global-level exception handler that is squelching exceptions. I ran into that problem on one site that I worked on and found dozens of exceptions in a single day.
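For reference, the kind of Global.asax handler that can hide an error like this might look something like the following sketch (illustrative only, not code from the question):
// Global.asax.cs: a global handler that quietly swallows every exception.
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // Maybe ex gets logged somewhere... maybe not.
    Server.ClearError();   // after this, the request carries on as if nothing happened
}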
Try running
select * from v$sql where sql_fulltext like '%a.RRRAREQ_TRST_DESC%'
shortly after you bind the grid. That will tell you whether the statement was actually seen by Oracle. Note that the v$sql query above contains the search string itself, so if the application's statement never reached Oracle, the only row you will see is that diagnostic query.
Use an ODBC trace log to see if this query is really sent to the database, and see what the database returns. Then use any other ODBC-based database tool and check whether this query works from that tool. As an ultimate test you can write a simple Python script. The easiest way is to use ActiveState Python 2.x with the odbc module included. The test code can look like:
import odbc

connection = odbc.odbc('dsnname/user/password')
cursor = connection.cursor()
cursor.execute("select ...")
for row in cursor.fetchall():
    print '\t'.join([str(r) for r in row])
If there was no error in your program but there is an error in the other tools, then compare their ODBC traces.
If I understand what the original author was trying to do, and with Banner that is never easy to figure out, then this query should be correct:
SELECT a.rrrareq_treq_code,
b.rtvtreq_short_desc,
c.rtvtrst_desc,
rrrareq_stat_date,
rrrareq_est_date,
a.rrrareq_sat_ind,
a.rrrareq_sbgi_code,
b.rtvtreq_perk_mpn_flag,
b.rtvtreq_pckg_ind,
a.rrrareq_memo_ind,
a.rrrareq_trk_ltr_ind,
a.rrrareq_disb_ind,
a.rrrareq_fund_code,
a.rrrareq_sys_ind
FROM faismgr.rrrareq a,
faismgr.rtvtreq b,
faismgr.rtvtrst c
WHERE a.rrrareq_treq_code = b.rtvtreq_code
AND a.rrrareq_trst_code = c.rtvtrst_code
AND a.rrrareq_pidm = :PIDM
AND a.rrrareq_aidy_code = :AidYear;
Well, let's file this in the false alarm category.
I decided to have our VAT send a copy of the DLL from test. I pulled it apart with Reflector and found, much to my embarrassment, that the query is right. Which makes sense.
I still can't figure out why my working copy would have one incorrect field name. To my knowledge, I had never touched this file before this week, but SVN doesn't have any history showing this error in previous versions.
So strange...maybe I'm losing my mind.
Thanks for all of the quality feedback on this question. I certainly learned some new troubleshooting techniques, and for that I'm very appreciative. :)
Happy coding,
Clif
