Execute Pipelined Functions in ODP.NET - c#

I want to select data from a pipelined function in C# "just in time". That means the function pipes a row every second (like a status report) and I would like to fetch each row in C# immediately.
So far I have the following:
Oracle.DataAccess.Client.OracleConnection con = new Oracle.DataAccess.Client.OracleConnection("my_connection_string");
con.Open();
Oracle.DataAccess.Client.OracleCommand cmd = new Oracle.DataAccess.Client.OracleCommand("SELECT * FROM TABLE(MYPACKAGE.TEST_PIPELINE(10))", con);
cmd.CommandType = CommandType.Text;
Oracle.DataAccess.Client.OracleDataReader reader = cmd.ExecuteReader();
reader.FetchSize = 244; // record size in bytes
while (reader.Read())
{
    System.Diagnostics.Debug.WriteLine("Now: " + DateTime.Now.ToString("HH:mm:ss.ffff"));
    System.Diagnostics.Debug.WriteLine("ID: " + reader.GetValue(0));
    System.Diagnostics.Debug.WriteLine("Text: " + reader.GetValue(1));
}
My sample function returns n rows (n being the only function parameter) with a sleep of one second before each PIPE ROW. If I run this code I have to wait ten seconds and then get all ten rows at once.
BUT if I run it a second time it works perfectly: I get one row every second (ten rows in total). This seems to be due to statement caching; when I add the line
cmd.AddToStatementCache = false;
I get a block of ten rows even on the second run.
So the question is: does anyone have an idea how to get the ten rows "just in time" (line by line, every second) when I execute the code for the first time?
Thanks a lot!
Cheers
Christian
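For what it's worth, one common suspect here (an assumption on my part, not a confirmed fix) is ODP.NET's fetch buffering: the provider pulls FetchSize bytes of rows per round trip, so a buffer sized for several rows delivers them in bursts. Instead of hard-coding 244, the per-row size can be read from OracleCommand.RowSize after ExecuteReader, and the fetch size set to hold exactly one row. A minimal sketch of the sizing arithmetic (plain ints stand in for the ODP.NET properties, since this sketch has no database attached):

```csharp
using System;

class FetchSizeSketch
{
    // One row per round trip: fetch buffer = bytes-per-row * rows-per-fetch.
    // With ODP.NET the bytes-per-row value would come from OracleCommand.RowSize
    // (available after ExecuteReader); here it is a plain parameter.
    static long RowAtATimeFetchSize(long bytesPerRow)
    {
        const int rowsPerFetch = 1; // fetch exactly one row at a time
        return bytesPerRow * rowsPerFetch;
    }

    static void Main()
    {
        // The question hard-codes 244 bytes as the record size.
        Console.WriteLine(RowAtATimeFetchSize(244));
    }
}
```

In the real code this would amount to `reader.FetchSize = cmd.RowSize * 1;` set before the first `reader.Read()`.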

I tried to reproduce your function.
CREATE OR REPLACE PACKAGE BODY PHXDBA.PIPETEST AS
  FUNCTION TENSECOND RETURN TENSECOND_TT
    PIPELINED AS
    ts_ot TENSECOND_OT := TENSECOND_OT(NULL);
  BEGIN
    FOR ctr IN 1..10
    LOOP
      ts_ot.CNT := ctr;
      PIPE ROW (ts_ot);
      SYS.DBMS_LOCK.SLEEP(1);
    END LOOP;
    RETURN; -- a pipelined function must end with a bare RETURN
  END;
END PIPETEST;
/
Unfortunately this always returns after 10 seconds, even in RapidSQL. So either I've implemented it wrong or the SLEEP function disrupts the piped rows coming back.

Related

Why ExecuteScalar takes time for first call?

Below is my program to measure the time taken by ExecuteScalar over multiple iterations.
static void Main(string[] args)
{
    string sqlConnectionString = "Data Source=.\\SQLEXPRESS;Initial Catalog=Test;Integrated Security=True";
    SqlConnection connection = new SqlConnection(sqlConnectionString);
    for (int i = 0; i < 4; i++)
    {
        Stopwatch stopWatch = new Stopwatch();
        string sqlCommand = "Insert into TestTable (SNO, Name) values (" + i + ",' " + i + "')";
        SqlCommand command = new SqlCommand(sqlCommand, connection);
        connection.Open();
        stopWatch.Start();
        var result = command.ExecuteScalar();
        stopWatch.Stop();
        connection.Close();
        Console.WriteLine("Time elapsed to insert row " + i + " : " + stopWatch.ElapsedMilliseconds);
    }
    Console.ReadKey();
}
Output:
Time elapsed to insert row 0 : 3
Time elapsed to insert row 1 : 1
Time elapsed to insert row 2 : 0
Time elapsed to insert row 3 : 0
My question is: why does it take 3 milliseconds for the first iteration while the remaining iterations take less?
Thanks in advance.
This is due to connection pooling. Once you establish a connection (regardless of whether you close it or not), as long as your connection string remains unchanged the connections are pooled, which leads to quicker consecutive executions.
Most likely it is the establishment of the connection, which isn't really ended on Close() but rather returned to a connection pool and reused on the next iteration.
In general, you may also factor in query plan caching and actual data caching by the DBMS, but in this case that wouldn't really apply to INSERT operations (still, the necessary metadata for them might be cold during the first iteration).
First of all, you should be using the ExecuteNonQuery() method instead. Now, about the time: the first iteration has to establish the connection and then execute the query, but for the later iterations that is no longer the case.
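The pooling behaviour described above can be made explicit in the connection string. The keywords below (Pooling, Min Pool Size) are standard SqlClient connection-string keywords; a small sketch that just builds such a string:

```csharp
using System;

class PoolingDemo
{
    static void Main()
    {
        // Pooling is on by default for SqlConnection; these keywords make the
        // behaviour explicit. Any connection opened with an identical string
        // is served from the same pool, which is why iterations 1..3 above
        // are faster than iteration 0.
        string cs = "Data Source=.\\SQLEXPRESS;Initial Catalog=Test;"
                  + "Integrated Security=True;Pooling=true;Min Pool Size=1";
        Console.WriteLine(cs.Contains("Pooling=true"));
    }
}
```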

How to update data faster in sql using c#

I want to update my database but I think my code takes a lot of time doing it; it takes about 20 seconds or more to update. Is it possible to make it faster? If so, please help me.
This is my code:
for (int i = 0; i < listView1.Items.Count; i++)
{
    if (listView1.Items[i].SubItems[13].Text.ToString() == ("ACTIVE") || listView1.Items[i].SubItems[13].Text.ToString() == ("Active"))
    {
        for (int x = 0; x < listView1.Items[i].SubItems.Count; x++)
        {
            string a = listView1.Items[i].SubItems[7].Text;
            TimeSpan time = Convert.ToDateTime(DateTime.Now.ToString("MMMM dd, yyyy")) - Convert.ToDateTime(a.ToString());
            int days = (int)time.TotalDays;
            listView1.Items[i].SubItems[11].Text = days.ToString() + " day(s)";
            Class1.ConnectToDB();
            Class1.sqlStatement = "Update tblhd set aging = '" + days.ToString() + " day(s)" + "'";
            Class1.dbcommand = new SqlCommand(Class1.sqlStatement, Class1.dbconnection);
            Class1.dbcommand.ExecuteReader();
        }
    }
}
It seems that you could do it with a single update statement:
UPDATE tblhd SET aging = CAST(DATEDIFF(day, DateField, GETDATE()) AS varchar(10)) + ' day(s)' WHERE ItemId=...
But it's generally not a good idea to store user-friendly labels like 'day(s)' in the database.
Actually, it is hard to say what your SQL statement is supposed to do.
- Why are you using a database?
- What are you storing there?
- Why are you inserting the 'day(s)' string into the database instead of an integer day count?
- Why are you updating ALL rows every time?
- Why are you updating (and overwriting) the same rows every time?
Please describe your model and scenario so we understand how you want it to work and can help you.
For your information: right now your algorithm sets every row's aging value to the days value of the last ListView row. It overwrites previously stored and recently updated data, so this for loop is effectively useless.
Each time through the for loop you are making a call to the DB, which is not an efficient way to do this.
You can create a stored procedure that does the work in a single call to your DB.
- Do not open the connection multiple times.
- Use a using statement for connection creation: using (SqlConnection connection = Class1.ConnectToDB())
- Use parameterized SQL or stored procedures.
- Store the day count as an int so you do not have to convert it every time.
- Use ExecuteNonQuery instead of ExecuteReader.
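Whichever of the suggestions above is applied, the per-row date arithmetic can be checked in isolation. A small sketch of the age-in-days computation the question performs with TimeSpan, but without reparsing formatted strings on every iteration (the two dates are arbitrary sample values):

```csharp
using System;

class AgingDemo
{
    // Whole days between a stored date and "today", mirroring the question's
    // (DateTime.Now - storedDate).TotalDays logic on date-only values.
    static int AgeInDays(DateTime stored, DateTime today)
    {
        return (int)(today.Date - stored.Date).TotalDays;
    }

    static void Main()
    {
        DateTime stored = new DateTime(2024, 1, 1);
        DateTime today = new DateTime(2024, 1, 11);
        Console.WriteLine(AgeInDays(stored, today)); // 10
    }
}
```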

Padding 0's - MySQL

So, I have a column that is my key column and auto-increments, so it can't be varchar or anything fun.
Please hold back the "Erhmahgerd urse werb contrerls" as I like to control my own HTML flow and don't like handing it over to .NET. I've never had good experiences with that (and I like my code to be compliant). I wouldn't like this to be a flame war or anything - I just want to pad with zeroes. I feel the need to say this because it's happened way too many times before.
So, anyway.
DataTable tabledata = new DataTable();
using (OdbcConnection con = new OdbcConnection(conString))
{
    using (OdbcCommand com = new OdbcCommand("SELECT * FROM equipment_table", con))
    {
        OdbcDataAdapter myAdapter = new OdbcDataAdapter();
        myAdapter.SelectCommand = com;
        try
        {
            con.Open();
            myAdapter.Fill(tabledata);
        }
        catch (Exception)
        {
            throw; // rethrow without resetting the stack trace
        }
        finally
        {
            con.Close();
        }
    }
}
Response.Write("<table id=\"equipment_listtable\"><thead><tr><th>Equipment ID</th><th>Equipment Name</th><th>Equipment Description</th><th>Type</th><th>In Use?</th><th>iOS Version (if applicable)</th><th>Permission Level</th><th>Status</th><th>Asset Tag</th><th>Details</th><th>Change</th><th>Apply</th></tr></thead>");
foreach (DataRow row in tabledata.Rows)
{
    int counter = (int)row["Equipment_ID"];
    Response.Write("<tr>");
    foreach (var item in row.ItemArray)
    {
        Response.Write("<td>" + item + "</td>");
    }
    Response.Write("This stuff is irrelevant to my problem, so it is being left out... It uses counter though, so no complaining about not using variables...");
}
Response.Write("</table>");
As you can imagine, the value of my key column comes out like so in the generated table:
1
10
11
12
13
14
15
16
17
18
19
20
2
21
etc. I'd like to fix this with 0 padding. What is the best way to do this? Is there a way to target a SPECIFIC field while I'm generating the table? I've looked into DataRow.Item, but I've always found the MSDN documentation to be a bit difficult to comprehend.
Alternatively, could I SELECT * and then use mysql's lpad on ONE specific field within the *?
Thanks!
SELECT * is generally not a good idea to use. It inevitably causes more problems than the time it saves by writing the query. Specifying the columns explicitly will also allow you to use LPAD on the one column you want.
I was about to suggest using something like:
Response.Write("<td>" + item.ToString().PadLeft(2, '0') + "</td>");
But since you are just looping round each item and rendering them all the same way, the above would pad every cell.
So I think your best option is to change your query to specify every column. Then you can pad the field as you want.
Or use an ORDER BY if you are only concerned that they aren't being ordered correctly (i.e., ordered as chars, not ints).
Alternatively, create a variable for each cell read from the database and render each separately.
This will give you more customisation options, should you require them.
You really should always specify your column names explicitly and not use * anyway - see here.
If you insist on using * then just bring the padded value in as another field:
SELECT *, LPAD(Equipment_ID, 2, '0') AS Equipment_ID_Padded FROM equipment_table
Remember LPAD will truncate if your Equipment_ID is longer than 2 digits.
A better solution may be to just pad the values in code using String.Format or ToString("D2");
string paddedString = string.Format("{0:D2}", (int)row["Equipment_ID"]);
You can add padding in C# by using .ToString("D" + number of leading zeros);
eg. if counter = 34 and you call counter.ToString("D5"), you'll get 00034.
If you're working with strings, the easiest way would be Convert.ToInt32() and then apply the above.
If you'd rather keep using strings, just look into --printf whups wrong language-- String.Format.
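The padding techniques mentioned above (the numeric "D" format specifier, string.Format, and PadLeft on a string) can be seen side by side in a minimal sketch:

```csharp
using System;

class PaddingDemo
{
    static void Main()
    {
        int counter = 34;

        // Numeric format specifier: pad to 5 digits.
        Console.WriteLine(counter.ToString("D5"));             // 00034

        // Same result via string.Format.
        Console.WriteLine(string.Format("{0:D5}", counter));   // 00034

        // PadLeft works on an existing string representation.
        Console.WriteLine(counter.ToString().PadLeft(5, '0')); // 00034
    }
}
```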

multiple hits in loop after the break command

I've got a strange problem. I'm creating a NUI for an application and I bound some simple gestures to the right and left arrow keys. The problem occurs when I start the application: when I make a gesture for the first time, my application registers 2 hits in a row. After that it works 100% as I want. Only the start is the problem.
I'm adding two Joints and a timestamp to my history struct, which is put into the list:
this._history.Add(new HistoryItem()
{
    timestamp = timestamp,
    activeHand = hand,
    controlJoint = controlJoint
});
then in a foreach loop I'm comparing the data:
if (Math.Abs((hand.Position.X - item.controlJoint.Position.X)) < MainWindow.treshold && Math.Abs((hand.Position.Y - item.controlJoint.Position.Y)) < MainWindow.verticalTreshold)
If it hits, I instantly break the loop with
break;
and after that I clear the history list:
this._history.Clear();
So I don't get it. Why, right after the start, does it hit two times in a row?
// edit
history ArrayList initialization
private List<HistoryItem> _history = new List<HistoryItem>(16);
in the loop:
foreach (HistoryItem item in this._history)
{
    if ((hand.Position.X - item.controlJoint.Position.X) < MainWindow.treshold)
    {
        float tmp = (hand.Position.X - controlJoint.Position.X);
        MainWindow.slideNumber++;
        this._logger.Log("Next slide: " + MainWindow.slideNumber);
        this._logger.Log(hand.Position.X + " - " + controlJoint.Position.X + " = " + tmp + " | " + MainWindow.treshold);
        this.startTime = 0;
        this.gestureStart = false;
        answerFlag = true;
        System.Windows.Forms.SendKeys.SendWait("{Right}");
        break;
    }
}
Now, as you can see, I'm breaking here, so this code shouldn't be invoked a second time in a row.
How this clears something
// edit 2
I'll also add that to get into this part of the code, the gestureStart flag needs to be set to true. As you can see, after getting into the 'if' here I set it to false, so the code cannot instantly reach this part again.
// edit 3 WORKING WORKAROUND
I've created a kind of workaround using time control. I compare the timestamp of invoking the code with the timestamp of the last gesture recognition. If it's too fast (I mean a couple of ms, which is impossible for a real gesture), I don't allow it to hit an arrow. I'm not sure it's a perfect solution, but it is a working one.
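The time-control workaround described above can be sketched as a small debouncer: hits that arrive within the minimum interval of the previous accepted hit are ignored (the 500 ms threshold and millisecond timestamps are arbitrary sample values, not from the original code):

```csharp
using System;

class GestureDebouncer
{
    private readonly double _minIntervalMs;
    private double _lastHitMs = double.NegativeInfinity;

    public GestureDebouncer(double minIntervalMs)
    {
        _minIntervalMs = minIntervalMs;
    }

    // Accept a hit only if enough time has passed since the last accepted one.
    public bool TryHit(double nowMs)
    {
        if (nowMs - _lastHitMs < _minIntervalMs)
            return false; // too fast to be a real gesture: swallow it
        _lastHitMs = nowMs;
        return true;
    }

    static void Main()
    {
        var debouncer = new GestureDebouncer(500);
        Console.WriteLine(debouncer.TryHit(0));   // True  (first hit accepted)
        Console.WriteLine(debouncer.TryHit(5));   // False (5 ms later: ignored)
        Console.WriteLine(debouncer.TryHit(600)); // True  (well past threshold)
    }
}
```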
OK, my problem was the code. Of course it was a small bug, untraceable in the debugger. I used one function to analyse the history of frames, and the method was working in 2 modes. I've detached that and created 2 different methods, one for each task, and now it works great.

Looping through DELETE in code vs in stored procedure

I have been struggling with deleting massive quantities of old data from a database. Each of 5 different tables has as many as 50M rows that need to be deleted. No single delete statement could handle that quantity of data, so I have to loop, deleting a few at a time. My question is whether there is any noticeable performance increase from looping within a stored procedure instead of looping in the application code. Now for the specifics: I am using DB2 (9.7 CE) and coding in C#. For my stored procedure I use:
--#SET TERMINATOR ;
DROP PROCEDURE myschema.purge_orders_before;
--#SET TERMINATOR #
CREATE PROCEDURE myschema.purge_orders_before (IN before_date TIMESTAMP)
DYNAMIC RESULT SETS 1
P1: BEGIN
  DECLARE no_data SMALLINT DEFAULT 0;
  DECLARE deadlock_encountered SMALLINT DEFAULT 0;
  DECLARE deadlock_condition CONDITION FOR SQLSTATE '40001';
  DECLARE CONTINUE HANDLER FOR NOT FOUND
    SET no_data = 1;
  -- The deadlock_encountered attribute is throw-away,
  -- but a continue handler needs to do something,
  -- i.e., it's not enough to just declare a handler,
  -- it has to have an action in its body.
  DECLARE CONTINUE HANDLER FOR deadlock_condition
    SET deadlock_encountered = 1;
  WHILE (no_data = 0) DO
    DELETE FROM
      (SELECT 1 FROM myschema.orders WHERE date < before_date FETCH FIRST 100 ROWS ONLY);
    COMMIT;
  END WHILE;
END P1
#
--#SET TERMINATOR ;
This approach was unceremoniously lifted from this thread. My programmatic approach is as follows:
public static void PurgeOrdersBefore(DateTime date)
{
    using (OleDbConnection connection = DatabaseUtil.GetInstance().GetConnection())
    {
        connection.Open();
        OleDbCommand command = new OleDbCommand(deleteOrdersBefore, connection);
        command.Parameters.Add("#Date", OleDbType.DBTimeStamp).Value = date;
        int rows = 0;
        int loopRows = 0;
        int loopIterations = 0;
        log.Info("starting PurgeOrdersBefore loop");
        while (true)
        {
            command.Transaction = connection.BeginTransaction();
            loopRows = command.ExecuteNonQuery();
            command.Transaction.Commit();
            if (loopRows <= 0)
            {
                break;
            }
            if (log.IsDebugEnabled) log.Debug("purged " + loopRows + " in loop iteration " + loopIterations);
            loopIterations++;
            rows += loopRows;
        }
        if (log.IsInfoEnabled) log.Info("purged " + rows + " orders in " + loopIterations + " loop iterations");
    }
}
I performed a VERY primitive test in which I printed a timestamp at the start and finish and broke out of the loop after 10,000 rows in each. The outcome of said test was that the stored procedure took slightly over 6 minutes to delete 10,000 rows and the programmatic approach took just under 5 minutes. Being as primitive as it was, I imagine the only conclusion I can draw is that there is likely to be very minimal difference in practice, and keeping the loop in the C# code allows for much more dynamic monitoring.
All that said, does anyone else have any input on the subject? Could you explain what kind of hidden benefits I might receive were I to use the stored procedure approach? In particular, if Serge Rielau keeps an eye on this site, I would love to hear what you have to say (it seems he is the ninja all the others refer to when it comes to DB2 questions like this...)
-------------- Edit ---------------------
How about an export of some sort followed by a LOAD REPLACE? Has anyone done that before? Is there an example that I could follow? What implications would that have?
If the number of records to delete is a large fraction of the total, it can be cheaper to copy the good records into a temporary table, empty the original table, and copy the temp table back. The optimal way to do this is not consistent across RDBMSes; for example, some support TRUNCATE and others do not.
Try deleting in fixed-size batches. I assume you have problems with the size of the log file (which is why you can't just use a single DELETE FROM table command).
In SQL Server you would write the query like so:
DELETE TOP (10000)
FROM myschema.orders
WHERE date < before_date
Note that TOP is SQL Server syntax; DB2 has no TOP clause, and the equivalent there is the DELETE FROM (SELECT ... FETCH FIRST 10000 ROWS ONLY) form already used in your stored procedure. Then loop over this command until the rows deleted = 0.