Populating a GTT from C# does not preserve rows - c#

I have a package that lets me create a table from the output of a stored procedure's REF CURSOR variable. I use this with the DevPress XPO Source in order to return large result sets to my client application.
I used to create a regular (permanent) table, add a key, index it, and return the new table name to the client, which is then provided to the XPO source, and that works. However, using permanent tables is not the best solution, so I started using a global temporary table (GTT).
If I execute the package in TOAD, my data is preserved, but if I execute the command from C#, there is no data in my table right after execution. The connection is not closed yet, so I am not sure why my data is not there.
Is there something in the connection context that I can set to make sure that all executions happen in the same session? There is an EXECUTE IMMEDIATE statement that populates the table, and I think TOAD might keep everything in the same session when I execute the package.
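One thing I plan to check is whether both ExecuteNonQuery calls really run in the same Oracle session. Here is a minimal diagnostic sketch (assuming the ODP.NET managed driver and a placeholder connection string; my real code goes through a generic provider factory, but the idea is the same):
// Sketch: print the Oracle session ID before and after the two calls.
// Pooling=false is only for testing; it forces a dedicated physical session.
using System;
using Oracle.ManagedDataAccess.Client;

class SessionCheck
{
    static void Main()
    {
        var cs = "User Id=scott;Password=tiger;Data Source=orcl;Pooling=false"; // placeholder
        using (var conn = new OracleConnection(cs))
        {
            conn.Open();
            Console.WriteLine("SID before: " + GetSid(conn));
            // ... execute the procedure that opens the ref cursor,
            //     then SERVERMODE_UTIL.Build_Table_from_Cursor, on this same conn ...
            Console.WriteLine("SID after: " + GetSid(conn));
        }
    }

    static string GetSid(OracleConnection conn)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT SYS_CONTEXT('USERENV','SID') FROM DUAL";
            return Convert.ToString(cmd.ExecuteScalar());
        }
    }
}
If the two values differ, the cursor is opened and the GTT is populated in different sessions, and rows in a GTT are only visible to the session that inserted them.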
Here is some of my code:
FUNCTION Build_Table_from_Cursor(REF_CURSOR SYS_REFCURSOR, ID NUMBER, AddKeyField CHAR) RETURN VARCHAR2 AS
    QueryCursor  SYS_REFCURSOR;
    CursorNumber NUMBER;
    p_tablename  varchar2(30);
    pk_name      varchar2(30);
BEGIN
    QueryCursor := REF_CURSOR;
    CursorNumber := DBMS_SQL.TO_CURSOR_NUMBER(QueryCursor);
    p_tablename := 'TEMPTABLE';
    UTIL.create_table_from_cursor(CursorNumber, p_tablename); -- This creates the GTT with all the columns
    EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_tablename; -- To add the key this must be done, otherwise there is an error
    pk_name := substr(p_tablename, INSTR(p_tablename, '.') + 1);
    IF (AddKeyField = 'Y') THEN -- Sometimes the key field already exists
        EXECUTE IMMEDIATE 'ALTER TABLE ' || p_tablename || ' ADD (KEY_FIELD_ NUMBER)';
    END IF;
    EXECUTE IMMEDIATE 'CREATE UNIQUE INDEX ' || p_tablename || 'KEY_INDEX ON ' || p_tablename || ' (KEY_FIELD_)';
    EXECUTE IMMEDIATE 'ALTER TABLE ' || p_tablename || ' ADD CONSTRAINT pk_' || pk_name || ' PRIMARY KEY( KEY_FIELD_ )';
    QueryCursor := DBMS_SQL.TO_REFCURSOR(CursorNumber);
    PDS.UTIL.POPULATE_TABLE_FROM_CURSOR(QueryCursor, p_tablename, 1000); -- This populates the table
    EXECUTE IMMEDIATE 'UPDATE ' || p_tablename || ' SET KEY_FIELD_ = ROWNUM';
    COMMIT;
    RETURN p_tablename;
END Build_Table_from_Cursor;
This works perfectly when I execute it in TOAD.
But when I run this from C#:
using (var conn = factory.CreateConnection(Dal.ConnectionStrings[connectionString].ConnectionString))
{
    conn.Open();
    using (var cmd = factory.CreateCommand(CommandType.StoredProcedure, storedProcedureName))
    {
        var storedProcedureRow = commandExecuteDataSet.StoredProcedure[0];
        foreach (var parametersRow in commandExecuteDataSet.Parameters)
        {
            cmd.Parameters.Add(CustomDbProviderFactory.CreateParameter(parametersRow.Name, parametersRow.Value ?? "", GetDBTypeFromString(parametersRow.OracleDbType)));
        }
        cmd.Parameters.Add(CustomDbProviderFactory.CreateParameter(storedProcedureRow.RefCursorName, DbType.Object, ParameterDirection.Output, true));
        cmd.ExecuteNonQuery();
        var refCursor = cmd.Parameters[storedProcedureRow.RefCursorName].Value;
        cmd.Parameters.Clear();
        cmd.CommandText = "SERVERMODE_UTIL.Build_Table_from_Cursor";
        cmd.Parameters.Add(CustomDbProviderFactory.CreateParameter("REF_CURSOR", refCursor));
        cmd.Parameters.Add(CustomDbProviderFactory.CreateParameter("ID", biID));
        cmd.Parameters.Add(CustomDbProviderFactory.CreateParameter("AddKeyField", "Y"));
        cmd.Parameters.Add(CustomDbProviderFactory.CreateParameter("p_tablename", DbType.String, ParameterDirection.ReturnValue));
        cmd.ExecuteNonQuery();
        var tempTableName = cmd.Parameters["p_tablename"].Value.ToString();
        tempTableName = tempTableName.Substring(tempTableName.IndexOf(".") + 1);
    }
}
As part of a larger package, this is the code that is executed to create the GTT:
l_statement := 'CREATE GLOBAL TEMPORARY TABLE ' || l_tablename || ' (' || CHR(13) || CHR(10) || l_statement || CHR(13) || CHR(10) || ') ON COMMIT PRESERVE ROWS';
execute immediate l_statement;
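For what it's worth, here is a small check I can run right after Build_Table_from_Cursor returns, on the same open connection (a sketch; GttDiagnostics is just a name made up for this example):
using System;
using System.Data.Common;

static class GttDiagnostics
{
    // Count rows in the GTT using the SAME open connection that populated it.
    // With ON COMMIT PRESERVE ROWS the rows are only visible to the inserting
    // session, so a count of 0 here (while TOAD shows data) points at a session mismatch.
    public static long CountRowsInSameSession(DbConnection openConn, string tempTableName)
    {
        using (var check = openConn.CreateCommand())
        {
            check.CommandText = "SELECT COUNT(*) FROM " + tempTableName;
            return Convert.ToInt64(check.ExecuteScalar());
        }
    }
}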

Related

oracle stored procedure return resultset

Can I define the stored procedure without using the RefCursor? (something like "return refcursor")
I do not want to use OracleDbType.RefCursor because it cannot be sent as a DbParameter for other databases.
Also, DbParameter.DbType = OracleDbType.RefCursor; is not supported.
I do not want to define "retval IN OUT SYS_REFCURSOR" in the code below. Is there another way?
CREATE OR REPLACE PROCEDURE SYSTEM.customer_select_row(
    p_email IN CUSTOMER.Email%TYPE,
    p_password IN CUSTOMER."Password"%TYPE,
    retval IN OUT SYS_REFCURSOR
)
IS
BEGIN
    OPEN retval FOR
        SELECT CustomerId, FirstName, LastName FROM CUSTOMER
        WHERE Email = p_email AND "Password" = p_password;
END customer_select_row;
You could use a pipelined function.
It is a function that works exactly like a table; you can call it this way:
SELECT *
FROM TABLE(TEST_PIPELINE.STOCKPIVOT(10));
Here TEST_PIPELINE.STOCKPIVOT(10) is the function. You can build it like this:
create or replace PACKAGE TEST_PIPELINE AS
  -- declare a record type here
  TYPE t_record IS RECORD
  (
    field_1 VARCHAR2(100),
    field_2 VARCHAR2(100)
  );
  -- declare a table type based on the record type above
  TYPE t_collection IS TABLE OF t_record;
  -- declare that the function returns the collection PIPELINED
  FUNCTION StockPivot(P_LINES NUMBER) RETURN t_collection PIPELINED;
END;
/
create or replace PACKAGE BODY TEST_PIPELINE IS
  FUNCTION StockPivot(P_LINES NUMBER) RETURN t_collection PIPELINED IS
    -- a single record used as the current output row
    T_LINE T_RECORD;
  BEGIN
    -- example loop that pushes some rows into the pipeline
    FOR I IN 1..P_LINES LOOP
      -- fill the current row
      T_LINE.field_1 := 'LINE - ' || I;
      T_LINE.field_2 := 'LINE - ' || I;
      -- pipe the row into the result set
      PIPE ROW (T_LINE);
    END LOOP;
    RETURN; -- a pipelined function ends with a bare RETURN; the rows are returned via PIPE ROW
  END;
END;
/
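Called from C#, the pipelined function behaves like a table, so no ref cursor parameter is needed on the client side. A minimal sketch, assuming the ODP.NET managed driver and a placeholder connection string:
using System;
using Oracle.ManagedDataAccess.Client;

class PipelinedExample
{
    static void Main()
    {
        var cs = "User Id=scott;Password=tiger;Data Source=orcl"; // placeholder
        using (var conn = new OracleConnection(cs))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            // The pipelined function is queried like a table - no ref cursor parameter required.
            cmd.CommandText = "SELECT field_1, field_2 FROM TABLE(TEST_PIPELINE.STOCKPIVOT(:n))";
            cmd.Parameters.Add("n", 10);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0) + " | " + reader.GetString(1));
                }
            }
        }
    }
}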

PostgreSQL error: query string argument of EXECUTE is null

I have a table called evidence with a trigger that calls a stored procedure which basically partitions the table by month. However, I get an obscure error when I start inserting lots of rows under load:
Npgsql.NpgsqlException: query string argument of EXECUTE is null
Severity: ERROR Code: 22004
at Npgsql.NpgsqlState.<ProcessBackendResponses_Ver_3>d__a.MoveNext() in c:\C#Apps\github.npgsql.Npgsql.stock\src\Npgsql\NpgsqlState.cs:line 890
at Npgsql.ForwardsOnlyDataReader.GetNextResponseObject() in c:\C#Apps\github.npgsql.Npgsql.stock\src\Npgsql\NpgsqlDataReader.cs:line 1175
at Npgsql.ForwardsOnlyDataReader.GetNextRowDescription() in c:\C#Apps\github.npgsql.Npgsql.stock\src\Npgsql\NpgsqlDataReader.cs:line 1191
at Npgsql.ForwardsOnlyDataReader.NextResult() in c:\C#Apps\github.npgsql.Npgsql.stock\src\Npgsql\NpgsqlDataReader.cs:line 1377
at Npgsql.NpgsqlCommand.ExecuteNonQuery() in c:\C#Apps\github.npgsql.Npgsql.stock\src\Npgsql\NpgsqlCommand.cs:line 523
My system has automatic retry functionality and eventually every record gets inserted into the database, but only after many exceptions when the load is high.
Database is PostgreSQL 9.3 on a CentOS 6 server and client is C# .NET using Npgsql driver.
Table:
CREATE TABLE evidence
(
id uuid NOT NULL,
notification_id uuid NOT NULL,
feedback character varying(200),
result character varying(20),
trigger_action_type character varying(200),
trigger_action_id uuid,
data_type integer NOT NULL,
data bytea,
name character varying(30),
CONSTRAINT pk_evidence PRIMARY KEY (id)
);
Trigger:
CREATE TRIGGER evidence_move_to_partition_tables
BEFORE INSERT
ON evidence
FOR EACH ROW
EXECUTE PROCEDURE partition_evidence_by_month();
Trigger Function:
CREATE OR REPLACE FUNCTION partition_evidence_by_month()
RETURNS trigger AS
$BODY$
DECLARE
_notification_id uuid;
_raised_local_time timestamp without time zone;
_table_name character varying(35);
_start_date timestamp without time zone;
_end_date timestamp without time zone;
_table_space character varying(50) := 'ls_tablespace2';
_query text;
BEGIN
_notification_id := NEW.notification_id;
SELECT raised_local_time FROM notifications WHERE id=_notification_id INTO _raised_local_time;
_start_date := date_trunc('month', _raised_local_time);
_end_date := _start_date + '1 month'::interval;
_table_name := 'evidence-' || to_char(_start_date, 'YYYY-MM');
-- check to see if table already exists
PERFORM 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
AND c.relname = _table_name
AND n.nspname = 'public';
-- if the table doesn't exist, then create it now
IF NOT FOUND THEN
-- create partition table
_query := 'CREATE TABLE public.' || quote_ident(_table_name) || ' ( ) INHERITS (public.evidence)';
EXECUTE _query;
-- alter owner
--EXECUTE 'ALTER TABLE public.' || quote_ident(_table_name) || ' OWNER TO postgres';
-- add index
--EXECUTE 'ALTER TABLE public.' || quote_ident(_table_name) || ' ADD PRIMARY KEY (id)';
END IF;
-- move the data to the partition table
EXECUTE 'INSERT INTO public.' || quote_ident(_table_name) || ' VALUES ($1.*)' USING NEW;
RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
Calling Code:
using (var cmd = db.CreateCommand())
{
cmd.CommandText = #"INSERT INTO evidence
(id, notification_id, feedback, result, trigger_action_type,
trigger_action_id, data_type, data, name)
VALUES (#id,#nid,#feedback,#result,#tat,#taid,#dt,#data,#name)";
cmd.Parameters.AddWithValue("#id", evItem.ID);
cmd.Parameters.AddWithValue("#nid", evItem.NotificationID);
cmd.Parameters.AddWithValue("#feedback", evItem.Feedback);
cmd.Parameters.AddWithValue("#result", evItem.Result);
cmd.Parameters.AddWithValue("#tat", evItem.TriggerActionType);
cmd.Parameters.AddWithValue("#taid", evItem.TriggerActionID);
cmd.Parameters.AddWithValue("#dt", (int)evItem.DataType);
cmd.Parameters.AddWithValue("#data", evItem.Data);
cmd.Parameters.AddWithValue("#name", evItem.Name);
cmd.ExecuteNonQuery();
}
Why would this bizarre error appear only when the system is under load? What can I do to prevent it happening?
Thanks!
The error message is
query string argument of EXECUTE is null
You have two EXECUTE commands:
_query := 'CREATE TABLE public.'
|| quote_ident(_table_name) || ' ( ) INHERITS (public.evidence)';
EXECUTE _query;
...
EXECUTE 'INSERT INTO public.'
|| quote_ident(_table_name) || ' VALUES ($1.*)' USING NEW;
The only part that can be NULL is _table_name.
The only chance for _table_name to become NULL is here:
SELECT raised_local_time FROM notifications WHERE id=_notification_id
INTO _raised_local_time;
So the cause must be one of two reasons:
NEW.notification_id is NULL.
There is no row in notifications for the given NEW.notification_id.
Try this modified trigger function for debugging:
CREATE OR REPLACE FUNCTION partition_evidence_by_month()
RETURNS trigger AS
$func$
DECLARE
_table_name text;
BEGIN
SELECT 'evidence-' || to_char(raised_local_time, 'YYYY-MM')
FROM public.notifications -- schema-qualify to be sure
WHERE id = NEW.notification_id
INTO _table_name;
IF _table_name IS NULL THEN
RAISE EXCEPTION '_table_name is NULL. Should not occur!';
END IF;
IF NOT EXISTS ( -- create table if it does not exist
SELECT 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
AND c.relname = _table_name
AND n.nspname = 'public') THEN
EXECUTE 'CREATE TABLE public.'
|| quote_ident(_table_name) || ' ( ) INHERITS (public.evidence)';
END IF;
EXECUTE 'INSERT INTO public.'
|| quote_ident(_table_name) || ' VALUES ($1.*)' -- expand the NEW row
USING NEW; -- write data to the partition table
RETURN NULL;
END
$func$ LANGUAGE plpgsql;
Remove unused variables and simplify code. (This is obviously a simplified example.)
Among other things, you don't need date_trunc() at all. Simply feed the original timestamp to to_char().
No point in using varchar(n). Simply use text or varchar.
Avoid too many assignments where unnecessary - comparatively expensive in PL/pgSQL.
Add a RAISE to check my hypothesis.
If you get the error message, discriminating between the two possible causes would be the next step. Should be trivial ...
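If you also want to rule out the two causes from the client side before inserting, a small existence check works. A sketch assuming Npgsql and the notifications table described above (the class name is made up):
using System;
using Npgsql;

static class NotificationCheck
{
    // Returns true only if a matching notifications row exists, i.e. the trigger
    // can derive a partition name from raised_local_time.
    public static bool NotificationExists(NpgsqlConnection conn, Guid notificationId)
    {
        using (var cmd = new NpgsqlCommand(
            "SELECT EXISTS (SELECT 1 FROM notifications WHERE id = @id)", conn))
        {
            cmd.Parameters.AddWithValue("@id", notificationId);
            return (bool)cmd.ExecuteScalar();
        }
    }
}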

Large SQL script called every 5 minutes is crashing the IIS pool?

Context:
I have a dozen servers.
Each server has an IIS site that executes the following large SQL script every 5 minutes.
On some servers, the application pool that hosts the site crashes. The pool contains only this site.
I currently have to recycle the pool by hand after each crash.
So there is an issue with the site and, I think, with the large SQL script.
The C# code that calls the SQL script:
string root = AppDomain.CurrentDomain.BaseDirectory;
string script = File.ReadAllText(root + @"..\SGBD\select_user_from_all_bases.sql").Replace("$date", dtLastModif);
string connectionString = @"Data Source=(local);Integrated Security=SSPI";
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var command = new SqlCommand(script, connection);
    var reader = command.ExecuteReader();
    var users = new List<UserModel>();
    while (reader.Read())
    {
        users.Add(new UserModel()
        {
            dbName = String.Format("{0}", reader[0]),
            idExternal = int.Parse(String.Format("{0}", reader[1])),
            firstname = String.Format("{0}", reader[2]),
            lastname = String.Format("{0}", reader[3]),
            login = String.Format("{0}", reader[4]),
            password = String.Format("{0}", reader[5]),
            dtContractStart = reader[6] != DBNull.Value ? (DateTime?)reader[6] : null,
            dtContractEnd = reader[7] != DBNull.Value ? (DateTime?)reader[7] : null,
            emailPro = String.Format("{0}", reader[8]),
            emailPerso = String.Format("{0}", reader[9])
        });
    }
    return users;
}
And the SQL script:
USE master
DECLARE db_names CURSOR FOR
SELECT name FROM sysdatabases WHERE [name] LIKE 'FOO_%' AND [name] NOT LIKE 'FOO_TRAINING_%'
DECLARE @db_name NVARCHAR(100)
DECLARE @query NVARCHAR(MAX)
DECLARE @queryFinal NVARCHAR(MAX)
SET @query = ''
OPEN db_names
FETCH NEXT FROM db_names INTO @db_name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @query = @query + 'SELECT ''' + @db_name + ''', id_salarie, nom, prenom, login COLLATE SQL_Latin1_General_CP1_CI_AS, password COLLATE SQL_Latin1_General_CP1_CI_AS, date_arrivee, date_depart, email COLLATE SQL_Latin1_General_CP1_CI_AS, persoMail COLLATE SQL_Latin1_General_CP1_CI_AS FROM [' + @db_name + '].dbo.utilisateurs WHERE dt_last_modif >= ''$date'' UNION '
    FETCH NEXT FROM db_names INTO @db_name
END
DEALLOCATE db_names
SET @queryFinal = left(@query, len(@query)-6)
EXEC sp_executesql @queryFinal
More information about servers:
Server0 : 8 databases, 1050 users, no crash
Server1 : 88 databases, 18954 users, crash often
Server2 : 109 databases, 21897 users, crash often
Server3 : 26 databases, 1612 users, no crash
etc
Questions:
What is the issue with the script? Any idea how I can stop the crashes?
And if there is no solution, how can I automatically recycle the pool?
Have you tried to make sure that the reader is closed after usage, too?
using(var reader = command.ExecuteReader()) { ...
I am not sure whether the disposed connection
using (var connection = new SqlConnection(connectionString))
takes care of the command and the reader resources.
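For example, a sketch of the same read with the command and the reader wrapped in using blocks (UserModel is the class from your code; only the first field is shown):
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

static class UserQuery
{
    public static List<UserModel> LoadUsers(string connectionString, string script)
    {
        var users = new List<UserModel>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(script, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // map the remaining columns exactly as in the original code
                    users.Add(new UserModel { dbName = Convert.ToString(reader[0]) });
                }
            }
        }
        return users;
    }
}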
I would do a few things here, if your problem is that persistent. First, I would NOT generate one complete SQL query that tries to get data from all those tables at once. Next, the queries may be trying to LOCK the records associated with the query for a possible update, even though you are probably never going to update them.
I would add WITH (NOLOCK) on the FROM tables:
select columns from yourTable WITH(NOLOCK) where...
This avoids the overhead of locking all the pages associated with the query.
Now, how to better handle your loop. Immediately BEFORE your fetch loop, I would create a temp table for the expected output results, something like this
(I am unsure of the column lengths for your structures):
create table #C_TempResults
(   fromDBName char(20),
    id_salarie int,
    nom char(10),
    prenom char(10),
    login char(10),
    password char(10),
    date_arrivee datetime,
    date_depart datetime,
    email char(60),
    persoMail char(60) );
Then, in the loop where you are already cycling through all the databases, instead of building one concatenated SQL statement to execute at the end, just run them ONE AT A TIME and insert into the temp table, like this
(same beginning to prepare your fetch cursor...):
BEGIN
    SET @query = 'INSERT INTO #C_TempResults '
        + ' SELECT ''' + @db_name + ''' as fromDBName, id_salarie, nom, prenom, '
        + 'login COLLATE SQL_Latin1_General_CP1_CI_AS, '
        + 'password COLLATE SQL_Latin1_General_CP1_CI_AS, '
        + 'date_arrivee, date_depart, '
        + 'email COLLATE SQL_Latin1_General_CP1_CI_AS, '
        + 'persoMail COLLATE SQL_Latin1_General_CP1_CI_AS '
        + 'FROM [' + @db_name + '].dbo.utilisateurs WITH (NOLOCK) '
        + 'WHERE dt_last_modif >= ''$date'' ';
    -- Run this single query now, get the data and release any "lock" resources
    EXEC sp_executesql @query
    -- now, get the next database to query from and continue
    FETCH NEXT FROM db_names INTO @db_name
END
DEALLOCATE db_names
-- FINALLY, just run your select from the temp table that has everything all together...
select * from #C_TempResults;
-- and get rid of your "temp" table
drop table #C_TempResults;

SQL Server stored procedure executes in query analyzer, but does not function properly from C#

I have a stored procedure that updates my database.
It runs great in query analyzer, but when I try to run it from my C# web app, the table is not updated.
I am receiving the following error, so I set ARITHABORT to "ON", but I still receive the error.
UPDATE failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use
with indexed views and/or indexes on computed columns and/or query
notifications and/or xml data type methods.
C# Code:
p = new SqlParameter("@userAnswers", SqlDbType.VarChar, 1000);
p.Value = "";
p.Value = exam.ExamQuestions.Rows[0].ItemArray[0] + ":" + exam.UserAnswerChoices[0];
for (int x = 1; x < exam.NumQuestions; x++)
{
    p.Value += ", " + exam.ExamQuestions.Rows[x].ItemArray[0] + ":" + exam.UserAnswerChoices[x];
}
conn.query("insertAnswers", p);
return p.Value.ToString();
Stored procedure code:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ARITHABORT ON
GO
ALTER PROCEDURE [dbo].[insertAnswers]
    @userAnswers VarChar(1000)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @input XML
    SET @input = '<Questions><Question id="' + Replace(Replace(@userAnswers, ':', '">'), ', ', '</Question><Question id="') + '</Question>' + '</Questions>'
    ;WITH ParsedXML AS
    (
        SELECT
            ID = C.value('(@id)[1]', 'int'),
            ColumnName = C.value('(.)[1]', 'varchar(10)')
        FROM @input.nodes('/Questions/Question') AS T(C)
    )
    UPDATE CourseQuestions
    SET a = CASE WHEN p.ColumnName = 'a' THEN t.a + 1 ELSE t.a END,
        b = CASE WHEN p.ColumnName = 'b' THEN t.b + 1 ELSE t.b END,
        c = CASE WHEN p.ColumnName = 'c' THEN t.c + 1 ELSE t.c END,
        d = CASE WHEN p.ColumnName = 'd' THEN t.d + 1 ELSE t.d END,
        e = CASE WHEN p.ColumnName = 'e' THEN t.e + 1 ELSE t.e END,
        f = CASE WHEN p.ColumnName = 'f' THEN t.f + 1 ELSE t.f END
    FROM CourseQuestions t
    INNER JOIN ParsedXml p ON t.QuestionID = p.ID
END
@userAnswers is basically a comma-separated string (since I can't send an array to SQL Server). It looks like: '1:d, 2:a, 3:b'
A few things to check:
Check the Execution/General and Advanced options in Management Studio and make sure you don't have anything strange set there. The usual suspects, which are on by default, are:
QUOTED_IDENTIFIER
ANSI_NULL_DFLT_ON
ANSI_PADDING
ANSI_WARNINGS
ANSI_NULLS
CONCAT_NULL_YIELDS_NULL
ARITHABORT (which you already checked)
Is there any chance you could post the actual query being executed? Otherwise we're flying a bit blind.
Your SET ARITHABORT ON is not inside the stored procedure. Put it within the procedure body:
ALTER PROCEDURE [dbo].[insertAnswers]
    @userAnswers VarChar(1000)
AS
BEGIN
    SET NOCOUNT ON
    SET ARITHABORT ON
    .....
Haven't tested. If that doesn't work, try
strSQL = "SET ARITHABORT ON" & chr(13) & chr(10) & "EXEC MySPRoc ..."
conn.Execute strSQL
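From C# (rather than classic ADO), a rough equivalent is to issue the SET on the same connection just before calling the procedure. A sketch that assumes the open SqlConnection conn and the SqlParameter p from your snippet:
// Force ARITHABORT ON for this session, then call the procedure on the same connection.
using (var setCmd = new SqlCommand("SET ARITHABORT ON;", conn))
{
    setCmd.ExecuteNonQuery();
}
using (var procCmd = new SqlCommand("insertAnswers", conn))
{
    procCmd.CommandType = CommandType.StoredProcedure;
    procCmd.Parameters.Add(p); // the @userAnswers parameter built above
    procCmd.ExecuteNonQuery();
}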
Wild stab here, but could the C# code be handling text that already has : or ' within it? Or is the concatenated string that you are passing greater than 1000 characters?
On another note you should be using StringBuilder for the concatenations as it provides better performance in most instances, especially when you are doing many concatenations.
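For instance, a sketch of the same concatenation using StringBuilder (names taken from your snippet; requires using System.Text):
// Build the "1:d, 2:a, 3:b" style string without repeated string reallocations.
var sb = new StringBuilder();
for (int x = 0; x < exam.NumQuestions; x++)
{
    if (x > 0) sb.Append(", ");
    sb.Append(exam.ExamQuestions.Rows[x].ItemArray[0]).Append(":").Append(exam.UserAnswerChoices[x]);
}
p.Value = sb.ToString();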

Returning a recordset from Oracle to .Net using an Oracle function that takes arguments

I currently have the following function in an Oracle database that returns a concatenated string separated by pipes. It's a legacy application that is being updated to use .NET 3.5. The existing application concatenates the returned result set into a VARCHAR2 data type. What I want to do is return the entire result set back to my .NET client. The MS SQL equivalent of what I'm trying to accomplish is a simple "SELECT * FROM TBL WHERE id = @id". I'm not used to some of the concepts Oracle uses; it seems like a blend of OOP and SQL querying. I've read multiple examples on this but can't seem to find exactly what I'm looking for. Can you please help?
CREATE OR REPLACE FUNCTION DOCSADMIN.GET_DOCS (
RECID IN NUMBER) -- RECORD ID
RETURN VARCHAR2 -- CONCATENATED STRING WITH PIPES
IS
RETVAL VARCHAR2(5000) :='';
DOCSTRING VARCHAR2(5000) :='';
DOCNAME VARCHAR2(5000) :='';
DOCNUMBER NUMBER;
STATUS VARCHAR2(5000) :='';
DOCTYPE VARCHAR2(5000) :='';
EDITDATE DATE :='';
/******************************************************************************
NAME: GET_DOCS
PURPOSE: Pulls associated docs from profile table
******************************************************************************/
CURSOR GETDOCINFO IS SELECT DOCNUMBER, DOCNAME, CUSTOM_STATUS, DOCUMENTTYPES.DESCRIPTION, LAST_EDIT_TIME
FROM PROFILE, DOCUMENTTYPES, FORMS WHERE NAD_APID = IN_APID AND PROFILE.FORM = FORMS.SYSTEM_ID AND
DOCUMENTTYPE = DOCUMENTTYPES.SYSTEM_ID AND FORM_NAME = 'DOCS_PROFILE' ORDER BY DOCNUMBER;
BEGIN
OPEN GETDOCINFO;
--GET THE FIRST RECORD
FETCH GETDOCINFO INTO DOCNUMBER, DOCNAME, STATUS, DOCTYPE, EDITDATE;
--LOOP THROUGH ALL ASSOCIATED DOCS AND GRAB INFO
WHILE GETDOCINFO%FOUND LOOP
BEGIN
DOCSTRING := DOCNUMBER || '|~|' || DOCNAME || '|~|' || STATUS || '|~|' || DOCTYPE || '|~|' || WS_EDITDATE;
RETVAL := RETVAL || DOCSTRING || '|^|';
GOTO STARTOVER;
END;
<<STARTOVER>>
FETCH GETDOCINFO INTO DOCNUMBER, DOCNAME, STATUS, DOCTYPE, EDITDATE;
END LOOP;
CLOSE GETDOCINFO;
RETURN RETVAL;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        NULL;
    WHEN OTHERS THEN
        -- Consider logging the error and then re-raise
        RAISE;
END GET_DOCS;
/
Well, you could convert the function into a procedure and have an OUT parameter of the SYS_REFCURSOR type. With Oracle and .NET you can pass back a cursor and iterate through it as a reader.
Sample Oracle procedure:
CREATE OR REPLACE PROCEDURE TEST_SCHEMA.TEST_PROCEDURE (
    out_DATA OUT SYS_REFCURSOR
) AS
BEGIN
    OPEN out_DATA FOR
        SELECT col1,
               col2
        FROM TEST_SCHEMA.TEST_TABLE;
END test_procedure;
Sample .Net end:
using (OracleConnection connection = new OracleConnection("connstring"))
using (OracleCommand command = connection.CreateCommand()) {
command.CommandType = CommandType.StoredProcedure;
command.CommandText = "TEST_SCHEMA.TEST_PROCEDURE";
command.Parameters.Add("out_DATA", OracleType.Cursor)
.Direction = ParameterDirection.Output;
connection.Open();
command.ExecuteNonQuery();
OracleDataReader reader =
command.Parameters["out_DATA"].Value as OracleDataReader;
if (reader != null) {
using (reader) {
while(reader.Read()) {
string col1 = reader["col1"] as string;
string col2 = reader["col2"] as string;
}
}
}
}
Be sure to close the cursor after you're done using it (accomplished above by the using (reader) statement).
So in your case, you could probably create a procedure that outputs the original cursor from your function, then just iterate over the cursor in .NET as shown above. Just a note: the column names on the Oracle side are important and must match what you're using in .NET.
What I have so far compiles fine.
CREATE OR REPLACE PROCEDURE DOCSADMIN.GET_DOCS_SP ( IN_APID IN NUMBER, out_DATA OUT SYS_REFCURSOR )
AS
BEGIN
OPEN out_DATA FOR
SELECT DOCNUMBER, DOCNAME, CUSTOM_STATUS, DOCUMENTTYPES.DESCRIPTION, LAST_EDIT_TIME
FROM PROFILE, DOCUMENTTYPES, FORMS WHERE APID = IN_APID AND PROFILE.FORM = FORMS.SYSTEM_ID AND
DOCUMENTTYPE = DOCUMENTTYPES.SYSTEM_ID AND FORM_NAME = 'PROFILE' ORDER BY DOCNUMBER;
END GET_DOCS_SP;
/
However, I've run into another situation and would appreciate your input. If I wanted to call the following from a SQL Server database using OPENQUERY, how would I do so? The legacy version that returned the concatenated string looked like this:
SELECT * FROM OPENQUERY (TESTSERVER, 'SELECT DOCSADMIN.GET_DOCS_SP (26) AS DOCINFO FROM DUAL')
Do I just remove the as DOCINFO FROM DUAL clause?
Thanks
