Query extremely slow in code but fast in SSMS - C#

I have a fairly simple query that keeps timing out when run from code (it takes over three minutes to complete; I stopped it early so I could post this question). However, when I run the same query from the same computer in SQL Server Management Studio, it takes only 2532 ms for the first run, when the data is not cached on the server, and 524 ms for repeated runs.
Here is my C# code:
using (var conn = new SqlConnection("Data Source=backend.example.com;Connect Timeout=5;Initial Catalog=Logs;Persist Security Info=True;User ID=backendAPI;Password=Redacted"))
using (var ada = new SqlDataAdapter(String.Format(@"
    SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
    FROM [ES_HISTORY]
    inner join [es_history_dt] on [PK_JOB] = [es_historyid]
    Where client_id = @clientID and dt > @dt and (job_type > 4 {0}) {1}
    Order by dt desc"
    , where.ToString(), (cbShowOnlyFailed.Checked ? "and Status = 1" : "")), conn))
{
    ada.SelectCommand.Parameters.AddWithValue("@clientID", ClientID);
    ada.SelectCommand.Parameters.AddWithValue("@dt", dtpFilter.Value);
    //ada.SelectCommand.CommandTimeout = 60;
    conn.Open();
    Logs.Clear();
    ada.Fill(Logs); //Time out exception for 30 sec limit.
}
Here is the code I am running in SSMS; I pulled it right from ada.SelectCommand.CommandText:
declare @clientID varchar(200)
set @clientID = '138'
declare @dt datetime
set @dt = '9/19/2011 12:00:00 AM'
SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
FROM [ES_HISTORY]
inner join [es_history_dt] on [PK_JOB] = [es_historyid]
Where client_id = @clientID and dt > @dt and (job_type > 4 or job_type = 0 or job_type = 1 or job_type = 4)
Order by dt desc
What is causing the major discrepancy in execution time?
To keep the comment section clean, I will answer some FAQ's here.
The same computer and login are used for both the application and SSMS.
Only 15 rows are returned by my example query. However, es_history contains 11,351,699 rows and es_history_dt contains 8,588,493 rows. Both tables are well indexed, and the execution plan in SSMS shows index seeks for the lookups, so they are fast. The application behaves as if it is not using the indexes for the C# version of the query.

Your code in SSMS is not the same code you run in your application. This line in your application adds an NVARCHAR parameter:
ada.SelectCommand.Parameters.AddWithValue("@clientID", ClientID);
while in the SSMS script you declare it as VARCHAR:
declare @clientID varchar(200)
Due to the rules of Data Type Precedence, the Where client_id = @clientID expression in your query is not SARG-able when @clientID is of type NVARCHAR (I'm making a leap of faith and assuming that the client_id column is of type VARCHAR). The application thus forces a table scan, whereas the SSMS query can do a quick key seek. This is a well-known and understood issue with using Parameters.AddWithValue and has been discussed in many articles before, e.g. see How Data Access Code Affects Database Performance. Once the problem is understood, the solutions are trivial:
add parameters with the overload that accepts a type: Parameters.Add("@clientID", SqlDbType.VarChar, 200) (and do pass in the explicit length to prevent cache pollution; see Query performance and plan cache issues when parameter length not specified correctly),
or cast the parameter in the SQL text: where client_id = cast(@clientID as varchar(200)).
The first solution is superior because it solves the cache pollution problem in addition to the SARG-ability problem.
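For illustration, here is a minimal sketch of the first fix applied to the question's code (a sketch only; it assumes client_id is VARCHAR(200), drops the dynamic {0}/{1} parts, reuses ClientID, dtpFilter and Logs from the question, and connectionString is a placeholder; it needs using System.Data and System.Data.SqlClient):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(@"
    SELECT [PK_JOB],[CLIENT_ID],[STATUS],[LOG_NAME],dt
    FROM [ES_HISTORY]
    inner join [es_history_dt] on [PK_JOB] = [es_historyid]
    Where client_id = @clientID and dt > @dt
    Order by dt desc", conn))
{
    // Explicit type and length: the parameter now matches the VARCHAR(200) column,
    // so the predicate stays SARG-able and one plan is cached regardless of value length.
    cmd.Parameters.Add("@clientID", SqlDbType.VarChar, 200).Value = ClientID;
    cmd.Parameters.Add("@dt", SqlDbType.DateTime).Value = dtpFilter.Value;

    conn.Open();
    using (var ada = new SqlDataAdapter(cmd))
    {
        ada.Fill(Logs);
    }
}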
I would also recommend reading Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.

Had the same issue:
Call stored procedure from code: 30+ seconds.
Call the same stored procedure from SSMS: milliseconds.
Call the SQL from within the stored procedure from code: milliseconds.
Solution:
Drop the stored procedure, then recreate exactly the same stored procedure; now both return in milliseconds. No code change.
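If dropping and recreating the procedure is inconvenient, an alternative worth knowing (not part of the original answer) is sp_recompile, which marks the procedure so a fresh plan is compiled on its next execution. A hedged C# sketch, with dbo.MyProcedure and connectionString as placeholders:
// Force the next execution of the procedure to compile a new plan,
// discarding the stale one without dropping the object.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("EXEC sp_recompile N'dbo.MyProcedure';", conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}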

Run the profiler on your C# connection - there may be other activity going on that you are not aware of.

Capture the execution plan from both SSMS when you manually run your query and then from Profiler when you are running your application. Compare and contrast.

Run DBCC FREEPROCCACHE, as suggested here, just to make sure the problem isn't due to a stale query execution plan.

Related

OUTPUT INSERTED Id/SCOPE_IDENTITY() returns null in C# (ASP.NET Core Razor Pages) SQL query, works fine in SQL Server Management Studio

I have the following parameterized SQL query, which inserts data into a table, gets the (automatically incremented) Id value of the new row, and then inserts that new value into another table:
DECLARE @engId_newtable TABLE(engId INT);
DECLARE @engId_new INT;
INSERT INTO eng (locationTypeId, engNumb, projectId, engTypeId, capacity, cylNo, employeeId, bore, stroke, crankOffset, rodLength, cr, crankThrow, pinOffset, engDesc)
OUTPUT INSERTED.engId INTO @engId_newtable
SELECT a.locationTypeId, @engNumb, b.projectId, c.engTypeId, @capacity, @cylNo, d.employeeId, @bore, @stroke, @crankOffset, @rodLength, @cr, @crankThrow, @pinOffset, @engDesc
FROM dbo.locationType AS a, dbo.project AS b, dbo.engType AS c, dbo.employees AS d
WHERE a.locationType = @locationType AND b.projectCode = @projectCode AND b.engineType = @engineType AND c.engType = @engType AND d.userName = @userName ;
SELECT @engId_new = engId FROM @engId_newtable;
INSERT INTO oil (engId, oilQuantity, oilTypeId)
SELECT @engId_new, @oilQuantity, a.oilTypeId
FROM dbo.oilType AS a
WHERE a.oilManufacturer = @oilManufacturer AND a.oilRange = @oilRange AND a.oilGrade = @oilGrade ;
The query works perfectly when executed in SQL Server Management Studio (SSMS), but when I try to run it using my ASP.NET Core C# (Razor Pages) project, it fails with the following error:
System.Data.SqlClient.SqlException: 'Cannot insert the value NULL into
column 'engId', table 'TestEng.dbo.oil'; column does not allow nulls.
INSERT fails. The statement has been terminated.'
So it appears that the OUTPUT INSERTED clause is not working. I have also tried using SELECT @engId_new = SCOPE_IDENTITY(), and even @@IDENTITY, but they also return null and give the same error.
I have similar queries (using OUTPUT INSERTED) in other areas of my project, which work fine. This query was also working correctly until earlier today, when we changed the server the SQL database is running on - maybe I need to change some database/table settings?
Any help is much appreciated, I'm really at a loss here. Thanks.
EDIT: Showing how the query is executed
The query is built programmatically - I know the query-building functions work, as all my other queries work fine - and the query I'm trying to execute is exactly as shown here (it is returned by the query builder as a single string):
"DECLARE #engId_newtable TABLE(engId INT); DECLARE #engId_new INT; INSERT INTO eng (locationTypeId, engNumb, projectId, engTypeId, capacity, cylNo, employeeId, bore, stroke, crankOffset, rodLength, cr, crankThrow, pinOffset, engDesc) OUTPUT INSERTED.engId INTO #engId_newtable SELECT a.locationTypeId, #engNumb, b.projectId, c.engTypeId, #capacity, #cylNo, d.employeeId, #bore, #stroke, #crankOffset, #rodLength, #cr, #crankThrow, #pinOffset, #engDesc FROM dbo.locationType AS a,dbo.project AS b,dbo.engType AS c,dbo.employees AS d WHERE a.locationType = #locationType AND b.projectCode = #projectCode AND b.engineType = #engineType AND c.engType = #engType AND d.userName = #userName ;SELECT #engId_new = engId FROM #engId_newtable; INSERT INTO oil (engId, oilTypeId) SELECT #engId_new, a.oilTypeId FROM dbo.oilType AS a WHERE a.oilManufacturer = #oilManufacturer AND a.oilRange = #oilRange AND a.oilGrade = #oilGrade ;"
Query Execution function, called from razor page model, query and connection passed as parameters:
public void ExecuteVoid(SqlConnection con, string query, List<SqlColumn> columns, List<SqlFilter> filters)
{
    SqlCommand cmd = new SqlCommand(query, con);
    Parameterize(cmd, columns, filters);
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
The Parameterize function just adds all the relevant parameters to the query. Again, like the query builder function, I know it works because all my other queries execute as expected.
EDIT 2: SQL Server Profiler
Shown below is an image showing all the events relevant to this query that I can see in SQL Server Profiler:
All parts of the query are written out as expected, all parameters are present, but the query still fails.
I found the problem! I was using HttpContext.Session.GetString to get the value of a WHERE clause in my first query (to the eng table). I had changed the name of the string elsewhere in my code, and forgotten to change it in the query. So, the problem had nothing to do with SQL at all, really.
I can say that SQL Server Profiler was crucial in finding this error, so if you're having trouble with parameterized SQL queries in Visual C#, look over your queries there and you may uncover some unseen issues.

Strange EF6 performance issue on ExecuteStoreCommand

I have a strange performance issue with executing a simple merge SQL command on Entity Framework 6.
First my Entity Framework code:
var command = @"MERGE [StringData] AS TARGET
    USING (VALUES (@DCStringID_Value, @TimeStamp_Value)) AS SOURCE ([DCStringID], [TimeStamp])
    ON TARGET.[DCStringID] = SOURCE.[DCStringID] AND TARGET.[TimeStamp] = SOURCE.[TimeStamp]
    WHEN MATCHED THEN
        UPDATE
        SET [DCVoltage] = @DCVoltage_Value,
            [DCCurrent] = @DCCurrent_Value
    WHEN NOT MATCHED THEN
        INSERT ([DCStringID], [TimeStamp], [DCVoltage], [DCCurrent])
        VALUES (@DCStringID_Value, @TimeStamp_Value, @DCVoltage_Value, @DCCurrent_Value);";
using (EntityModel context = new EntityModel())
{
    for (int i = 0; i < 100; i++)
    {
        var entity = _buffer.Dequeue();
        context.ContextAdapter.ObjectContext.ExecuteStoreCommand(command, new object[]
        {
            new SqlParameter("@DCStringID_Value", entity.DCStringID),
            new SqlParameter("@TimeStamp_Value", entity.TimeStamp),
            new SqlParameter("@DCVoltage_Value", entity.DCVoltage),
            new SqlParameter("@DCCurrent_Value", entity.DCCurrent),
        });
    }
}
Execution time ~20 seconds.
This looked a little slow, so I tried running the same command directly in Management Studio (also 100 times in a row).
SQL Server Management Studio:
Execution time <1 second.
Ok that is strange!?
Some tests:
First, I compared both execution plans (Entity Framework and SSMS), but they are absolutely identical.
Second, I tried using a transaction inside my code.
using (PowerdooModel context = PowerdooModel.CreateModel())
{
    using (var dbContextTransaction = context.Database.BeginTransaction())
    {
        try
        {
            for (int i = 0; i < 100; i++)
            {
                var entity = _buffer.Dequeue();   // dequeue the next buffered row, as in the first snippet
                context.ContextAdapter.ObjectContext.ExecuteStoreCommand(command, new object[]
                {
                    new SqlParameter("@DCStringID_Value", entity.DCStringID),
                    new SqlParameter("@TimeStamp_Value", entity.TimeStamp),
                    new SqlParameter("@DCVoltage_Value", entity.DCVoltage),
                    new SqlParameter("@DCCurrent_Value", entity.DCCurrent),
                });
            }
            dbContextTransaction.Commit();
        }
        catch (Exception)
        {
            dbContextTransaction.Rollback();
        }
    }
}
Third, I added OPTION (RECOMPILE) to avoid parameter sniffing.
Execution time is still ~10 seconds, which is still very poor performance.
Question: what am I doing wrong? Please give me a hint.
-------- Some more tests - edited 18.11.2016 ---------
If I execute the commands inside a transaction (like above), the following times come up:
Duration complete: 00:00:06.5936006
Average Command: 00:00:00.0653457
Commit: 00:00:00.0590299
Is it not strange that the commit takes nearly no time, yet the average command still takes nearly the same time as before?
In SSMS you are running one long batch that contains 100 separate MERGE statements. In C# you are running 100 separate batches. Obviously that takes longer.
Running 100 separate batches in 100 separate transactions is obviously slower than 100 batches in 1 transaction. Your measurements confirm that and show you how much slower.
To make it efficient, use a single MERGE statement that processes all 100 rows from a table-valued parameter in one go. See also Table-Valued Parameters for .NET Framework.
Often a table-valued parameter is a parameter of a stored procedure, but you don't have to use a stored procedure. It could be a single statement, but instead of multiple simple scalar parameters you'd pass the whole table at once.
I have never used Entity Framework, so I can't show you a C# example of how to call it from there. I'm sure if you search for "how to pass table-valued parameter in entity framework" you'll find an example. I use the DataTable class to pass a table as a parameter.
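For example, a minimal plain ADO.NET sketch of that approach (a sketch only, not EF code; it targets the dbo.StringDataTableType and dbo.MergeStringData objects defined below, and connectionString / entities are placeholders):
// Build a DataTable whose columns match dbo.StringDataTableType.
var table = new DataTable();
table.Columns.Add("DCStringID", typeof(int));
table.Columns.Add("TimeStamp", typeof(DateTime));
table.Columns.Add("DCVoltage", typeof(double));
table.Columns.Add("DCCurrent", typeof(double));
foreach (var entity in entities)   // e.g. the 100 buffered rows
    table.Rows.Add(entity.DCStringID, entity.TimeStamp, entity.DCVoltage, entity.DCCurrent);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.MergeStringData", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // Pass the whole table as a single table-valued parameter.
    var p = cmd.Parameters.AddWithValue("@ParamRows", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.StringDataTableType";

    conn.Open();
    cmd.ExecuteNonQuery();   // one round trip for all rows
}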
I can show you an example of a T-SQL stored procedure.
First, you define a table type that pretty much follows the definition of your StringData table:
CREATE TYPE dbo.StringDataTableType AS TABLE(
DCStringID int NOT NULL,
TimeStamp datetime2(0) NOT NULL,
DCVoltage float NOT NULL,
DCCurrent float NOT NULL
)
Then you use it as a type for the parameter:
CREATE PROCEDURE dbo.MergeStringData
    @ParamRows dbo.StringDataTableType READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;
    BEGIN TRY

        MERGE INTO dbo.StringData WITH (HOLDLOCK) AS Dst
        USING
        (
            SELECT
                TT.DCStringID
                ,TT.TimeStamp
                ,TT.DCVoltage
                ,TT.DCCurrent
            FROM
                @ParamRows AS TT
        ) AS Src
        ON
            Dst.DCStringID = Src.DCStringID AND
            Dst.TimeStamp = Src.TimeStamp
        WHEN MATCHED THEN
            UPDATE SET
                Dst.DCVoltage = Src.DCVoltage
                ,Dst.DCCurrent = Src.DCCurrent
        WHEN NOT MATCHED BY TARGET THEN
            INSERT
                (DCStringID
                ,TimeStamp
                ,DCVoltage
                ,DCCurrent)
            VALUES
                (Src.DCStringID
                ,Src.TimeStamp
                ,Src.DCVoltage
                ,Src.DCCurrent)
        ;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- TODO: handle the error
        ROLLBACK TRANSACTION;
    END CATCH;
END
Again, it doesn't have to be a stored procedure, it can be just a MERGE statement with one table-valued parameter.
I'm pretty sure that it would be much faster than your loop with 100 separate queries.
Details on why there should be a HOLDLOCK hint with MERGE: “UPSERT” Race Condition With MERGE.
A side note:
It is not strange that Commit in your last test is very fast. It doesn't do much, because everything is already written to the database. If you tried to do Rollback, that would take some time.
It looks like the difference is that you are using OPTION (RECOMPILE) in your EF code only, but when you run the code in SSMS there is no recompile.
Recompiling 100 times would certainly add some time to execution.
Following up on the answer from @Vladimir Baranov, I took a look at how Entity Framework supports bulk operations (in my case, bulk merge).
Short answer... no! Bulk delete, update, and merge operations are not supported.
Then I found this sweet little library!
zzzproject Entity Framework Extensions
It provides a ready-made extension that works the way @Vladimir Baranov described:
Create a temporary table in SQL Server.
Bulk Insert data with .NET SqlBulkCopy into the temporary table.
Perform a SQL statement between the temporary table and the destination table.
Drop the temporary table from the SQL Server.
You can find a short interview by Jonathan Allen about the functions and how it can dramatically improve Entity Framework performance with bulk operations.
For me it really kicks up the performance from 10 seconds to under 1 second. Nice!
To be clear, I don't work for them and am not affiliated with them; this is not an advertisement. Yes, the library is commercial, but the basic version is free.
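For reference, the temp-table pattern the library uses (the four steps listed above) can also be reproduced by hand with plain ADO.NET. A rough sketch, assuming the StringData table from earlier, a DataTable named table with matching columns, and connectionString as a placeholder:
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // 1. Create a temporary staging table shaped like the destination.
    using (var create = new SqlCommand(@"CREATE TABLE #StringDataStaging
        (DCStringID int, TimeStamp datetime2(0), DCVoltage float, DCCurrent float);", conn))
        create.ExecuteNonQuery();

    // 2. Bulk insert the buffered rows into the staging table.
    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#StringDataStaging" })
        bulk.WriteToServer(table);

    // 3. Merge staging into the destination, then 4. drop the staging table.
    using (var merge = new SqlCommand(@"
        MERGE INTO dbo.StringData WITH (HOLDLOCK) AS Dst
        USING #StringDataStaging AS Src
           ON Dst.DCStringID = Src.DCStringID AND Dst.TimeStamp = Src.TimeStamp
        WHEN MATCHED THEN UPDATE SET Dst.DCVoltage = Src.DCVoltage, Dst.DCCurrent = Src.DCCurrent
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (DCStringID, TimeStamp, DCVoltage, DCCurrent)
            VALUES (Src.DCStringID, Src.TimeStamp, Src.DCVoltage, Src.DCCurrent);
        DROP TABLE #StringDataStaging;", conn))
        merge.ExecuteNonQuery();
}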

ADO.NET and SSMS interpret SP params differently or SQL profiler has a bug?

I'll try to be brief. Here are my table and SP in a SQL Server 2008 R2 database:
CREATE TABLE dbo.TEST_TABLE
(
ID INT NOT NULL IDENTITY (1,1) PRIMARY KEY,
VALUE varchar(23)
)
GO
CREATE PROCEDURE USP_TEST_PROCEDURE
(
@Param1 varchar(23)
)
AS
BEGIN
INSERT INTO TEST_TABLE (VALUE) VALUES (@Param1)
END
GO
Here is the C# code:
using (SqlConnection con = GetConection())
{
    using (var cmd = new SqlCommand("USP_TEST_PROCEDURE", con))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@Param1", SqlDbType.DateTime, 23).Value = "2013-12-10 12:34:56.789";
        cmd.Connection.Open();
        cmd.ExecuteNonQuery();
    }
}
Note that I intentionally specified the type of @Param1 as DateTime.
Now launch SQL Server Profiler and run the .NET code. When done, don’t stop the profiler, just look at the output. I see the following text for the RPC:Completed EventClass:
exec USP_TEST_PROCEDURE @Param1='2013-12-10 12:34:56.790'
Now copy this text, open SSMS, paste it into a new query window and run it without any changes. Now stop the profiler and make sure that SSMS produced the same text in the profiler as the ADO.NET app:
exec USP_TEST_PROCEDURE @Param1='2013-12-10 12:34:56.790'
Now select from the TEST_TABLE table. I see the following result:
ID  VALUE
1   Dec 10 2013 12:34PM
2   2013-12-10 12:34:56.790
The question is why the two identical commands (from SQL Server Profiler’s point of view) produce different results?
It seems like the profiler does not show correctly what is going on behind the scenes. After I ran the .NET app, I expected to see the following:
DECLARE @Param1 Datetime = '2013-12-10 12:34:56.789';
exec USP_TEST_PROCEDURE @Param1=@Param1
In this case it would be clear why I have 790 milliseconds instead of 789. This is because the datetime type is not very precise; it is rounded to increments of .000, .003, or .007 seconds. See datetime (Transact-SQL). It would also be clear why I have ‘Dec 10 2013 12:34PM’ instead of ‘2013-12-10 12:34:56.790’. This is because the ‘mon dd yyyy hh:miAM (or PM)’ format is used by default when datetime is converted to a string. See CAST and CONVERT (Transact-SQL). The following example demonstrates that:
DECLARE @Param1 Datetime = '2013-12-10 12:34:56.789';
DECLARE @Param1_varchar varchar(23);
SELECT @Param1 --2013-12-10 12:34:56.790
SET @Param1_varchar = @Param1;
SELECT @Param1_varchar; --Dec 10 2013 12:34PM
GO
If my understanding is correct, then I would like to rephrase the question: why does SQL Server Profiler show something strange and confusing? Are there any options to correct this?
Your application is sending a datetime, but since you chose a varchar column, it is implicitly converted on the way in. Just like what happens with print:
DECLARE @d DATETIME = GETDATE();
SELECT @d; -- this is still a datetime
PRINT @d; -- this is implicitly converted to a string
If you don't like this confusing behavior, then stop mixing data types. There is absolutely no good reason to store a datetime value in a varchar column.
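A minimal sketch of a type-consistent version (an assumption-laden sketch; it supposes the VALUE column really must stay varchar(23), otherwise change the column to datetime and pass a typed DateTime):
// Send the parameter as the same type as the column, so nothing is
// implicitly converted on the way in.
cmd.Parameters.Add("@Param1", SqlDbType.VarChar, 23).Value = "2013-12-10 12:34:56.789";

// Preferred: store the value in a datetime column and pass a real DateTime.
// cmd.Parameters.Add("@Param1", SqlDbType.DateTime).Value = new DateTime(2013, 12, 10, 12, 34, 56, 789);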

Insert query times out in C# web app, runs fine from SQL Server Management Studio

I'm attempting to get an insert query to run from my C# web application. When I run the query from SQL Server Management Studio, the insert query takes around five minutes to complete. When run from the application, it times out after thirty minutes (yes minutes, not seconds).
I've grabbed the actual SQL statement from the VS debugger and run it from Mgmt Studio and it works fine.
All this is running from my development environment, not a production environment. There is no other SQL Server activity while the query is in progress. I'm using SQL Server 2008 R2 for development. MS VS 2010 Express, Asp.Net 4.0. SQL Server Mgmt Studio 10.
There is a similar question to this that was never answered: SQL server timeout 2000 from C# .NET
Here are the SET options from dbcc useroptions:
Option MgtStudio Application
----------------------- -------------- --------------
textsize 2147483647 -1
language us_english us_english
dateformat mdy mdy
datefirst 7 7
lock_timeout -1 -1
quoted_identifier SET SET
arithabort SET NOT SET
ansi_null_dflt_on SET SET
ansi_warnings SET SET
ansi_padding SET SET
ansi_nulls SET SET
concat_null_yields_null SET SET
isolation level read committed read committed
Only textsize and arithabort are different.
Any ideas why there is such a difference in query execution time and what I may be able to do to narrow that difference?
I'm not sure how useful including the query will be, especially since it would be too much to include the schema. Anyway, here it is:
INSERT INTO GeocacherPoints
(CacherID,
RegionID,
Board,
Control,
Points)
SELECT z.CacherID,
z.RegionID,
z.Board,
21,
z.Points
FROM (SELECT CacherID,
gp.RegionID,
Board=gp.Board + 10,
( CASE
WHEN (SELECT COUNT(*)
FROM Geocache g
JOIN GeocacheRegions r
ON ( r.CacheID = g.ID )
WHERE r.RegionID = gp.RegionID
AND g.FinderPoints >= 5) < 20 THEN NULL
ELSE (SELECT SUM(y.FinderPoints) / 20
FROM (SELECT x.FinderPoints,
ROW_NUMBER() OVER (ORDER BY x.FinderPoints DESC, x.ID) AS Row
FROM (SELECT g.FinderPoints,
g.ID
FROM Geocache g
JOIN Log l
ON ( l.CacheID = g.ID )
JOIN Geocacher c
ON ( c.ID = l.CacherID )
JOIN GeocacheRegions r
ON ( r.CacheID = g.ID )
WHERE YEAR(l.LogDate) = @Year
AND g.FinderPoints >= 5
AND c.ID = gp.CacherID
AND r.RegionID = gp.RegionID) x) y
WHERE y.Row <= 20)
END ) Points
FROM GeocacherPoints gp
JOIN Region r
ON r.RegionID = gp.RegionID
WHERE gp.Control = 21
AND r.RegionType IN ( 'All', 'State' )
AND gp.Board = @Board - 10) z
WHERE z.Points IS NOT NULL
AND z.Points >= 1
ARITHABORT is often misdiagnosed as the cause.
In fact, since version 2005, when ANSI_WARNINGS is on (as it is in both your connections) ARITHABORT is implicitly on anyway, and this setting has no real effect.
However, it does have a side effect. To allow for the cases where ANSI_WARNINGS is off, the setting of ARITHABORT is used as one of the plan cache keys, which means that sessions with different settings for it cannot share each other's plans.
The execution plan cached for your application cannot be reused when you run the query in SSMS unless both connections have the same plan cache key, so a new plan gets compiled that "sniffs" the parameter values currently under test. The plan for your application was likely compiled for different parameter values. This issue is known as "parameter sniffing".
You can retrieve and compare both execution plans with something like
SELECT usecounts, cacheobjtype, objtype, text, query_plan, value AS set_options
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
WHERE text LIKE '%INSERT INTO GeocacherPoints (CacherID,RegionID,Board,Control,Points)%'
  AND attribute = 'set_options' AND text NOT LIKE '%this query%'
The parameters section in the XML tells you the compile time value of the parameters.
See Slow in the Application, Fast in SSMS? Understanding Performance Mysteries for more.
Execution Plans
You've supplied the estimated execution plans rather than the actual execution plans, but it can be seen that only the first query plan is parameterised, and it was compiled for the following values.
<ParameterList>
  <ColumnReference Column="@Dec31" ParameterCompiledValue="'2013-12-31'" />
  <ColumnReference Column="@Jan1" ParameterCompiledValue="'2013-01-01'" />
  <ColumnReference Column="@Board" ParameterCompiledValue="(71)" />
</ParameterList>
The second execution plan uses variables rather than parameters. That changes things significantly.
DECLARE @Board INT
DECLARE @Jan1 DATE
DECLARE @Dec31 DATE
SET @Board=71
SET @Jan1='January 1, 2013'
SET @Dec31='December 31, 2013'
INSERT INTO GeocacherPoints
SQL Server does not sniff the specific value of variables and generates a generic plan similar to using the OPTIMIZE FOR UNKNOWN hint. The estimated row counts in that plan are much higher than in the first plan.
You don't state which is the fast plan and which is the slow plan. If the one using variables is faster, you likely need to update statistics; you may well be encountering the issue described in Statistics, row estimations and the ascending date column. If the one using parameters is faster, you can achieve variable sniffing and get it to take account of the actual variable values by using the OPTION (RECOMPILE) hint.
If you are using SqlCommand and you don't specify a value for CommandTimeout, it will automatically time out after 30 seconds.
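For example (a sketch; the value is in seconds, and 0 means wait indefinitely):
// Raise the limit from the default 30 seconds to 5 minutes.
cmd.CommandTimeout = 300;
// or, for the data adapter shown in the first question:
ada.SelectCommand.CommandTimeout = 300;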
This question seems to crop up periodically; this link was at the top of the list:
Query times out from web app but runs fine from management studio. ARITHABORT definitely appears to be the culprit.

Sql Connection Catalog changes after statement

Issue:
After running a SQL command with SqlCommand() on a database that then inserts data into another database, all following statements fail with the exception "Invalid object name".
Question:
Why is this happening?
Additional Information:
I know how to fix it by adding the temp database name before the table in the SELECT portion, but since the query is run in the context of that database, that shouldn't be necessary, and it isn't necessary when I run the statements individually in SQL Server Management Studio.
Program Logic:
Create and fill temp database (All tables ASI_...)
In context of temp database select data and then insert it into another database (#AcuDB)
Repeat Step 2 for X queries
Insertion code:
if (TempD.State == System.Data.ConnectionState.Closed) TempD.Open();
Command = new SqlCommand(temp, TempD);
Command.CommandTimeout = 0;
Command.ExecuteNonQuery();
Sample SQL being run that errors after a previous similar statement:
insert into #AcuDB..Batch (CompanyID, BranchID, Module, BatchNbr, CreditTotal, DebitTotal, ControlTotal,
    CuryCreditTotal, CuryDebitTotal, CuryControlTotal, CuryInfoID, LedgerID, BatchType, Status, AutoReverse,
    AutoReverseCopy, OrigModule, OrigBatchNbr, DateEntered, Released, Posted, LineCntr, CuryID, ScheduleID,
    NoteID, CreatedByID, CreatedByScreenID, CreatedDateTime, LastModifiedByID, LastModifiedByScreenID,
    LastModifiedDateTime, Hold, Description, Scheduled, Voided, FinPeriodID, TranPeriodID)
select 2, 1, Module, BatchNbr, CreditTotal, DebitTotal, ControlTotal, CuryCreditTotal, CuryDebitTotal,
    CuryControlTotal, i.CuryInfoID,
    isnull((select a.LedgerID from #AcuDB..ledger a where a.LedgerCD = b.LedgerID), 0) [LedgerID],
    BatchType, Status, AutoReverse, AutoReverseCopy, OrigModule, OrigBatchNbr,
    DateEntered [DateEntered], Released, Posted, LineCntr, b.CuryID, ScheduleID, NoteID,
    'B5344897-037E-4D58-B5C3-1BDFD0F47BF9' [CreatedByID], '00000000' [CreatedByScreenID], GETDATE() [CreatedDateTime],
    'B5344897-037E-4D58-B5C3-1BDFD0F47BF9' [LastModifiedByID], '00000000' [LastModifiedByScreenID], GETDATE() [LastModifiedDateTime],
    Hold, Description, Scheduled, Voided, b.FinPeriodID, TranPeriodID
from Temp..ASI_GLBatch b
    inner join #AcuDB..CurrencyInfo i on i.CuryEffDate = b.DateEntered
    cross join #AcuDB..glsetup g
where b.companyID = #CpnyCD and b.branchID = #BranchCD
Going across databases like this is always precarious due to the way SQL will try to imply contexts. In this case, unless #AcuDB contains the fully-qualified address that includes both the database and the schema, you're going to get errors because of the way you're switching contexts around. Get a reading on what #AcuDB contains and try to run the batch in a stored procedure. Set up a separate instance to sandbox the scenario if you have to. The C# end of this is going to continue to complicate things until you cut it out for a little bit and make sure your SQL is good. After you're sure it's okay, integrate it back into the C# code and work from there.
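One quick way to confirm whether the connection's current catalog is actually drifting (a hedged suggestion, not from the original answer; "Temp" stands for whatever database the batch is supposed to run in) is to check SqlConnection.Database around each command and reset it when needed:
// SqlConnection.Database reflects the catalog the connection is currently using.
Console.WriteLine(TempD.Database);      // before the insert batch
Command.ExecuteNonQuery();
Console.WriteLine(TempD.Database);      // after - has it changed?

// If it has drifted, switch back explicitly before the next statement.
TempD.ChangeDatabase("Temp");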
