Select only first four lines from SQL text field - C#

Sql Server 2008 Express >> Visual Web Developer >> C#
I'm pulling records out of the table like this:
SELECT Name, Category, Review FROM ReviewTable
This works fine, but the Review field's type in SQL Server is text, and the values are very long (think a magazine article).
I only want to pull the first four lines of the Review field for each row and display them in my repeater control; these lines will serve as a teaser for the article.
Is this possible? How can it be accomplished?

This will return the first 1000 characters of the review:
SELECT Name, Category, CAST(Review AS VARCHAR(1000)) AS Review FROM ReviewTable
If you must have the first 4 lines, you'll need some kind of split function. This could work:
CREATE FUNCTION [dbo].[Split]
(
    @SearchString VARCHAR(8000),
    @Separator VARCHAR(5),
    @MaxItems INT
)
RETURNS @strtable TABLE (strval VARCHAR(8000))
AS
BEGIN
    DECLARE @tmpStr VARCHAR(8000), @intSeparatorLength INT, @counter INT
    IF @MaxItems IS NULL
        SET @MaxItems = 2147483647 -- max int
    SET @intSeparatorLength = LEN(@Separator)
    SET @counter = 0
    SET @tmpStr = @SearchString
    WHILE 1 = 1
    BEGIN
        INSERT INTO @strtable VALUES (SUBSTRING(@tmpStr, 0, CHARINDEX(@Separator, @tmpStr)))
        SET @tmpStr = SUBSTRING(@tmpStr, CHARINDEX(@Separator, @tmpStr) + LEN(@Separator), 8000)
        SET @counter = @counter + 1
        IF (CHARINDEX(@Separator, @tmpStr) < 1 OR @counter >= @MaxItems)
            BREAK
    END
    RETURN
END
Usage: select * from dbo.split('aaa**bbbb**CCCC**dddd**eeee**dffff**ggggg', '**', 4)
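For readers more comfortable outside T-SQL, the loop above amounts to an ordinary bounded split. A sketch of the same behavior in Python (names are mine; like the T-SQL loop, a trailing piece with no separator after it is never returned):

```python
def split_string(search_string, separator, max_items=None):
    """Sketch of dbo.Split above: return at most max_items pieces.

    Mirrors the T-SQL loop, which only inserts the text *before* each
    separator, so any trailing remainder is dropped here as well.
    """
    pieces = search_string.split(separator)[:-1]  # drop the trailing remainder
    if max_items is not None:
        pieces = pieces[:max_items]
    return pieces
```

With the usage string above, `split_string('aaa**bbbb**CCCC**dddd**eeee**dffff**ggggg', '**', 4)` yields the first four pieces, matching what the table-valued function returns.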

Well, the first four lines may be a bit more difficult, but why don't you just put out the first 250 characters or so?
SELECT Name, Category, SubString(Review, 1, 250) AS Review FROM ReviewTable

If your database server is in the same local network as your web server, I'd probably just select the entire field. You still have to do a lookup to read any of that column's data, so on the SQL side, finding the data costs the same either way; the only downside of retrieving the entire field is the amount of data passed between the servers. So if they're in the same network, that would likely be cheaper than reworking each record during selection. It also gives you the ability to cache the response, so you don't have to hit the database again when the user wants to see the full version of the text.
But, to answer your question, the below should do it, although it looks rather tacky. (Note that T-SQL does not interpret '\n' as a newline inside a string literal, so the line break has to be written as CHAR(10).)
SELECT Name, Category,
       LEFT(CONVERT(VARCHAR(8000), Review),
            CHARINDEX(CHAR(10), CONVERT(VARCHAR(8000), Review),
                CHARINDEX(CHAR(10), CONVERT(VARCHAR(8000), Review),
                    CHARINDEX(CHAR(10), CONVERT(VARCHAR(8000), Review),
                        CHARINDEX(CHAR(10), CONVERT(VARCHAR(8000), Review)) + 1) + 1) + 1) - 1)
FROM ReviewTable
...hmm. Really, though, I'd go with my first paragraph.
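If you do take the select-everything route, the teaser truncation becomes a few lines of application code. A minimal sketch (Python for brevity; the same logic translates directly to the C# code-behind, and the function name is mine):

```python
def teaser(text, max_lines=4, ellipsis="..."):
    """Return the first max_lines lines of text, for display as a teaser."""
    lines = text.splitlines()
    head = "\n".join(lines[:max_lines])
    # append an ellipsis only when the article was actually truncated
    return head + ellipsis if len(lines) > max_lines else head
```

This also sidesteps the SQL question entirely: you bind `teaser(row["Review"])` to the repeater instead of the raw column.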


How to insert Huge dummy data to Sql server

The development team has finished their application, and as a tester I need to insert 1,000,000 records into its 20 tables for performance testing.
I've gone through the tables, and there are relationships between all of them.
To insert that much dummy data I would need to understand the application completely in a very short span of time, and I don't have the dummy data yet either.
Is there any way in SQL Server to do this kind of bulk data insertion? Please share your approaches.
Currently I am planning to create the dummy data in Excel, but I'm not sure about the relationships between the tables.
I found on Google that SQL Profiler will show the order of execution, but I'm still waiting for access so I can analyze it.
One more thing I found on Google is that a Red Gate tool can be used.
Is there a script or any other solution to perform this task in a simple way?
I'm sorry if this is a common question; this is my first time working on a real-world SQL scenario, though I do know SQL.
Why don't you generate those records in SQL Server? Here is a script to generate a table variable with 1,000,000 rows:
DECLARE @values TABLE (DataValue INT, RandValue INT)

;WITH mycte AS
(
    SELECT 1 DataValue
    UNION ALL
    SELECT DataValue + 1
    FROM mycte
    WHERE DataValue + 1 <= 1000000
)
INSERT INTO @values (DataValue, RandValue)
SELECT
    DataValue,
    CONVERT(INT, CONVERT(VARBINARY(4), NEWID(), 1)) AS RandValue
FROM mycte m
OPTION (MAXRECURSION 0)

SELECT
    v.DataValue,
    v.RandValue,
    (SELECT TOP 1 [User_ID] FROM tblUsers ORDER BY NEWID())
FROM @values v
In the table @values you will have a random int value (column RandValue) which can be used to generate values for other columns. You also have an example of picking a random foreign key.
Below is a simple procedure I wrote to insert millions of dummy records into a table. I know it's not the most efficient one, but it serves the purpose; for a million records it takes around 5 minutes. You pass the number of records you want generated when executing the procedure.
IF EXISTS (SELECT 1 FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[DUMMY_INSERT]') AND type IN (N'P', N'PC'))
BEGIN
    DROP PROCEDURE DUMMY_INSERT
END
GO
CREATE PROCEDURE DUMMY_INSERT (
    @noOfRecords INT
)
AS
BEGIN
    DECLARE @count INT
    SET @count = 1;
    WHILE (@count <= @noOfRecords) -- <= so that exactly @noOfRecords rows are inserted
    BEGIN
        INSERT INTO [dbo].[LogTable] ([UserId],[UserName],[Priority],[CmdName],[Message],[Success],[StartTime],[EndTime],[RemoteAddress],[TId])
        VALUES (1, 'user_' + CAST(@count AS VARCHAR(256)), 1, 'dummy command', 'dummy message.', 0,
                CONVERT(VARCHAR(50), DATEADD(D, ROUND(RAND() * 1000, 1), GETDATE()), 121),
                CONVERT(VARCHAR(50), DATEADD(D, ROUND(RAND() * 1000, 1), GETDATE()), 121),
                '160.200.45.1', 1);
        SET @count = @count + 1;
    END
END
You can use a cursor to duplicate existing data, for example with this simple snippet (the select list must match the variables being fetched into; [Symbol] and [SY_ID] are placeholder column names):
DECLARE @SYMBOL NCHAR(255),  -- sample V
        @SY_ID  INT          -- sample V

DECLARE R2 CURSOR
    FOR SELECT [Symbol], [SY_ID]
        FROM [TableName]
    FOR READ ONLY;

OPEN R2
FETCH NEXT FROM R2 INTO @SYMBOL, @SY_ID
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    INSERT INTO [TableName] ([Symbol], [SY_ID])
    VALUES (@SYMBOL, @SY_ID)
    FETCH NEXT FROM R2 INTO @SYMBOL, @SY_ID
END
CLOSE R2
DEALLOCATE R2

/* wait a ... moment */
SELECT COUNT(*) -- check result
FROM [TableName]

Storing ASP.NET style security in SQL Server xml column?

I would like to store security information for records in a SQL Server database. The security info would ideally be in the same form as what you might see in a config file, for consistency purposes:
<authorization>
<allow roles="Admins"/>
<allow users="SomeGuy,SomeOtherGuy"/>
<deny users="*"/>
</authorization>
I'd then like to be able to query the database for everything that a particular user is permitted access to, given their username and a list of their roles.
Does anyone have a suggestion on how best to do this? Or am I going about this the wrong way?
An easy brute force solution would be to just read every row in the database and pull each security rule XML into some class that will do the evaluation for me - but obviously that's going to be slow and on large tables will be unreasonable.
Another thing that comes to mind is making a child table of some kind which includes a priority of some kind to indicate the order in which each allow or deny node should be applied. However, I have quite a few tables that need this feature, and if I can avoid creating a ton of child tables, that would be ideal.
Though I have limited experience with XML columns in SQL Server, I can probably build an XML query to determine if a user is allowed - something starting with (/authorization/allow/@users)[1], perhaps. However, the order of the nodes matters, so while I could probably find a node that matches a given name or role, I don't know how to do any sort of set-based operation to check whether the user is denied or allowed based on which comes first.
So, given a user name and a comma delimited list of roles, what is the best way to check that person's access rights on a particular row in the database?
Well, I've come up with a solution, but it's not ideal. For 10,000 records, it takes 5 seconds to return all of the rows that match the security profile. That isn't a total disaster, and it does work, but I'll have to come back to this problem later and improve it.
Here's how I solved it. Keep in mind that I only worked on this for a few hours.
Before I could really do anything, I knew I was going to need a function to compare two comma-delimited lists: I need the user's roles in a list, and to see whether any of those roles appear in the authorization settings stored in my XML column, as detailed in the original post. For this, I made two functions.
The first is a commonly seen function that does string splitting using XML:
IF EXISTS (
SELECT * FROM sysobjects WHERE id = object_id(N'ufnSplitStrings')
AND xtype IN (N'FN', N'IF', N'TF')
)
DROP FUNCTION ufnSplitStrings
GO
CREATE FUNCTION dbo.ufnSplitStrings
(
    @List NVARCHAR(MAX),
    @Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
    SELECT Item = y.i.value('(./text())[1]', 'nvarchar(4000)')
    FROM
    (
        SELECT x = CONVERT(XML, '<i>'
            + REPLACE(@List, @Delimiter, '</i><i>')
            + '</i>').query('.')
    ) AS a CROSS APPLY x.nodes('i') AS y(i)
);
With that function established, I could then create another function to do the comparison I wanted:
IF EXISTS (
SELECT * FROM sysobjects WHERE id = object_id(N'ufnContainsAny')
AND xtype IN (N'FN', N'IF', N'TF')
)
DROP FUNCTION ufnContainsAny
GO
CREATE FUNCTION dbo.ufnContainsAny(@List1 NVARCHAR(MAX), @List2 NVARCHAR(MAX))
RETURNS INT
AS
BEGIN
    DECLARE @Ret AS INT = 0
    SELECT @Ret = COUNT(*) FROM dbo.ufnSplitStrings(@List1, ',') x
    JOIN dbo.ufnSplitStrings(@List2, ',') y ON x.Item = y.Item
    RETURN @Ret
END;
GO
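The semantics of ufnContainsAny reduce to checking how many items two delimited lists share. A quick sketch of the same check outside SQL (Python, function name is mine):

```python
def contains_any(list1, list2, delimiter=","):
    """Return the number of items from list1 found in list2,
    mirroring ufnContainsAny's COUNT(*) over the joined splits."""
    items1 = [s for s in list1.split(delimiter) if s]
    items2 = set(s for s in list2.split(delimiter) if s)
    return sum(1 for item in items1 if item in items2)
```

As with the SQL function, callers only care whether the result is greater than zero.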
Finally, I could use that function to assemble my main UserIsAuthorized function.
IF EXISTS (
SELECT * FROM sysobjects WHERE id = object_id(N'ufnUserIsAuthorized')
AND xtype IN (N'FN', N'IF', N'TF')
)
DROP FUNCTION ufnUserIsAuthorized
GO
CREATE FUNCTION dbo.ufnUserIsAuthorized(@SecurityRules XML, @UserName NVARCHAR(64), @UserRoles NVARCHAR(MAX))
RETURNS INT
AS
BEGIN
    DECLARE @ret INT = 0;
    DECLARE @AuthType NVARCHAR(32);
    DECLARE @authRules TABLE (a NVARCHAR(32), u NVARCHAR(MAX), r NVARCHAR(MAX), o INT)

    INSERT INTO @authRules
    SELECT
        a = value.value('local-name(.[1])', 'varchar(32)'),
        u = ',' + value.value('@users', 'varchar(max)') + ',',
        r = ',' + value.value('@roles', 'varchar(max)') + ',',
        o = value.value('for $i in . return count(../*[. << $i]) + 1', 'int')
    FROM @SecurityRules.nodes('//allow, //deny') AS T(value)

    SELECT TOP 1 @AuthType = a FROM @authRules
    WHERE CHARINDEX(',' + @UserName + ',', u) > 0 OR CHARINDEX(',*,', u) > 0
       OR dbo.ufnContainsAny(r, @UserRoles) > 0 OR CHARINDEX(',*,', r) > 0
    GROUP BY a
    ORDER BY MIN(o)

    IF (@AuthType IS NOT NULL AND @AuthType = 'allow')
        SET @ret = 1;
    RETURN @ret;
END;
That function splits the XML allow and deny nodes into a table containing the authorization type (allow or deny), the users list, the roles list, and the order in which each node appears in the document. Then I grab the first node in document order that mentions the user or one of the user's roles; if that node is an "allow", I return 1.
Yeah, it's a bit horrendous, because we're declaring a table in every single call. I tried various little tests where I only looked for the user name (to avoid any calls to ufnContainsAny), but the performance didn't change. I also tried changing the "o" column to a simple identity column, since I'm selecting all nodes anyway; that would let it skip what I thought might be a time-consuming calculation of each node's position. But that didn't affect the performance either.
So, not surprisingly, this method needs work. If anyone has any suggestions, I'm all ears.
My initial usage of this feature will involve very few rows, so I can use it in the interim until I come up with a better solution (or abandon this method altogether).
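For clarity, the first-match rule the function implements - walk the allow/deny nodes in document order and let whichever matches first decide - can be sketched outside SQL. A Python illustration using the sample XML from the question (default is deny; names are mine):

```python
import xml.etree.ElementTree as ET

def user_is_authorized(rules_xml, user_name, user_roles):
    """First-match evaluation of <allow>/<deny> rules in document order:
    return True iff the first node matching the user (by name, role,
    or the '*' wildcard) is an <allow>. No match means deny."""
    root = ET.fromstring(rules_xml)
    roles = set(user_roles.split(","))
    for node in root:  # children in document order
        users = node.get("users", "").split(",") if node.get("users") else []
        node_roles = node.get("roles", "").split(",") if node.get("roles") else []
        matches = (
            user_name in users
            or "*" in users
            or roles & set(node_roles)
            or "*" in node_roles
        )
        if matches:
            return node.tag == "allow"
    return False  # no rule matched: deny by default
```

With the `<authorization>` sample from the question, "SomeGuy" hits the second `<allow>` and is admitted, while an unlisted user falls through to `<deny users="*"/>`.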
EDIT:
The performance can be dramatically improved by just skipping the DECLARE table / INSERT. Instead, we can do this:
SELECT TOP 1 @AuthType = a FROM
(
    SELECT
        a = value.value('local-name(.[1])', 'varchar(32)'),
        u = ',' + value.value('@users', 'varchar(max)') + ',',
        r = ',' + value.value('@roles', 'varchar(max)') + ',',
        o = value.value('for $i in . return count(../*[. << $i]) + 1', 'int')
    FROM @SecurityRules.nodes('//allow, //deny') AS T(value)
) AS sec
WHERE CHARINDEX(',' + @UserName + ',', u) > 0 OR CHARINDEX(',*,', u) > 0
   OR dbo.ufnContainsAny(r, @UserRoles) > 0 OR CHARINDEX(',*,', r) > 0
GROUP BY a
ORDER BY MIN(o)

Concurrent access to database - preventing two users from obtaining the same value

I have a table with sequential numbers (think invoice numbers or student IDs).
At some point, the user needs to request the previous number (in order to calculate the next number). Once the user knows the current number, they need to generate the next number and add it to the table.
My worry is that two users will be able to erroneously generate two identical numbers due to concurrent access.
I've heard of stored procedures, and I know that that might be one solution. Is there a best-practice here, to avoid concurrency issues?
Edit: Here's what I have so far:
USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_GetNextOrderNumber]
AS
BEGIN
BEGIN TRAN
DECLARE @recentYear INT
DECLARE @recentMonth INT
DECLARE @recentSequenceNum INT
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- get the most recent numbers
SELECT @recentYear = Year, @recentMonth = Month, @recentSequenceNum = OrderSequenceNumber
FROM dbo.OrderNumbers
WITH (XLOCK)
WHERE Id = (SELECT MAX(Id) FROM dbo.OrderNumbers)
-- increment the numbers
IF (YEAR(GETDATE()) > ISNULL(@recentYear, 0))
BEGIN
    SET @recentYear = YEAR(GETDATE());
    SET @recentMonth = MONTH(GETDATE());
    SET @recentSequenceNum = 0;
END
ELSE
BEGIN
    IF (MONTH(GETDATE()) > ISNULL(@recentMonth, 0))
    BEGIN
        SET @recentMonth = MONTH(GETDATE());
        SET @recentSequenceNum = 0;
    END
    ELSE
        SET @recentSequenceNum = @recentSequenceNum + 1;
END
-- insert the new numbers as a new record
INSERT INTO dbo.OrderNumbers(Year, Month, OrderSequenceNumber)
VALUES (@recentYear, @recentMonth, @recentSequenceNum)
COMMIT TRAN
END
This seems to work, and gives me the values I want. So far, I have not yet added any locking to prevent concurrent access.
Edit 2: Added WITH(XLOCK) to lock the table until the transaction completes. I'm not going for performance here. As long as I don't get duplicate entries added, and deadlocks don't happen, this should work.
You know that SQL Server does that for you, right? You can use an identity column if you need a sequential number, or a computed column if you need to calculate the new value based on another one.
But if that doesn't solve your problem, or if you need a complicated calculation to generate the new number that can't be done in a simple insert, I suggest writing a stored procedure that locks the table, gets the last value, generates the new one, inserts it, and then unlocks the table.
Read this link to learn about transaction isolation levels.
Just make sure to keep the "locking" period as short as possible.
Here is a sample Counter implementation. The basic idea is to use an insert trigger to assign numbers to, let's say, invoices. The first step is to create a table to hold the value of the last assigned number:
create table [Counter]
(
LastNumber int
)
and initialize it with single row:
insert into [Counter] values(0)
Sample invoice table:
create table invoices
(
InvoiceID int identity primary key,
Number varchar(8),
InvoiceDate datetime
)
Stored procedure LastNumber first updates the Counter row and then retrieves the value. As the value is an int, it is simply returned as the procedure's return value; otherwise an output parameter would be required. The procedure takes as a parameter the count of next numbers to fetch; the output is the last of them.
create proc LastNumber (@NumberOfNextNumbers int = 1)
as
begin
    declare @LastNumber int

    update [Counter]
    set LastNumber = LastNumber + @NumberOfNextNumbers  -- holds update lock

    select @LastNumber = LastNumber
    from [Counter]

    return @LastNumber
end
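The reason the update comes before the read is that the UPDATE's lock makes increment-and-read atomic, so two concurrent callers can never reserve overlapping ranges. The same invariant, sketched with an in-process lock standing in for the row lock (Python, purely illustrative):

```python
import threading

class Counter:
    """In-memory sketch of the [Counter] table: the update and the read
    happen under one lock, playing the role of the update lock SQL Server
    holds on the modified row until the transaction ends."""
    def __init__(self):
        self.last_number = 0
        self._lock = threading.Lock()

    def next_numbers(self, count=1):
        with self._lock:            # atomic update + read
            self.last_number += count
            return self.last_number # last number of the reserved range

counter = Counter()
results = []

def worker():
    for _ in range(1000):
        results.append(counter.next_numbers())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every one of the 4000 calls received a distinct number
```

Without the lock (or with the read done before the update), two callers could observe the same value, which is exactly the duplicate-number problem from the question.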
The trigger on the Invoice table gets the number of simultaneously inserted invoices, asks the stored procedure for that many next numbers, and updates the invoices with those numbers.
create trigger InvoiceNumberTrigger on Invoices
after insert
as
    set nocount on
    declare @LastNumber int
    declare @RowsAffected int

    select @RowsAffected = count(*)
    from Inserted

    exec @LastNumber = dbo.LastNumber @RowsAffected

    update Invoices
    -- Year/month parts of number are missing
    set Number = right('000' + ltrim(str(@LastNumber - rowNumber)), 3)
    from Invoices
    inner join
    ( select InvoiceID,
             row_number() over (order by InvoiceID desc) - 1 rowNumber
      from Inserted
    ) insertedRows
    on Invoices.InvoiceID = insertedRows.InvoiceID
In case of a rollback, no gaps will be left, because the counter update rolls back together with the insert. The Counter table could easily be expanded with keys for different sequences; in that case a valid-until date might be nice, because you could prepare the table beforehand and let LastNumber worry about selecting the counter for the current year/month.
Example of usage:
insert into invoices (invoiceDate) values(GETDATE())
As the Number column's value is autogenerated, one should re-read it after the insert. I believe that EF has provisions for that.
The way that we handle this in SQL Server is by using the UPDLOCK table hint within a single transaction.
For example:
INSERT
INTO MyTable (
       MyNumber,
       MyField1 )
SELECT ISNULL(MAX(MyNumber), 0) + 1,
       'Test'
FROM   MyTable WITH (UPDLOCK)
It's not pretty, but since we were provided the database design and cannot change it due to legacy applications accessing the database, this was the best solution that we could come up with.

Wish to observe stored procedures that exist on multiple database and across multiple servers

I have an issue on MS SQL 2005 Enterprise with multiple servers, where I want to gather a collection of metadata across multiple servers and multiple databases. I saw on Stack Overflow a good example using the magical sp_MSforeachdb, which I altered a little bit below. Basically, an MS stored procedure is run dynamically, looking for any database whose name matches 'Case____' (four single-character wildcards). This is great, and it gives me what I want, but only for a single server. I want to do this for more; is that possible, SQL gurus?
Example thus far:
SET NOCOUNT ON
DECLARE @AllTables table (CompleteTableName varchar(256))
INSERT INTO @AllTables (CompleteTableName)
EXEC sp_msforeachdb 'select distinct @@SERVERNAME+''.''+ ''?'' + ''.'' + p.name from [?].sys.procedures p (nolock) where ''?'' like ''Case____'''
SELECT * FROM @AllTables ORDER BY 1
Is there a way, though, in SQL, LINQ, or ADO.NET to run this clever built-in stored procedure (inserting into a table variable) multiple times across servers, but put the results in one set? As far as I know, you cannot switch servers within a single session in SQL Server Management Studio, but I would love to be proven wrong on that one.
E.g.: I have a production environment with 8 servers, and each of those servers has many databases. I could run this multiple times, but I was hoping that if the servers were already linked I could do this from the sys views somehow. However, I am in a SQL 2005 environment, and having looked at MS's download for the sys views, it looks like sys.servers is an island unto itself: its SERVERID does not seem to join to anything else.
I would be willing to use an ADO.NET reader or LINQ in a C# environment and possibly call the above T-SQL code multiple times, but is there a more efficient way to get the info directly in T-SQL if the servers are LINKED SERVERS? Just curious.
The overall purpose of this operation is deployment: to see how many procedures exist across all servers and databases. We do have SQL Compare from Redgate, but I am unaware whether it can script procs that don't exist so that they match set A. Even if it could, I would like to try to make something of my own if feasible.
Any help is much appreciated and if you need further clarification please ask.
I figured it out: once you set up linked servers, you can simply prepend the linked server name to the object to qualify it more distinctly.
E.g., instead of sp_msforeachdb I can do (ServerName).MASTER..sp_msforeachdb, and I can then iterate through my servers (they are LINKED in my case) from the sys.servers table.
I did some things that slow it down, with my left join and with storing everything at once and then examining it with a 'like' statement instead of an explicit qualifier. But overall, I think this solution gives an end user the flexibility of not having to know the exact name of the object to hunt for. I also like that I can now use this with SSIS, SSRS, and ADO.NET, as the procedure does the hunting iteration for me on the SQL Server rather than in an app's memory. I'm sure others may have better ideas, but I did not hear anything, so this is mine.
Complete solution below:
CREATE PROC [PE].[DeployChecker]
(
    @DB VARCHAR(128)
    , @Proc VARCHAR(128)
)
AS
BEGIN
    -- declare variables for dynamic SQL
    DECLARE
        @SQL VARCHAR(512)
        , @x INT

    -- remove the temp table if it exists, as it should not be prepopulated.
    IF OBJECT_ID('tempdb..#Procs') IS NOT NULL
        DROP TABLE #Procs

    -- create a temp table to catch the built-in stored procedure's output
    CREATE TABLE #Procs
    (
        ServerName varchar(64)
        , DatabaseName VARCHAR(128)
        , ObjectName VARCHAR(256)
    )

    SET @x = 1

    -- Loop through the linked servers matching the criteria to see how MANY there are.
    -- In our case the different servers are merely incrementing numbers, so a simple
    -- while loop works; you could be more explicit if needed.
    WHILE @x <= (SELECT COUNT(*) FROM sys.servers WHERE name LIKE 'PCTRSQL_')
    BEGIN
        -- for some reason I can't get 'sp_msforeachdb' to take dynamic SQL directly,
        -- but I can build the call in a variable and then run it.
        SET @SQL = 'Insert Into #Procs Exec PCTRSQL' + CAST(@x AS VARCHAR(2)) + '.MASTER..sp_msforeachdb ' +
            '''select @@SERVERNAME, ''''?'''', name from [?].sys.procedures (nolock) where ''''?'''' like ''''%' + @DB + '%'''' '''
        EXEC (@SQL)
        SET @x = @x + 1
    END
    ;
    -- find the distinct server detail
    WITH s AS
    (
        SELECT DISTINCT
            ServerName
            , DatabaseName
        FROM #Procs
    )
    -- do the search logic in the select statement to see if there is a proc like the one asked for
    , p AS
    (
        SELECT
            ServerName
            , DatabaseName
            , CASE WHEN ObjectName LIKE '%' + @Proc + '%' THEN ObjectName END AS ProcName
        FROM #Procs
        WHERE ObjectName LIKE '%' + @Proc + '%'
    )
    -- now do a left join from the distinct-server cte to the proc-lookup cte; we want ALL the procs
    -- that match the criteria, but if nothing exists we still want a single NULL row as a
    -- reference to the ServerName and DatabaseName
    SELECT
        s.ServerName
        , s.DatabaseName
        , p.ProcName
        , CAST(CASE WHEN ProcName IS NOT NULL THEN 1 ELSE 0 END AS bit) AS ExistsInDB
    FROM s
    LEFT JOIN p ON s.ServerName = p.ServerName
        AND s.DatabaseName = p.DatabaseName
    ORDER BY DatabaseName, ServerName, ProcName
END

MySql Batching Stored Procedure Calls with .Net / Connector?

Is there a way to batch stored procedure calls in MySql with the .Net / Connector to increase performance?
Here's the scenario... I'm using a stored procedure that accepts a few parameters as input. This procedure basically checks to see whether an existing record should be updated or a new one inserted (I'm not using INSERT INTO .. ON DUPLICATE KEY UPDATE because the check involves date ranges, so I can't really make a primary key out of the criteria).
I want to call this procedure a lot of times (let's say batches of 1000 or so). I can, of course, use one MySqlConnection and one MySqlCommand instance, keep changing the parameter values, and call .ExecuteNonQuery() each time.
I'm wondering if there's a better way to batch these calls?
The only thought that comes to mind is to manually construct a string like 'call sp_myprocedure(@parama_1,@paramb_1);call sp_myprocedure(@parama_2,@paramb_2);...', and then create all the corresponding parameters. I'm not convinced this would be any better than calling .ExecuteNonQuery() a bunch of times.
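That batch string can at least be generated mechanically: unique parameter names per call, plus one name-to-value map to bind on the command. A small sketch of that construction (Python for illustration; the @pJ_I naming scheme is made up, and in C# you would add each entry to MySqlCommand.Parameters):

```python
def build_batch(proc_name, rows):
    """Build a multi-statement batch of CALL statements with uniquely
    named parameters, one statement per row of input values."""
    statements = []
    params = {}
    for i, row in enumerate(rows, start=1):
        # one distinct parameter name per value, suffixed by the row index
        names = [f"@p{j}_{i}" for j in range(len(row))]
        statements.append(f"call {proc_name}({', '.join(names)});")
        params.update(zip(names, row))
    return "".join(statements), params
```

Whether this beats repeated ExecuteNonQuery() calls still depends on the connector's round-trip cost, which is exactly the question being asked.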
Any advice? Thanks!
EDIT: More info
I'm actually trying to store data from an external data source on a regular basis. Basically, I'm taking RSS feeds of domain auctions (from various sources like GoDaddy, Pool, etc.) and updating a table with the auction info using this stored procedure (let's call it sp_storeSale). Now, in the table where the sale info gets stored, I want to keep historical sale records for a given domain, so I have a domain table and a sale table; the sale table has a many-to-one relationship with the domain table.
Here's the stored procedure:
-- --------------------------------------------------------------------------------
-- Routine DDL
-- Note: comments before and after the routine body will not be stored by the server
-- --------------------------------------------------------------------------------
DELIMITER $$
CREATE PROCEDURE `DomainFace`.`sp_storeSale`
(
    -- parameters are prefixed p_, and locals v_, so they cannot clash with column names
    p_middle VARCHAR(63),
    p_extension VARCHAR(10),
    p_brokerId INT,
    p_endDate DATETIME,
    p_url VARCHAR(500),
    p_category INT,
    p_saleType INT,
    p_priceOrBid DECIMAL(10, 2),
    p_currency VARCHAR(3)
)
BEGIN
    DECLARE v_existingId BIGINT DEFAULT NULL;
    DECLARE v_domainId BIGINT DEFAULT 0;

    SET v_domainId = fn_getDomainId(p_middle, p_extension);
    SET v_existingId = (
        SELECT id FROM sale
        WHERE
            domainId = v_domainId
            AND brokerId = p_brokerId
            AND UTC_TIMESTAMP() BETWEEN startDate AND endDate
    );
    IF v_existingId IS NOT NULL THEN
        UPDATE sale SET
            endDate = p_endDate,
            url = p_url,
            category = p_category,
            saleType = p_saleType,
            priceOrBid = p_priceOrBid,
            currency = p_currency
        WHERE
            id = v_existingId;
    ELSE
        INSERT INTO sale (domainId, brokerId, startDate, endDate, url,
                          category, saleType, priceOrBid, currency)
        VALUES (v_domainId, p_brokerId, UTC_TIMESTAMP(), p_endDate, p_url,
                p_category, p_saleType, p_priceOrBid, p_currency);
    END IF;
END$$
DELIMITER ;
As you can see, I'm basically looking for an existing record that hasn't 'expired' but has the same domain and broker, in which case I assume the auction isn't over yet and the data is an update to the existing auction. Otherwise, I assume the auction is over, the existing row is a historical record, and the data I've got is for a new auction, so I create a new record.
Hope that clears up what I'm trying to achieve :)
I'm not entirely sure what you're trying to do, but it sounds housekeeping- or maintenance-related, so I won't be too ashamed of posting the following suggestion.
Why don't you move all of your logic into the database and process it all server side?
The following example uses a cursor (shock/horror), but it's perfectly acceptable to use them in such circumstances.
If you can avoid cursors altogether, great; but the main point of my suggestion is to move the logic from your application tier back into the data tier to save on the round trips. You'd call the following sproc once, and it would process the entire range of data in a single call.
call house_keeping(curdate() - interval 1 month, curdate());
Also, if you can provide just a bit more information about what you're trying to do we might be able to suggest other approaches.
Example stored procedure
drop procedure if exists house_keeping;
delimiter #
create procedure house_keeping
(
in p_start_date date,
in p_end_date date
)
begin
declare v_done tinyint default 0;
declare v_id int unsigned;
declare v_expired_date date;
declare v_cur cursor for
select id, expired_date from foo where
expired_date between p_start_date and p_end_date;
declare continue handler for not found set v_done = 1;
open v_cur;
repeat
fetch v_cur into v_id, v_expired_date;
/*
if <some condition> then
insert ...
else
update ...
end if;
*/
until v_done end repeat;
close v_cur;
end #
delimiter ;
Just in case you think I'm completely mad for suggesting cursors, you might want to read this:
Optimal MySQL settings for queries that deliver large amounts of data?
Hope this helps :)
