I have a problem with an SQL query.
I use Visual Studio 2015 to test my website, and WebMatrix.Data.Database to do my queries.
Anyway, I am creating a reply system, and I use this query to get the replies:
SELECT *
FROM ThreadReply
WHERE ThreadId = " + ThreadId + "
ORDER BY ReplyId DESC
I know there is no prevention against SQL injections so please don't ask me to fix that.
What I want to add to the query is the ability to start from a certain row and continue for a certain number of rows. I mean something like the LIMIT clause, where you can pick the row to start at, but apparently that syntax doesn't work on SQL Server from Visual Studio.
Also, please note: I want the row positions counted within the query's results (the rows for which the WHERE condition is true), not the row positions of the actual table.
Here you go; the comments in the SQL are pretty clear, I think. I also fixed your SQL injection: you just need to add the parameter via SqlCommand.Parameters.
SELECT *
FROM ThreadReply
WHERE ThreadId = @ThreadId
ORDER BY ReplyId DESC
OFFSET 10 ROWS          -- skip 10 rows
FETCH NEXT 10 ROWS ONLY -- take 10 rows
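For completeness, here is the same skip/take pattern with every value bound as a parameter. This is a minimal sketch using Python's sqlite3 as a stand-in (SQLite spells paging LIMIT ? OFFSET ? rather than OFFSET ... FETCH; the ThreadReply schema is assumed from the question):

```python
import sqlite3

# Stand-in for the ThreadReply table (schema assumed from the question).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ThreadReply (ReplyId INTEGER, ThreadId INTEGER)")
conn.executemany("INSERT INTO ThreadReply VALUES (?, ?)",
                 [(i, 1) for i in range(1, 31)])

def get_replies(thread_id, skip, take):
    # Every value is bound as a parameter -- nothing is concatenated
    # into the SQL string, which also closes the injection hole.
    return conn.execute(
        "SELECT ReplyId FROM ThreadReply"
        " WHERE ThreadId = ?"
        " ORDER BY ReplyId DESC"
        " LIMIT ? OFFSET ?",
        (thread_id, take, skip)).fetchall()

page = get_replies(1, skip=10, take=10)  # rows 11..20 of the DESC ordering
```

The same shape carries over to SqlCommand: one parameter each for the thread id, the offset, and the fetch count.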
Related
I am developing a C# Windows desktop application in Visual Studio and am literally stuck on how to phrase my code so that, when an item is sold in my supermarket system, it reduces the available stock in the database. Any assistance rendered will be much appreciated.
I tried with this SQL, but it failed miserably, because I didn't even know the C# needed to go with it:
string query = $"update ProductTable set Product_Quantity = Product_Quantity - {OrderProductQuantity} where Product_Name = '{productName}'";
The answer? It's that you don't write any code at all to reduce the stock.
A simple query will get you the current stock:
WITH MyStock AS
(
    SELECT StockID,
        (SELECT SUM(StockSold) FROM Invoices
         WHERE Invoices.StockID = Stock.StockID) AS ItemsSold,
        (SELECT SUM(StockAdded) FROM tblStock
         WHERE tblStock.StockID = Stock.StockID) AS ItemsAdded
    FROM Stock
)
SELECT *, (ItemsAdded - ItemsSold) AS InventoryLevel FROM MyStock
So, as you can see, you don't have to write any code at all. You simply sum the items added, less the items sold, and you are done.
Now, the above is SQL Server syntax; in MySQL you could, say, use a view - I don't know how (or if) aliased columns can be re-used there, but the above gives you the basic idea.
Really nice, right? Say you have to edit a sales transaction, or even delete a row - the inventory value updates automatically, since you compute it on the fly as required.
As a result, your UI code becomes VERY easy to write. You never have to write ANY code to update the inventory amounts, since you are free to calculate the in stock value anytime you want based on above.
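The compute-on-read idea is easy to verify. Here is a minimal sketch using Python's sqlite3, with an invented three-table schema along the lines of the query above:

```python
import sqlite3

# Invented schema: a Stock master list, stock additions, invoiced sales.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Stock    (StockID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE tblStock (StockID INTEGER, StockAdded INTEGER);
    CREATE TABLE Invoices (StockID INTEGER, StockSold INTEGER);
    INSERT INTO Stock    VALUES (1, 'Beans');
    INSERT INTO tblStock VALUES (1, 50), (1, 25);
    INSERT INTO Invoices VALUES (1, 10), (1, 5);
""")

# Inventory is never stored -- it is derived fresh on every read.
row = conn.execute("""
    SELECT s.StockID,
           COALESCE((SELECT SUM(StockAdded) FROM tblStock t
                     WHERE t.StockID = s.StockID), 0)
         - COALESCE((SELECT SUM(StockSold) FROM Invoices i
                     WHERE i.StockID = s.StockID), 0) AS InventoryLevel
    FROM Stock s
""").fetchone()
```

Editing or deleting a sale row changes the next read's result with no update code anywhere; the COALESCE calls guard against items that have no sales or additions yet.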
I need to run a query on a MySQL DB that retrieves the last 10 rows from a table which has no index column. I succeeded in creating the query:
SET @row_number := 0;
SELECT activity FROM (
    SELECT @row_number := @row_number + 1 AS rowNumber, activity
    FROM activities a) AS myT
WHERE myT.rowNumber > (SELECT COUNT(activity) FROM activities) - 10
But I need to run it through C#, and using MySqlCommand I can't create the parameter @row_number. Using command.Parameters.AddWithValue doesn't help,
because the parameter needs to be assigned while the query is executing.
(Using Parameters.AddWithValue produces the following error:
"..syntax to use near ':= 1 + 1...").
Thanks.
I think what you want is the LIMIT clause:
select activity from activities
LIMIT 10
I believe you'll have to create a stored procedure if you want to use a parameter for the LIMIT amount.
Maybe I'm misunderstanding your question, but if you're only trying to grab the first 10 records, why not use this:
SELECT TOP 10 [columnName] FROM [Table]
This returns the top 10 results in the database table.
or in C# you can set a variable:
int rowNumber = 0;
then use command.Parameters.Add("@rowNumber", rowNumber);
That might work.
Hope this helps.
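The sticking point in this thread - binding the row count from code - works with an ordinary placeholder in most drivers, no session variable or stored procedure needed. A minimal sketch using Python's sqlite3 as a stand-in (table and column names from the question; the implicit rowid ordering is a SQLite-specific assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activities (activity TEXT)")
conn.executemany("INSERT INTO activities VALUES (?)",
                 [("act%d" % i,) for i in range(1, 26)])

n = 10
# Take the last n rows by ordering the implicit insertion-order rowid
# DESC, then restore the original order. The row count n is bound with
# an ordinary placeholder.
last_rows = conn.execute(
    "SELECT activity FROM"
    " (SELECT rowid AS r, activity FROM activities ORDER BY r DESC LIMIT ?)"
    " ORDER BY r",
    (n,)).fetchall()
```

MySQL has no hidden rowid, so there you would still need some ordering expression, but the LIMIT value itself can be parameterized the same way in a prepared statement.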
I am creating an application that takes data from a text file containing sales data from the Amazon marketplace. The marketplace has items with different names compared to the data in our main database. The application accepts the text file as input, and it needs to check whether each item exists in our database. If an item is not present, I should offer the option to save it to a Master table, or to a Sub-item table mapped to a master item. My question is: if the text file has 100+ items, should I hit the database each time to check if the data exists there? Is there any better way of doing it, so that we can minimize the database hits?
I have two options that I have used earlier:
Hit database and check if it exists in table.
Fill the data in a DataTable and use DataTable.Select to check if it exists.
Can someone tell me the best way to do this? I have to check two tables (master table, subItem table), maybe one at a time. Thanks.
Update:
@Downvoters, add a comment.
I am not asking you what the way is to check if an item exists in the database. I just want to know the best way of doing that. Should I be hitting the database 1000 times if a file has 1000 items? That's my question.
The current query I use:
IF EXISTS (SELECT * FROM [table] WHERE itemname = [itemname])
    SELECT 'True'
ELSE
    SELECT 'False'
RETURN
(From Chat)
I would create a Stored Procedure which takes a table valued parameter of all the items that you want to check. You can then use a join (a couple of options here)* to return a result set of items and whether each one exists or not. You can use TVP's from ADO like this.
It will certainly handle the 100 to 1000 row range mentioned in your post. To be honest, I haven't used it in the 1M+ range.
In newer versions of SQL Server, I would prefer TVPs over using an XML input parameter, as it is really quite cumbersome to pack the XML in your .NET code and then unpack it again in your SPROC.
(*) Re joins: with the result set, you can either just inner join the TVP to your items / product table and check in .NET for the rows that don't come back, or you can do a left outer join with the TVP as the left table, and e.g. ISNULL() missing items to 0 / 'false' etc.
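The TVP approach is SQL Server-specific, but the one-round-trip idea itself is easy to sketch. Here is a rough Python sqlite3 version that checks a whole batch with a single IN query (the MasterItems table and its contents are hypothetical):

```python
import sqlite3

# Hypothetical master table with two known items.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MasterItems (itemname TEXT)")
conn.executemany("INSERT INTO MasterItems VALUES (?)",
                 [("apple",), ("banana",)])

def check_items(names):
    # One round trip for the whole (non-empty) batch: expand one
    # placeholder per name, then flag each input against the result set.
    placeholders = ",".join("?" * len(names))
    found = {row[0] for row in conn.execute(
        "SELECT itemname FROM MasterItems WHERE itemname IN (%s)"
        % placeholders, names)}
    return {name: name in found for name in names}

result = check_items(["apple", "cherry", "banana"])
```

Whether via TVP, IN-list, or temp table, the point is the same: 1000 items cost one database hit, not 1000.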
Send it as a batch of 100 items to the database. A stored procedure will probably help, since repetitive queries have to be fired. If the data is not changed frequently, you can consider caching. I assume you will be making service calls from your .NET application, so ingest an XML document on the back end, in batches. Consider increasing the batch size based on the file size.
If your entire application is local, the batch size may be very high, as there is no network overhead - but still, don't make 100 calls to the DB.
Try it like this:
SELECT EXISTS(SELECT * FROM table1 WHERE itemname = [itemname])
or, skipping the column fetch:
SELECT EXISTS(SELECT 1 FROM table1 WHERE itemname = [itemname])
I've got a SQL Reporting Services 2005 report that includes a table on the first page. I have enough room on the first page to show the first 5 items from a list. If there are more than 5 items, I want the list to continue to another table on the second page of the report.
I also want the tables to have a fixed number of rows. For example, the table on the first page always shows 5 rows even if no items exist in the list. This allows the border to still be visible so that the page layout isn't messed up.
Any thoughts on the best way to get this working?
I think that this is best done in the Query / Stored Proc that returns the data rather than in SSRS.
You can do something like this
SELECT TOP 5 * FROM
(
    SELECT TOP 5 *
    FROM DummyOrBlankDataFillerView
    UNION
    SELECT TOP 5 *, Row_Number() OVER (ORDER BY YourColumns) AS OrderByClause
    FROM ActualQueryThatBringsBackRecords
) AS Padded
ORDER BY OrderByClause
OrderByClause comes from your ordering columns and will hold (1, 2, 3, 4, 5) for the real rows; DummyOrBlankDataFillerView should expose a matching column whose values are (6, 7, 8, 9, 10), so the filler always sorts after the real data.
Then, between the ORDER BY and the TOP 5, you should have what you need to display.
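If you'd rather pad outside of SQL, the same idea fits in a few lines of code. A minimal sketch (the two-column blank-row shape is an assumption; match it to your report's columns):

```python
def pad_rows(rows, size=5, blank=("", "")):
    # Real rows first, blank filler after; always exactly `size` rows,
    # truncating if more than `size` real rows arrive.
    return (rows + [blank] * size)[:size]

padded = pad_rows([("Widget", "9.99"), ("Gadget", "4.50")])
```

Bind the first table to the first `size` padded rows and the second table to the remainder, and both tables keep their borders even with an empty list.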
I don't think there's an easy way to do this. AFAIK, SSRS won't help you here. You could change your query logic so that it pads out the resultset with a number of 'dummy' rows if the actual number of rows returned is < 5. However this seems like a messy solution.
Probably not exactly the answer you are looking for, but you could limit the query or data source the first table is bound to, to 5 items or whatever. Then the second table would be bound to a query or data source with just the remaining items.
I don't think there is a way in the report to do this with a property or anything like that.
You will need to union in some blank data when there is none.
Add a calculated field to the dataset, called rowcount for example:
=RowNumber("datasetname")
Then filter the first table on rowcount < 6.
I am using VSTS 2008 + C# + .Net 3.0 + ADO.Net + SQL Server 2008. And from ADO.Net I am invoking a stored procedure from SQL Server side. The stored procedure is like this,
SELECT Table1.col2
FROM Table1
LEFT JOIN Table2 ON Table1.col1 = Table2.col1
WHERE Table2.col1 IS NULL
My question is, how to retrieve the returned rows (Table1.col2 in my sample) efficiently? My result may return up to 5,000 rows and the data type for Table1.col2 is nvarchar (4000).
thanks in advance,
George
You CANNOT - you can NEVER retrieve that much data efficiently...
The whole point of being efficient is to limit the data you retrieve - only the columns you really need (no SELECT *, but a SELECT (list of fields), which you already do), and only as many rows as you can handle easily.
For instance, you don't want to fill a drop-down or listbox, where the user needs to pick a single value, with thousands of entries - that's just not feasible.
So I guess my point really is: if you really, truly need to return 5,000 rows or more, it'll just take its time. There's not much you can do about that (if you transmit 5,000 rows at 5,000 bytes per row, that's 25,000,000 bytes, or 25 megabytes - no magic will make that go fast).
It'll only go really fast if you find a way to limit the number of rows returned to 10, 20, 50 or so. Think: server-side paging!! :-)
Marc
You don't say what you want to do with the data. However, assuming you need to process the results in .NET, reading them with a SqlDataReader would be the most efficient way.
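The forward-only reader pattern can be sketched in Python, with sqlite3's cursor standing in for SqlDataReader (table name from the question, data invented): iterating the cursor streams rows instead of buffering them all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (col2 TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?)",
                 [("value%d" % i,) for i in range(5000)])

# Iterating the cursor streams rows one at a time, like a forward-only
# data reader, instead of materializing all 5,000 rows up front
# (as fetchall would).
count = 0
for (col2,) in conn.execute("SELECT col2 FROM Table1"):
    count += 1  # process each row as it arrives
```

The C# equivalent is the `while (reader.Read())` loop: memory stays flat regardless of how many rows the query returns.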
I'd use exists for one.
SELECT Table1.col2
FROM Table1
WHERE NOT EXISTS (SELECT *
                  FROM Table2
                  WHERE Table2.col1 = Table1.col1)
The query can be efficient (assuming col1 is indexed, and ideally the index also covers col2 - a very wide index, of course), but you still have to shovel a lot of data over the network.
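As a quick sanity check that the NOT EXISTS form returns exactly the unmatched rows, here is a sketch with Python's sqlite3 (schema from the question, data invented):

```python
import sqlite3

# Invented data: one Table1 row has a Table2 match, one does not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (col1 INTEGER, col2 TEXT);
    CREATE TABLE Table2 (col1 INTEGER);
    INSERT INTO Table1 VALUES (1, 'kept'), (2, 'dropped');
    INSERT INTO Table2 VALUES (2);
""")

rows = conn.execute("""
    SELECT Table1.col2 FROM Table1
    WHERE NOT EXISTS (SELECT * FROM Table2
                      WHERE Table2.col1 = Table1.col1)
""").fetchall()
```

Only the row with no match in Table2 survives, which is the same set the LEFT JOIN ... IS NULL version produces.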
It depends what you mean by performance: 5,000 rows isn't much for a report, but it's a lot for a combo box.