I have a table that has about 1 million rows. One of the columns is a string, let's call it column A.
Now I need to work on a list L of about 1,000 strings, mostly one or two words, and I need to find all the records in the table where column A contains one of the 1,000 strings in the list L.
The only way I can think of is to use each string in L to do a full table scan, checking whether the string is a substring of the column A content of each row. But that will be O(n²), and for a million rows it will take a very long time.
Is there a better way? Either in SQL or in C# code?
One million rows is a relatively small number these days. You should be able to pull all strings from column A, along with your table's primary key, into memory, and do a regex search using a very long regex composed from your 1000 strings:
var regex = new Regex("string one|string two|string three|...|string one thousand");
Since the regex engine compiles the whole alternation into a single matcher, you get reasonably fast scanning times for your strings. Once your filtering is complete, collect the IDs and query the full rows from the table using them.
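A minimal sketch of that approach (LoadSearchStrings and LoadRows are hypothetical stand-ins for your own data access):

using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// listL holds the ~1,000 search strings; rows holds (Id, ColumnA) pairs
// pulled from the table in one pass.
List<string> listL = LoadSearchStrings();
List<(int Id, string ColumnA)> rows = LoadRows();

// Escape each string so regex metacharacters match literally,
// then join everything into one big alternation.
string pattern = string.Join("|", listL.Select(Regex.Escape));
var regex = new Regex(pattern, RegexOptions.Compiled);

// One scan over the million rows with one combined pattern.
List<int> matchingIds = rows.Where(r => regex.IsMatch(r.ColumnA))
                            .Select(r => r.Id)
                            .ToList();
// Then fetch the full rows: SELECT ... WHERE Id IN (matchingIds)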
The best way to do this is using LINQ. Let's say that you have your list:
List<string> test = new List<string>{"aaa","ddd","ddsc"};
then using LINQ you can construct:
var match = YourTable.Where(t => test.Contains(t.YourFieldName));
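Note that Contains on the list translates to a SQL IN clause, i.e. exact matches rather than the substring matches the question asks for. A sketch of a substring variant (which, depending on the LINQ provider, may not translate to SQL and would then run client-side) looks like this:

// Keep rows whose field contains ANY of the search strings. This may
// execute in memory rather than in SQL, so measure it on a million rows.
var match = YourTable.AsEnumerable()
                     .Where(t => test.Any(s => t.YourFieldName.Contains(s)));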
I suggest looking into full-text search. It won't decrease the number of operations you have to perform, but it will improve their performance.
Assuming you use SQL Server (you should always use the relevant tag to specify the RDBMS), you can create a DataTable from your List<string> and send it to a stored procedure as a table-valued parameter.
Inside the stored procedure you can use a simple join of that table-valued parameter to your table, matching database_table.col against table_parameter.value using full-text search.
Of course, things will go a lot faster if you create a full-text index, as suggested in the comments by Glorfindel.
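A minimal sketch of the client side, assuming a user-defined table type dbo.StringList(Value nvarchar(200)) and a stored procedure dbo.FindMatches already exist (both names are illustrative):

using System.Data;
using Microsoft.Data.SqlClient;

// Shape the List<string> into a one-column DataTable for the TVP.
var tvp = new DataTable();
tvp.Columns.Add("Value", typeof(string));
foreach (string s in searchStrings)
    tvp.Rows.Add(s);

using var conn = new SqlConnection(connectionString);
using var cmd = new SqlCommand("dbo.FindMatches", conn)
{
    CommandType = CommandType.StoredProcedure
};
cmd.Parameters.Add(new SqlParameter("@strings", SqlDbType.Structured)
{
    TypeName = "dbo.StringList",
    Value = tvp
});

conn.Open();
using var reader = cmd.ExecuteReader();
// Inside dbo.FindMatches, join the TVP to your table using full-text
// search (CONTAINS/CONTAINSTABLE), as described above.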
Is there any way to get a simple list of strings from a Dapper query? I don't want to create an object that contains all my field names by type. My query returns one row of data, sometimes with 2 columns, other times with 5 or 20 or 100, and I just want all the values returned as a single list of strings.
Dapper would make it easy to work with multiple rows, single column, via db.Query<string>(...).
For multiple columns single row, you could try:
var x = db.QuerySingle<(string, string)>(...);
(for two columns; add more items to the tuple for more)
This uses the value-tuple approach to read the data column-wise.
However, this is only good for a handful of columns. If you have hundreds of columns and a single row then I suggest transposing your query (perhaps via PIVOT).
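A short sketch of both shapes, assuming a hypothetical Person table:

using System.Linq;
using Dapper;
using Microsoft.Data.SqlClient;

using var db = new SqlConnection(connectionString);

// Multiple rows, single column: a plain list of strings.
var names = db.Query<string>("SELECT Name FROM Person").ToList();

// Single row, two columns: a value tuple (add items for more columns).
var (first, last) = db.QuerySingle<(string First, string Last)>(
    "SELECT FirstName, LastName FROM Person WHERE Id = @id", new { id = 1 });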
In part of my application I have to get the last ID of a table where a condition is met
For example:
SELECT MAX(ID) FROM TABLE WHERE Num = 2
So I can either grab the whole table and loop through it looking for Num = 2, or I can grab the data from the table where Num = 2. In the latter, I know the last item will be the MAX ID.
Either way, I have to do this around 50 times. So would it be more efficient to grab all the data and loop through it looking for each specific condition,
or would it be better to grab the data several times based on the condition, where I know the last item in the list will be the max ID?
I have 6 conditions I will have to base the queries on.
I'm just wondering which is more efficient: looping through a list of around 3,500 items several times, or hitting the database several times where I can already have the data broken down the way I need it.
I can speak for SQL Server: if you create a stored procedure where Num is a parameter that you pass, you will get the best performance, because the engine can reuse the stored procedure's optimized execution plan. Of course, an index on that field is mandatory.
Let the database do this work, it's what it is designed to do.
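A minimal sketch of that idea from the C# side (the procedure name dbo.GetMaxIdForNum is illustrative):

using System.Data;
using Microsoft.Data.SqlClient;

// Calls a procedure that runs: SELECT MAX(ID) FROM MyTable WHERE Num = @Num
static int? GetMaxId(SqlConnection conn, int num)
{
    using var cmd = new SqlCommand("dbo.GetMaxIdForNum", conn)
    {
        CommandType = CommandType.StoredProcedure
    };
    cmd.Parameters.AddWithValue("@Num", num);

    object result = cmd.ExecuteScalar();
    return result is null or DBNull ? null : (int)result;   // NULL when no rows match
}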
Does this table have a high insert frequency? Does it have a high update frequency, specifically on the column that you're applying the MAX function to? If the answer is no, you might consider adding an IS_MAX BIT column and setting it using an insert trigger. That way, the row you want is essentially cached, and it's trivial to look up.
I really need an expert's help to answer my query.
Here is the scenario:
I'm using a SQL SELECT query to retrieve a million records.
I need to perform sorting and grouping on the resulting records, which I'm storing in a DataTable (in one execution)
and looping through for grouping and sorting.
I know this is childish and not the right way to process it.
How can I manage the million records effectively and apply the grouping and sorting to them?
I really need help here. I've heard of executing the SELECT query batch-wise, but how do I implement the grouping and sorting when I don't have the entire data in hand?
I cannot go for SQL ORDER BY and GROUP BY directly; that's against my requirement.
Here is what I'm doing right now:
I have the following objects, i.e. the column names for grouping and sorting:
List<Group> groupList;
List<Sort> sortList;
DataTable reportData; // holds the entire result set from the DB
I'm looping through reportData row by row and comparing the current and previous rows for the custom grouping and sorting. I'd like to know how the same can be done with batch-wise execution, or whether there is an alternative solution.
I need to perform sorting and grouping on the resulting records, which
I'm storing in a DataTable (in one execution) and looping through
for grouping and sorting.
What for?
Seriously.
Do not pull the data and then try playing smart with a stupid object model behind it (and DataSets are not particularly smart, sorry).
Group and sort in your SELECT statement, pull the data already grouped and sorted, and be done with it.
A million records was a small amount of data for SQL Server when the original version was released (4.2 it was, a port of Sybase SQL Server) some 17 years ago. These days it likely fits into the processor's third-level cache and is nothing a proper SQL Server even realizes it has just processed.
SQL is particularly good at doing projections, and ever since they introduced MARS you can even run multiple queries over one connection, which comes in handy here.
So, go back, throw away the DataSet and the "I'll try to program a sort algorithm" approach, and create proper SQL statements to pull the data as you need it.
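For illustration, a sketch of letting the server group and sort while the client merely streams the result with a DataReader instead of buffering a DataTable (table and column names are made up):

using System;
using Microsoft.Data.SqlClient;

using var conn = new SqlConnection(connectionString);
conn.Open();

// The server does the grouping and sorting; the client just reads.
using var cmd = new SqlCommand(
    @"SELECT DocumentTypeID, COUNT(*) AS DocCount
      FROM Documents
      GROUP BY DocumentTypeID
      ORDER BY DocumentTypeID", conn);

using var reader = cmd.ExecuteReader();
while (reader.Read())
    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetInt32(1)} documents");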
Sounds like you should implement partition pruning. Partitioning will separate the content the way you are requesting, in order to have faster queries.
If I understood correctly, in your case I would create a temporary database table with the structure I want, specifically to cover my grouping.
Then I would select the records from the main tables and insert them into the temporary one, applying all modifications including grouping.
A specific index on how you want them sorted should also be applied.
After that, just select from this table, do what you have to do, and finally, if the data is not needed any more, delete the temporary table.
I would choose the above solution because a million records in memory smells like trouble to me...
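A rough sketch of that flow as a single SQL batch sent from C# (all names are illustrative; a #temp table is dropped automatically when the connection closes):

using Microsoft.Data.SqlClient;

const string sql = @"
    SELECT DocumentTypeID, DocumentName, CreatedOnDate
    INTO #GroupedDocs
    FROM Documents;

    CREATE INDEX IX_GroupedDocs
        ON #GroupedDocs (DocumentTypeID, DocumentName);

    SELECT * FROM #GroupedDocs
    ORDER BY DocumentTypeID, DocumentName;";

using var conn = new SqlConnection(connectionString);
conn.Open();
using var cmd = new SqlCommand(sql, conn);
using var reader = cmd.ExecuteReader();
// read the pre-grouped, pre-sorted rows here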
For example:
1. Let's assume that you would like to group them by their DocumentTypeID:
var groupByType = reportData.GroupBy(g => g.DocumentTypeID);
2. Sorting Alphabetically
var sortAlphabetically = reportData.OrderBy(g => g.DocumentName);
3. Grouping and Sorting
var groupAndSort = reportData.GroupBy(g => g.DocumentTypeID)
                             .OrderBy(g => g.DocumentName);
4. Sort and Group
var sortAndGroup = reportData.OrderBy(g => g.DocumentName)
                             .GroupBy(g => g.DocumentTypeID);
5. Multiple Grouping and sorting
var multipleGroupAndSort = reportData.GroupBy(g => g.DocumentTypeID)
                                     .GroupBy(g => g.CreatedOnDate.Month)
                                     .OrderBy(g => g.DocumentName);
so on and so forth...
But I would still discourage bringing a million rows into the application; it will cost memory. There are, of course, ways to manage it through stored procedures, etc.
I have an object structure that mimics the properties of an Excel table. So I have a table object containing properties such as the title, a header row object, and body row objects. Within the header row and each body row object, I have a cell object containing info on each cell per row. I am looking for a more efficient way to store this table structure, since in one of my uses for this object I am printing its structure to the screen. Currently, I am doing an O(n^2) traversal, printing each cell of each row:
foreach (var row in Table.Rows) {
    foreach (var cell in row.Cells) {
        Console.WriteLine(cell.ToString());
    }
}
Is there a more efficient way of storing this structure to avoid the n^2? I ask because this printing functionality exists within another n^2 loop. Basically, I have a list of table titles and a list of tables. I need to find the tables whose titles are in the title list, and then for each of those tables print their rows and the cells in each row. Can any part of this operation be optimized by using a different data structure for storage, perhaps? I'm not sure exactly how they work, but I have heard of hashing and dictionaries.
Thanks
Since you are looking for tables with specific titles, you could use a dictionary to store the tables by title:
Dictionary<string,Table> tablesByTitle = new Dictionary<string,Table>();
tablesByTitle.Add(table.Title, table);
...
table = tablesByTitle["SomeTableTitle"];
This would make finding a table an O(1) operation. Finding n tables would be an O(n) operation.
Printing the tables then of course depends on the number of rows and columns; there is nothing that can change that.
UPDATE:
string tablesFromGuiElement = "Employees;Companies;Addresses";
string[] selectedTables = tablesFromGuiElement.Split(';');
foreach (string title in selectedTables) {
    Table tbl = tablesByTitle[title];
    PrintTable(tbl);
}
There isn't anything more efficient than an N^2 operation for outputting an NxN matrix of values. Worst-case, you will always be doing this.
Now, if instead of storing the values in a multidimensional collection that defines the graphical relationship of rows and columns, you put them in a one-dimensional collection and include the row-column information with each cell, then you only need to iterate through the cells that have values. Worst case is still N^2 for a table of N rows and N columns that is fully populated (the one-dimensional collection, though linear to enumerate, will have N^2 items), but the best case, where only one cell (or none) is populated, approaches constant time.
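A tiny sketch of that sparse layout (names are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

// Store only populated cells, keyed by their (row, column) position.
var cells = new Dictionary<(int Row, int Col), string>
{
    [(0, 0)] = "Title",
    [(3, 2)] = "42"
};

// Enumeration touches only populated cells, not every (row, col) pair.
foreach (var ((row, col), value) in cells.OrderBy(c => c.Key))
    Console.WriteLine($"[{row},{col}] {value}");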
This answer applies to the printing-the-table part, but the question was extended.
For the getting-the-table part, see the other answer.
No, there is not.
Unless perhaps your values follow some predictable distribution, in which case you could use a function of x and y and store no data at all, or maybe just a seed and a function.
You could cache the print output in a string or StringBuilder if you need it multiple times.
If there is enough data, I guess you might apply some compression algorithm, but I wouldn't say that was simpler or more efficient.
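For instance, a minimal version of the StringBuilder caching idea, assuming the Table/Row/Cell shapes from the question:

using System;
using System.Text;

// Render once into a StringBuilder, then reuse the cached string.
var sb = new StringBuilder();
foreach (var row in table.Rows)
    foreach (var cell in row.Cells)
        sb.AppendLine(cell.ToString());

string cached = sb.ToString();
Console.Write(cached);   // reprint as often as needed without re-walking the table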
I have a page with 26 sections - one for each letter of the alphabet. I'm retrieving a list of manufacturers from the database and, for each one, creating a link using a different field in the database. So currently, I leave the connection open, then do a new SELECT for each letter, WHERE the Name is LIKE that letter. It's very slow, though.
What's a better way to do this?
TIA
Since you are going to fetch them all anyway, you might find it faster to fetch them in one go and split them into letter-groups in the code.
Looking at it from the other end, why do you need to fetch all the lists just to build a set of links? Shouldn't you fetch a single letter when its link is clicked?
It sounds like you are doing up to 26 queries, which will never be fast. Often a single db query can take at least 40 ms, due to network latency, establishing connection, etc. So, doing this 26 times means that it will take around 40 x 26 ms, or more than one second. Of course, it can take much longer depending on your schema, data set, hardware, etc., but this is a rule of thumb that gives you a rough idea of the impact of queries on overall page render time.
One way I deal with this kind of situation is to use a DataTable. Fetch all the records into the DataTable, and then you can iterate through the alphabet, and use the Select method to filter.
DataTable myData = GetMyData();
foreach (string letter in lettersOfTheAlphabet)
{
    DataRow[] rows = myData.Select(String.Format("Name LIKE '{0}%'", letter));
    // create your links here from the filtered rows
}
Depending on your model layer you may wish to filter in a different way, but this is the basic idea that should improve the performance a lot.
Assuming you are querying to determine which letters are used, so that you know which links to render, you could actually just query for the letters themselves, like this:
select distinct substring(ManufacturerName, 1, 1) as FirstCharacter
from MyTable
order by 1
get one result set from one query and split that up. There is quite a lot of overhead in going out to the database 26 times to do basically the same work!
You could probably do it smarter with a stored procedure. Let the SP return all the information you need in one call, and suddenly you only have one database interaction instead of 26...
Bring back all the items in one set (dataset, etc.), either through a stored procedure or a query, including the field left(col1, 1), and sort by that field:
select left(col1,1) as LetterGroup, col1, url_column from table1 order by left(col1,1)
Then loop through the whole result set, changing sections when the letter changes.
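In C#, the section-change logic over that single result set might look like this (the reader columns follow the query above):

using System;

string current = null;
while (reader.Read())
{
    string letter = reader.GetString(0);        // LetterGroup
    if (letter != current)
    {
        current = letter;
        Console.WriteLine($"== {letter} ==");   // start a new section
    }
    Console.WriteLine(reader.GetString(1));     // col1 (the name)
}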
The first letter of the name is a poor discriminator (sorry). You do not actually need to split the names (you could just ask for WHERE name LIKE 'a%'), but whatever you run will give you on average only a 1/26 split of the names. Not extremely efficient.
What do you mean by "creating a link - using a different field in the Database"? That sounds like a bad design to me.
There are a couple of ways you can do this: 1) create a view in your DB that has all the manufacturers and their website links, and then continue to hit the view for each letter; or 2) select all the manufacturers once, store them in a .NET DataSet, and then use that DataSet to populate your links.
This seems dirty to me, but you could create a first-letter CHAR column and a trigger to populate it. Have the first letter of the manufacturer name stored in that column and index it. Then SELECT * FROM table WHERE FirstLetter = 'A'.
Or create a lookup table with rows A-Z and set up a foreign key in the manufacturer table. Again, you would probably need a trigger to keep this information up to date. Then you could inner-join the lookup table to the manufacturer table.
Then instead of putting 26 datasets in the page, have a list of links (A-Z) which select and show each dataset one at a time.
If I read you right, you're making a query for every manufacturer to get the "different field" you need to construct the link. If so, that's your problem, not the 26 alphabetic queries (though that's no help).
In a case like that, the faster way is this one query:
SELECT manufacturer_name, manufacturer_id, different_field
FROM manufacturers m
INNER JOIN different_field_table d
ON m.manufacturer_id = d.manufacturer_id
ORDER BY manufacturer_name
In your server code, loop through the records as usual. If you want, emit a heading when the first letter of the manufacturer_name changes.
For additional speed:
Put that in a stored procedure.
Index different_field_table on manufacturer_id.