Retrieving scalar data using an .xsd dataset object - C#

Can someone suggest the best way to retrieve a scalar value when the site uses .xsd files for the data sets? I have such a site, and before I commit to an insert I need to check for duplicates.
Back in the day one would just instantiate a new connection and command object and run the query through the BLL/DAL - an easy job. With the prepackaged .xsd file that Visual Studio creates for you, I have no idea how to do it.
Thanks,
Risho

First, I would recommend adding a unique index in your database to ensure that it's impossible to create duplicates.
To answer your question: you can add queries to the automatically created TableAdapters:
How to: Create TableAdapter queries
From MSDN
TableAdapter with multiple queries
Unlike standard data adapters, TableAdapters can contain multiple
queries to fill their associated data tables. You can define as many
queries for a TableAdapter as your application requires, as long as
each query returns data that conforms to the same schema as its
associated data table. This enables loading of data that satisfies
differing criteria. For example, if your application contains a table
of customers, you can create a query that fills the table with every
customer whose name begins with a certain letter, and another query
that fills the table with all customers located in the same state. To
fill a Customers table with customers in a given state you can create
a FillByState query that takes a parameter for the state value: SELECT
* FROM Customers WHERE State = @State. You execute the query by calling the FillByState method and passing in the parameter value like
this: CustomerTableAdapter.FillByState("WA").
In addition to queries that return data of the same schema as the
TableAdapter's data table, you can add queries that return scalar
(single) values. For example, creating a query that returns a count of
customers (SELECT Count(*) From Customers) is valid for a
CustomersTableAdapter even though the data returned does not conform
to the table's schema.
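For the duplicate check in the question, that means adding a scalar query to the generated TableAdapter in the DataSet designer and calling it before the insert. A minimal sketch, assuming a CustomersTableAdapter and a designer-added query named CountByEmail (SELECT COUNT(*) FROM Customers WHERE Email = @Email) - the adapter, method, and column names are only illustrative:
// The designer generates CountByEmail as a method on the TableAdapter.
// Scalar queries come back as object, so convert before comparing.
var adapter = new CustomersTableAdapter();

object result = adapter.CountByEmail("someone@example.com");
int existing = Convert.ToInt32(result);

if (existing == 0)
{
    // no duplicate found - safe to run the insert query
}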

Related

Migrating an Access multi-valued field column to C#

I am attempting to use the Microsoft.ACE.OLEDB.12.0 driver to read data from an Access database and came upon an odd situation: one of the columns in the Access database shows as a comma-delimited list of IDs.
Wells
________
345,456,7
6,387
When I looked at the column definition in Access I thought it would say string, but it does not; it says number. So I guess it is storing an array of integers in a single column?
I'm having a tough time getting a data reader to pick this up.
using
var w = DB_Reader.GetValue(DB_Reader.GetOrdinal("Wells"));
results in the error
The provider could not determine the Object value. For example, the
row was just created, the default for the Object column was not
available, and the consumer had not yet set a new Object value.
Well, at the end of the day, you can think of the multi-value column as, in fact, a child table.
So, if you're looking to migrate a master and child table, then in YOUR database you need a relational set of tables to re-create what Access is doing behind the scenes.
So, let's take a multi-value example and query.
Say we have this SQL query in Access:
SELECT ID, Person_Name, FavorateColors FROM tPerson;
But, "favorite colors" is one of those MV columns. (and I should point out with the HUGE movement towards no-sql databases - they also often work this way also - same for XML or JSON data for that matter. However, be it some XML, JSON or Access mutli-value features? Well, you need that child table if you going to adopt a relational data model to represent this data.
OK, so we run the above query and look at the output. In fact, when I used the lookup wizard, I picked a child table called tblColors.
But how can we explode the above query to dig out the data?
Change the above query to this:
SELECT ID, Person_Name, FavorateColors.Value FROM tPerson
Note how we added ".Value" after the MV column name. Now, when you run the query, you get the SAME result as if you had two tables and did a left join: the parent table rows, as in any relational database, simply repeat for each child table value.
So the PK value and the parent row now repeat for each child MV value.
So you are pretty much free to query as per above - you get what amounts to a left-joined table, and of course the parent record repeats.
So, just like XML, JSON, or in fact any query or table of data with repeating parent and child rows, you are pretty much forced to write code to split out this data, or re-normalize it. This is of course far more common when receiving, say, JSON/XML data, or in fact data from an Excel sheet.
So, you have to process out the child record data, and create a relation for that data.
And thus our question becomes how to import JSON/XML/Excel data that really should have used two relational database tables.
So, assuming we want to process this data? You process it the same way as any data that should have been two related tables in the first place.
It really depends on whether this is a one-time import, or something you have to do all the time.
If it were a one-time deal, then I would use Access and a make-table query based on the above query. You would in fact have to pluck up the PK ID from the child table. In the above there is a child table called colors - we're just missing the "junction" table in between that Access automatically created. The hidden tables are not exposed, so I would simply use a make-table query in Access and then add an FK column that is the PK value from tblColors.
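If the goal is to pull this apart in C# rather than in Access, a minimal sketch along these lines may work, assuming the ACE OLEDB provider and the tPerson / FavorateColors names from the example above (the connection string, file path, and ColorId alias are placeholders):
// Requires System.Data.OleDb. The ".Value" query returns one row per
// person per color, so we can rebuild the parent and "junction" data.
string accessConnectionString =
    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\your.accdb";

var people = new Dictionary<int, string>();            // ID -> Person_Name
var personColors = new Dictionary<int, List<int>>();   // ID -> color IDs

using (var conn = new OleDbConnection(accessConnectionString))
using (var cmd = new OleDbCommand(
    "SELECT ID, Person_Name, FavorateColors.Value AS ColorId FROM tPerson", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            int id = Convert.ToInt32(reader["ID"]);
            people[id] = Convert.ToString(reader["Person_Name"]);

            if (!personColors.ContainsKey(id))
                personColors[id] = new List<int>();

            object color = reader["ColorId"];
            if (color != DBNull.Value)
                personColors[id].Add(Convert.ToInt32(color));
        }
    }
}
// people and personColors can now be written out as two relational tables.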

How to select from all DB tables

I have a database where each table has an ID field, and I want to fetch the tables and field contents in such a way that the serial numbers are unified, without duplicate values.
Using Except seems appropriate in this context.
Is there code that can fetch the tables, either in SQL or in Entity Framework?
Except_Admin_except_List
List<int> tempIdList = answeripE.Select(q => q.ID).ToList();
var quslist = db.Qustion.Where(q => !tempIdList.Contains(q.ID));
Thanks to "daryal", the creator of Get All Except from SQL database using Entity Framework.
I need to do this without addressing each table and querying it separately, and I also want to query the database as a whole in SQL, with something like:
select IDfield
FROM MSDB_Table T
WHERE T.id == MaxBy(T.OrderBy(x => x.id));
something that could replace "WHERE TABLE1.id OR Table2.id" across all the tables and give a result.
All I'm looking for is a way to query one database as a whole and get the results in a list, without specifying individual tables or a composite key. This would help me analyze a set of data converted to other formats - for example, when representing a database as JSON, where there are many such representations across more than one platform and within a single database - and avoid repeating the data. I need a comprehensive query for comparison or investigation, something like the Solver tool in Excel. So far I have not found an answer that shows me the first step - is that because it does not exist, or because it is not possible?
If you want Entity Framework to retrieve all columns except a subset of them, the best way to do that is either via a stored procedure or a view. With a view you can query it using LINQ and add your predicates in code, but in a stored procedure you will have to write it and feed your predicate conditions into it...so it sounds like a view would be better for you.
Old example, but should guide you through the process:
https://www.mssqltips.com/sqlservertip/1990/how-to-use-sql-server-views-with-the-entity-framework/
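A rough sketch of the view-plus-LINQ route, assuming a view named vw_QuestionSummary mapped to an entity in the model - all names here are hypothetical, and the filter mirrors the Except-style check from the question:
// Entity shaped like the view - only the columns the view exposes.
public class QuestionSummary
{
    public int ID { get; set; }
    public string Title { get; set; }
}

// In the DbContext (EF6): modelBuilder.Entity<QuestionSummary>().ToTable("vw_QuestionSummary");
// plus: public DbSet<QuestionSummary> QuestionSummaries { get; set; }

// Query the view and add the predicate in code:
List<int> answeredIds = answeripE.Select(q => q.ID).ToList();

var remaining = db.QuestionSummaries
    .Where(q => !answeredIds.Contains(q.ID))
    .ToList();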

Change returned table name from stored procedure at the SQL side

I have written a single stored procedure that returns 2 tables:
select *
from workers
select *
from orders
I call this stored procedure from my C# application and get a DataSet with two tables, and everything is working fine.
My question is: how can I change the table names on the SQL Server side, so that on the C# side I can access them by name (instead of Tables[0]):
myDataSet.Tables["workers"]...
I tried to look for the answer on Google but couldn't find it. Maybe my search keywords were not sufficient.
You cannot really do anything from the server-side to influence those table names - those names only exist on the client-side, in your ADO.NET code.
What you can do is on the client-side - add table mappings - something like:
SqlDataAdapter dap = new SqlDataAdapter(YourSqlCommandHere);
dap.TableMappings.Add("Table", "workers");
dap.TableMappings.Add("Table1", "orders");
This would "rename" the Table (first result set) to workers and Table1 (second result set) to orders before you actually fill the data. So after the call to
dap.Fill(myDataSet);
you would then have myDataSet.Tables["workers"] and myDataSet.Tables["orders"] available for you to use.
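Put together, the call site might look roughly like this ("GetWorkersAndOrders" and the connection string are placeholders for your actual stored procedure and connection):
// Requires System.Data and System.Data.SqlClient.
var myDataSet = new DataSet();

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("GetWorkersAndOrders", conn) { CommandType = CommandType.StoredProcedure })
using (var dap = new SqlDataAdapter(cmd))
{
    // Map the positional result sets to friendly names before filling.
    dap.TableMappings.Add("Table", "workers");
    dap.TableMappings.Add("Table1", "orders");

    dap.Fill(myDataSet);   // Fill opens and closes the connection itself
}

DataTable workers = myDataSet.Tables["workers"];
DataTable orders = myDataSet.Tables["orders"];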
The TDS protocol documentation (TDS being the protocol used to return results from SQL Server) does not mention a "result set name". So the only way you will ever be able to access the result sets in ADO.NET is by number, as mentioned in your example.

Increasing Performance of a Database Fetch Operation

I am using stored procedures to fetch information from the database. First I fetch all the parent elements and hold them in an array, and then, using the parent ID, I fetch all the related children. Each parent can have 150 children. There are about 100 parent elements. What is the best way to increase the performance of the fetch operation? Currently it takes 13 seconds to retrieve.
Here is the basic algorithm:
while (reader.Read())
{
    Parent p = new Parent();
    // assign properties to the parent from the current row
    p.Children = GetChildrenByParentId(p.Id);   // one extra DB round trip per parent
    parents.Add(p);
}
You should get all that data in one SQL select / stored proc (do some sort of join on the child data) and then populate the parent and child objects. Right now you issue a separate child query for each of the ~100 parents, pulling back around 150 rows each; if you can do this with one request I would expect a dramatic performance effect.
As Brian mentioned in a comment, that is known as RBAR: Row By Agonizing Row :)
Since I like a good acronym, here is more:
https://www.simple-talk.com/sql/t-sql-programming/rbar--row-by-agonizing-row/
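A minimal sketch of that single-query idea, with made-up Parent/Child classes and column names - the point is one round trip and a client-side split on the parent key:
var parents = new Dictionary<int, Parent>();

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"SELECT p.Id, p.Name, c.Id AS ChildId, c.Name AS ChildName
      FROM Parent p
      LEFT JOIN Child c ON c.ParentId = p.Id
      ORDER BY p.Id", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            int parentId = reader.GetInt32(0);
            if (!parents.TryGetValue(parentId, out Parent p))
            {
                p = new Parent { Id = parentId, Name = reader.GetString(1), Children = new List<Child>() };
                parents.Add(parentId, p);
            }

            // LEFT JOIN: parents without children produce NULL child columns.
            if (!reader.IsDBNull(2))
                p.Children.Add(new Child { Id = reader.GetInt32(2), Name = reader.GetString(3) });
        }
    }
}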
The first and most important step is to measure the performance. Is it SQL Server that is the bottleneck, or .NET?
Also, you need to minimize the times you have to go back to the database, so if you can retrieve all of the data you need in a single stored procedure, that would be best.
From your question, it sounds to me like it is SQL Server that is the problem. To test this, run your stored procedure from SQL Query Analyzer and see how long it takes for a known parent ID. I bet you just need some indexes added to your underlying table to make it possible for SQL Server to get the data faster. If possible, look at the execution plan for the stored procedure. You can find a good article about reading execution plans here.
SQL Server 2008 makes this easy: create a user-defined table type and pass the list of parent IDs to it, OR just take the logic you used to get those parent IDs in the first place and join it to the tables that hold the child data.
To create the table type, you make something like this:
CREATE TYPE [dbo].[Int32List]
AS TABLE (
[ID] int NOT NULL
);
GO
And your stored proc goes something like this:
CREATE PROCEDURE [dbo].[MyStoredProc]
@ParentIDTable [dbo].[Int32List] READONLY
AS
--logic goes here
GO
And you call that procedure from your C# code like this:
DataTable ParentIDs = new DataTable();
ParentIDs.Columns.Add("ID", typeof(int));
SqlConnection connection = new SqlConnection(yourConnectionInfo);
SqlCommand command = new SqlCommand("MyStoredProc", connection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("#ParentIDTable", SqlDbType.Structured).Value = ParentIDs;
command.Parameters["#ParentIDTable"].TypeName = "Int32List";
This way is nice, because it's a great way to effectively pass a list of values to SQL Server and treat it like a table. I use table types all over my applications where I want to pass an array of values to a stored proc. Just remember that the column names in the C# DataTable need to match the column names in the table type you created, and the TypeName property needs to match the table type's name.
With this method, you will only make 1 call to the DB, and when you iterate through the results, you should also make sure to include the ParentID in the select list so you can match each child to the proper parent object.
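Continuing the sketch above, the execute-and-match step could look like this; parentList, the Child type, and the ChildID/ParentID column names are assumptions about your objects and the stored procedure's select list:
// Fill the table-valued parameter with the IDs of the parents you already loaded.
foreach (Parent p in parentList)
    ParentIDs.Rows.Add(p.Id);

var childrenByParent = new Dictionary<int, List<Child>>();

connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        int parentId = reader.GetInt32(reader.GetOrdinal("ParentID"));

        if (!childrenByParent.TryGetValue(parentId, out List<Child> kids))
        {
            kids = new List<Child>();
            childrenByParent.Add(parentId, kids);
        }

        kids.Add(new Child { Id = reader.GetInt32(reader.GetOrdinal("ChildID")) });
    }
}
connection.Close();

// Attach each bucket of children to its matching parent.
foreach (Parent p in parentList)
    if (childrenByParent.TryGetValue(p.Id, out List<Child> kids))
        p.Children = kids;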
Here's a great resource to explain table types in more detail: http://www.sommarskog.se/arrays-in-sql-2008.html

Reading custom data from SQL tables

We have an application that allows the user to add custom columns to our tables (maybe not the best idea, but that's how it is).
We are now (re)designing our data access layer (we didn't really have one before), and we're going to use parameterized queries in our data mappers when querying the SQL database (earlier we concatenated the SQL strings and escaped all input).
Now we're trying to determine the best way of handling the custom columns so we can query, create and update these records. The custom attributes are going to be stored in a Dictionary on our "business objects", so I was thinking about doing it like this:
Querying data
Use SELECT * to get all columns, populate our properties, and store the rest (the custom data) in a dictionary on the business object.
Create/Update
Iterate all columns in the table (something like: SELECT COLUMN_NAME FROM information_schema.columns WHERE TABLE_NAME = 'TableName')
Generate a SQL string (with parameterized variable names) by checking which columns exist in both the dictionary and the table, and then add the values from the dictionary as parameters to the SqlCommand
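Roughly, the update step I have in mind would look something like this (the Customer table, Id column, and variable names are just for illustration):
// customValues is the Dictionary<string, object> of custom attributes on the business object;
// tableColumns is the set of column names fetched from information_schema above.
var setClauses = new List<string>();
var update = new SqlCommand { Connection = connection };

foreach (KeyValuePair<string, object> kv in customValues)
{
    if (!tableColumns.Contains(kv.Key))
        continue;                                   // skip keys that have no matching column

    string paramName = "@p" + setClauses.Count;
    setClauses.Add("[" + kv.Key + "] = " + paramName);
    update.Parameters.AddWithValue(paramName, kv.Value ?? DBNull.Value);
}

update.CommandText =
    "UPDATE [Customer] SET " + string.Join(", ", setClauses) + " WHERE [Id] = @id";
update.Parameters.AddWithValue("@id", businessObject.Id);
update.ExecuteNonQuery();
The same loop would work for INSERT by collecting column names and parameter placeholders separately.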
Or are there any better approaches while still using parameterized queries?
If you are adding ad-hoc columns, ORM gets very tricky. In some ways, dropping back to DataTable/DataAdapter (of which I am not a fan) may be an option. Personally, I would look first at other options for storing the custom data:
an xml column
a set of key/value pairs against each record (in a second table) - see the sketch below
some other delimited format in a [n]varchar(max)
Do you really have to add columns?
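For the key/value option, a minimal sketch might be a second table such as CustomAttribute (EntityId, Name, Value) that is loaded straight into the same dictionary; the table, column, and variable names here are hypothetical:
// Load the custom attributes for one record into the business object's dictionary.
var customValues = new Dictionary<string, object>();

using (var cmd = new SqlCommand(
    "SELECT [Name], [Value] FROM CustomAttribute WHERE EntityId = @id", connection))
{
    cmd.Parameters.AddWithValue("@id", businessObject.Id);
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            customValues[reader.GetString(0)] = reader.GetValue(1);
    }
}
Writes then become plain INSERT/UPDATE/DELETE statements against CustomAttribute instead of ALTER TABLE calls.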
