I'm working on a Silverlight project trying to access a database using LINQ To DataSet and then sending data over to Silverlight via .ASMX web service.
I've defined my DataSet using the Server Explorer tool (dragging and dropping all the different tables that I'm interested in). The DataSet is able to access the server and database with no issues.
Below is code from one of my Web Methods:
public List<ClassSpecification> getSpecifications()
{
DataSet2TableAdapters.SpecificationTableAdapter Sta = new DataSet2TableAdapters.SpecificationTableAdapter();
return (from Spec in Sta.GetData().AsEnumerable()
select new ClassSpecification()
{
Specification = Spec.Field<String>("Specification"),
SpecificationType = Spec.Field<string>("SpecificationType"),
StatusChange = Spec.Field<DateTime>("StatusChange"),
Spec = Spec.Field<int>("Spec")
}).ToList<ClassSpecification>();
}
I created a "ClassSpecification" data class which is going to contain my data and it has all the table fields as properties.
My question is: is there a quicker way of doing the assignment than what is shown here? There are actually about 10 more fields, and I would imagine that, since my DataSet knows my table definition, there would be a quicker way than going field by field. I tried just "select new ClassSpecification()).ToList()", but that did not work.
Any help would be greatly appreciated.
First, I'll recommend testing out different possibilities using LINQPad, which is free and awesome.
I can't quite remember what you can do from the table adapter, but you should be able to use the DataSet to get at the data you want, e.g.
string spec = myDataSet.MyTable[0]   // the typed table's indexer returns a typed row; Rows[0] would be an untyped DataRow
    .Specification;
So you might be able to do
foreach (var row in myDataSet.MyTable) {   // iterating the typed table yields typed rows
    string spec = row.Specification;
    ...
}
Or
return (from row in myDataSet.Specification
select new ClassSpecification()
{
Specification = row.Specification,
SpecificationType = row.SpecificationType,
StatusChange = row.StatusChange,
Spec = row.Spec,
}).ToList<ClassSpecification>();
Or even
return myDataSet.Specification.Cast<ClassSpecification>()
Not sure if the last one will work, but you can see that there are several ways to get what you want. Also, in my tests the row is strongly typed, so you shouldn't need to create a new class in which to put the data - you should just be able to use the existing "SpecificationRow" class. (In fact, I believe that this is the anemic domain model anti-pattern.)
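For the "10 more fields" part of the question, one shortcut (my own suggestion, not something the typed dataset generates for you) is a small reflection-based mapper that copies every column whose name matches a writable property. It assumes column names and property types line up exactly:

```csharp
using System;
using System.Data;
using System.Linq;

public static class RowMapper
{
    // Copies each DataRow column into the property of the same name.
    // Assumes column names match property names and types exactly.
    public static T Map<T>(DataRow row) where T : new()
    {
        var item = new T();
        foreach (var prop in typeof(T).GetProperties().Where(p => p.CanWrite))
        {
            if (row.Table.Columns.Contains(prop.Name) && row[prop.Name] != DBNull.Value)
                prop.SetValue(item, row[prop.Name], null);
        }
        return item;
    }
}
```

With that in place the web method body shrinks to something like `return Sta.GetData().AsEnumerable().Select(RowMapper.Map<ClassSpecification>).ToList();`, at the cost of a little reflection overhead per row.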
So are you using a dataset for lack of a Linq provider to the database? That is the only reason I would consider this move. For instance, with Linq to Sql you can drag the table out and drop it. Then you have instant objects with the shape you want for it.
Related
What is a data structure (like list, array, etc...) that could replace a database like SQL?
I would like it to have as many database-like features as possible, like query select and so on...
If there is none, please suggest how such a structure should look.
edit:
DataTable is good enough I think, thanks for the answers
The simplest such data structure would be a record in F# or a class in C#. This would represent your table; a collection of tables would represent a database. You can query it with query expressions (aka LINQ) and serialize it as pointed out above. You can also use a DataTable. If you are just looking for an in-memory representation of a database, you could get that with SQLite.
If you just want to access a database you can do it with the SQLProvider in F#, or Dapper in both F# and C#.
Here is an example with a list of records and a query expression:
open System
type Row = {
Id: bigint
Name: string
Address: string
}
let table = [
{Id = 100I; Name = "Joe"; Address = "NYC"}
{Id = 101I; Name = "Jane"; Address = "KC"}
{Id = 102I; Name = "Jim"; Address = "LA"}
]
let notInNYC =
query {
for user in table do
where (user.Address <> "NYC")
select user.Name
}
|> Seq.toList
//val notInNYC : string list = ["Jane"; "Jim"]
If you are looking to use an actual SQL database, then
(per MSDN):
private static void CreateCommand(string queryString, string connectionString)
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
SqlCommand command = new SqlCommand(queryString, connection);
command.Connection.Open();
command.ExecuteNonQuery();
}
}
If you are looking to not use an actual SQL database and try to save data, relations, etc. directly in your code (not sure why you'd want to do that), you could create your own custom classes for it. You'd want to include some form of table, as well as a search method that could look through the instances of table, etc. There are so many functionalities that you'd have to implement though, so this would be difficult to do if you are trying to replicate all of the functionality of a real SQL db.
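As a sketch of that last idea (all names here are made up, and this covers only a sliver of what a real database does), a minimal generic "table" with a search method could look like:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A tiny in-memory "table": rows held in a list, searched with a predicate.
public class Table<T>
{
    private readonly List<T> _rows = new List<T>();

    public void Insert(T row) => _rows.Add(row);

    // The "query select" part: filter rows like a WHERE clause.
    public IEnumerable<T> Select(Func<T, bool> predicate) => _rows.Where(predicate);

    public int Count => _rows.Count;
}
```

A `Table<Person>` plus LINQ then gives you SELECT-like queries, but joins, indexes, constraints and transactions would all have to be built by hand, which is why the other answers point to DataTable or SQLite instead.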
Assuming you already know about Entity Framework as an ORM and a gateway to access DBs, here are some alternatives you'd want to have in mind.
One straightforward and quick solution for small amounts of data is serialization.
You can choose from:
Json
XML
Binary
Some others.
Serialization lets you store and retrieve an object graph with no fuss of setting up databases and connections, but it doesn't give you sophisticated search and update capabilities.
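For instance, with JSON (a sketch using System.Text.Json; Json.NET would look much the same, and the class names here are invented):

```csharp
using System.Collections.Generic;
using System.Text.Json;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class JsonStore
{
    // Store: turn the object graph into a string (write it to a file to persist).
    public static string Save(List<Customer> customers) =>
        JsonSerializer.Serialize(customers);

    // Retrieve: rebuild the object graph from the string.
    public static List<Customer> Load(string json) =>
        JsonSerializer.Deserialize<List<Customer>>(json);
}
```

Queries then happen in memory with LINQ over the deserialized list, which is fine for small data but exactly the "no sophisticated search" limitation mentioned above.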
Another thing you might want to explore is NoSQL databases.
Check out LiteDB to get you started with the concept.
Well, let's say I've got two DataTables from the beginning.
The first one (the source) contains the data from a database.
The second one also contains data from a database, but these values have to be updated into the first one.
Unfortunately they don't have the same structure.
The source DataTable has some additional columns which the second one does not.
For example:
First DT: ID | Name | Company | Age
Second DT: Name | Company | Age
I want the FIRST DataTable to be updated with the values from the second DataTable IF THERE ARE SOME DIFFERENCES (and only the differences).
Any ideas on how to work that out? Any suggestions about performance, even if using very big databases?
If you are working with a big amount of data, I would suggest doing things as close to the DB as possible (if possible, within a stored procedure).
If sticking to .NET is mandatory, these are the options I would consider given the description of your scenario you provided.
First I would choose how to load the data (the order in which I would consider them):
Generate Entities (LINQ to SQL).
Use F# Type providers
Use ADO directly
After this, I would either:
use .Select and .Except on the IQueryable sources, or
do something similar to http://canlu.blogspot.ro/2009/05/how-to-compare-two-datatables-in-adonet.html, if by some chance I was using ADO.NET
It is rather hard to give a specific and exact answer if you do not provide more information on the type of data, amount, hardware, database type.
Note: whichever solution you choose, keep in mind that it is hard to compare things with different structures, so an extra step that adds empty columns to the table that is missing them is required.
This code is just for reference; I did not have time to test it, so it might require a bit of tweaking. Try something like:
var a = new DataTable();
a.Columns.Add("ID");
a.Columns.Add("Name");
a.Columns.Add("Company");
a.Columns.Add("Age");
var b = new DataTable();
b.Columns.Add("Name");
b.Columns.Add("Company");
b.Columns.Add("Age");
var destination = a.AsEnumerable();
var localValues = b.AsEnumerable();
var diff = destination.Join(localValues,
        dstRow => dstRow["Name"], srcRow => srcRow["Name"],
        (dstRow, srcRow) => new { Destination = dstRow, Source = srcRow })
    // compare values with Equals rather than != : the DataRow indexer
    // returns object, so != would compare references, not values
    .Where(combinedView =>
        !Equals(combinedView.Destination["Age"], combinedView.Source["Age"]) ||
        !Equals(combinedView.Destination["Company"], combinedView.Source["Company"]));
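A possible follow-up step (my addition, not part of the original answer): once `diff` holds the joined pairs, the differing values can be copied back into the first table:

```csharp
// Materialize first, since we are about to mutate rows the query reads.
foreach (var pair in diff.ToList())
{
    pair.Destination["Age"] = pair.Source["Age"];
    pair.Destination["Company"] = pair.Source["Company"];
}
// The updated rows now have RowState == Modified, ready for adapter.Update,
// or call a.AcceptChanges() if no database write-back is needed.
```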
Also, I would really move to a proper DB, and maybe improve the data model.
With "table names" I mean just the names of normal (not queries or anything like that), plain old tables. This is because I'm working on a project that currently connects to the Jet engine and, among other features, shows a list of tables that the user can double-click to see a specific table's contents. But now I want the user to be able to change the engine from a list of installed engines.
For my program to work with other engines it will need to get the table names in a way that works for every SQL engine (or at least most of them). I also need to be able to get all the column names for a specific table, and to build a "CREATE TABLE" query in a way that will work with every possible engine (since the user can create tables from a wizard, and my program generates the query).
I'm actually very doubtful that this is possible, but as far as I know, Visual Studio can create tables from a wizard for different database engines. How do they manage to do this? Will I have to have a different "CREATE TABLE" query for every possible SQL engine?
I'm wondering if ADO can help with this as it seems to have everything somehow standardized.
No, unfortunately there is no general way to do these things as far as I know. All DB engines have slightly different dialects of DDL and SQL, support different sets of data types, and have different ways of managing their metadata, etc. If you keep to the absolute lowest common denominator of features, I guess you could rely on standard SQL/DDL, but that will be very limited.
Usually this is solved by creating an abstract data layer with several different implementations which handles the differences.
ADO only solves part of the problem, as it offers a common interface for sending queries to a database, but the SQL in the queries has to be supplied by the client.
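Sketched in C# (every name here is invented for illustration), such an abstract layer might pin the engine-specific bits behind one interface, with one implementation per engine:

```csharp
// One interface for the engine-specific parts; one implementation per engine.
public interface IDbDialect
{
    string GetTableNamesQuery();              // metadata query differs per engine
    string AutoIncrementColumn(string name);  // auto-increment syntax differs per engine
}

public class SqlServerDialect : IDbDialect
{
    public string GetTableNamesQuery() =>
        "SELECT table_name FROM information_schema.tables";

    public string AutoIncrementColumn(string name) =>
        name + " INT IDENTITY(1,1)";
}

public class MySqlDialect : IDbDialect
{
    public string GetTableNamesQuery() =>
        "SELECT table_name FROM information_schema.tables";

    public string AutoIncrementColumn(string name) =>
        name + " INT AUTO_INCREMENT";
}
```

The wizard code then builds its CREATE TABLE string against `IDbDialect` and never mentions a concrete engine.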
If you want to support any back-end, there will always be one that does not work, but nearly every back-end will allow:
select table_name from information_schema.tables
Your basic CREATE TABLE commands, with keys and indexes, are easily coded to be compatible with nearly every back-end, except for auto-incremented integer keys, which have a different syntax on every back-end.
So the answer is "mostly yes, probably more than you think, but not 100%". Because the quirks are small, it is possible to write some general code with some tweaks for each particular back-end.
This should do it for you in MSSQL. I imagine it would be very similar for other SQL implementations.
SELECT DISTINCT Name FROM sysobjects WHERE xtype='U'
You can use the ADO.NET GetSchema method to get a DataTable with almost all schema data.
This example uses a SqlConnection, but GetSchema is available on any DbConnection (OdbcConnection, OleDbConnection, and so on).
using System;
using System.Data;
using System.Data.SqlClient;
class Program
{
static void Main()
{
string connectionString = GetConnectionString();
using (SqlConnection connection = new SqlConnection(connectionString))
{
// Connect to the database then retrieve the schema information.
connection.Open();
DataTable table = connection.GetSchema("Tables");
// Display the contents of the table.
DisplayData(table);
Console.WriteLine("Press any key to continue.");
Console.ReadKey();
}
}
private static string GetConnectionString()
{
// To avoid storing the connection string in your code,
// you can retrieve it from a configuration file.
return "Data Source=(local);Database=AdventureWorks;" +
"Integrated Security=true;";
}
private static void DisplayData(System.Data.DataTable table)
{
foreach (System.Data.DataRow row in table.Rows)
{
foreach (System.Data.DataColumn col in table.Columns)
{
Console.WriteLine("{0} = {1}", col.ColumnName, row[col]);
}
Console.WriteLine("============================");
}
}
}
In Visual Studio it is implemented using the Data Designer Extensibility (DDEX) where a specific provider should expose GetSchema method to help retrieving metadata. You can get some ideas here.
I'm using table adapters and datasets in .NET (version 2.0).
I have two tables like this:
Table1
------
...
TypeId
Table2
------
Id
Name
where Table1.TypeId is linked to Table2.Id.
I've strongly generated these types by using the wizard so now I can do things like this:
Table1Adapter adapter = new Table1Adapter();
Table1DataSet data = adapter.GetData();
foreach(Table1Row row in data) { ... }
// Can now iterate through and display the data
This all works fine. But now I also want the data from Table2. I have noticed that row has a generated property Table2Row which seems ideal, but it is null. How do I populate this dataset properly?
Each DataTable in a typed dataset has its own TableAdapter, so you'll have to repeat the same step for each DataTable in the dataset.
There is no API that lets you auto-fill the entire typed dataset, and no such code is generated within the typed dataset to support it. It is also difficult to do this generically because TableAdapters do not share a common base class that would let you do it.
If you really need to, you could maintain a collection of DataTable type names and TableAdapter type names and iterate over the collection to fill the dataset.
So I recommend filling the dataset for each table in a "hard-coded" manner.
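Using the names from the question (the exact generated adapter and member names may differ in your project), the hard-coded fill looks like this sketch:

```csharp
// Fill each DataTable with its own adapter; once both tables are loaded,
// the generated relation property (row.Table2Row) is populated.
var data = new Table1DataSet();           // the typed dataset from the designer
new Table1Adapter().Fill(data.Table1);    // one Fill call per table
new Table2Adapter().Fill(data.Table2);

foreach (Table1DataSet.Table1Row row in data.Table1.Rows)
{
    var type = row.Table2Row;             // navigates the Table1.TypeId -> Table2.Id relation
    // ... use row and type
}
```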
Actually there is a Microsoft data provider called "MSDataShape" which provides this functionality. It was flagged for deprecation last time I checked, and I don't know what its current status and future plans are, or what replaces it.
Here's an example of a single "SQL command" that will return, I believe, a DataSet with two nicely related DataTables:
SHAPE {select * from customers}
APPEND ({select * from orders} AS rsOrders
RELATE customerid TO customerid)
Potential replacement to look at is well-crafted FOR XML query, but I have not played with that.
EDIT:
I think this SO answer does exactly that using XML.
I'm currently doing the following to use typed datasets in vs2008:
Right click on "app_code" add new dataset, name it tableDS.
Open tableDS, right click, add "table adapter"
In the wizard, choose a predefined connection string and "use SQL statements"
select * from tablename and next + next to finish. (I generate one table adapter for each table in my DB)
In my code I do the following to get a row of data when I only need one:
cpcDS.tbl_cpcRow tr = (cpcDS.tbl_cpcRow)(new cpcDSTableAdapters.tbl_cpcTableAdapter()).GetData().Select("cpcID = " + cpcID)[0];
I believe this will fetch the entire table from the database and do the filtering in .NET (i.e. not optimal). Is there any way I can get the TableAdapter to filter the result set on the database instead (i.e. what I want is to send select * from tbl_cpc where cpcID = 1 to the database)?
And as a side note, I think this is a fairly OK design pattern for getting data from a database in VS2008. It's fairly easy to code with, read and maintain. But I would like to know if there are any better design patterns out there. I use the datasets for read/update/insert and delete.
A bit of a shift, but you ask about different patterns - how about LINQ? Since you are using VS2008, it is possible (although not guaranteed) that you might also be able to use .NET 3.5.
A LINQ-to-SQL data-context provides much more managed access to data (filtered, etc). Is this an option? I'm not sure I'd go "Entity Framework" at the moment, though (see here).
Edit per request:
to get a row from the data-context, you simply need to specify the "predicate" - in this case, a primary key match:
int id = ... // the primary key we want to look for
using(var ctx = new MydataContext()) {
SomeType record = ctx.SomeTable.Single(x => x.SomeColumn == id);
//... etc
// ctx.SubmitChanges(); // to commit any updates
}
The use of Single above is deliberate - this particular usage [Single(predicate)] allows the data-context to make full use of local in-memory data - i.e. if the predicate is just on the primary key columns, it might not have to touch the database at all if the data-context has already seen that record.
However, LINQ is very flexible; you can also use "query syntax" - for example, a slightly different (list) query:
var myOrders = from row in ctx.Orders
where row.CustomerID == id && row.IsActive
orderby row.OrderDate
select row;
etc
There are two potential problems with using typed datasets.
One is testability. It's fairly hard work to set up the objects you want to use in a unit test when using typed datasets.
The other is maintainability. Using typed datasets is typically a symptom of a deeper problem; I'm guessing that all your business rules live outside the datasets, and a fair few of them take datasets as input and output some aggregated values based on them. This leads to business logic leaking all over the place, and though it will all be hunky-dory for the first 6 months, it will start to bite you after a while. Such a use of DataSets is fundamentally non-object-oriented.
That being said, it's perfectly possible to have a sensible architecture using datasets, but it doesn't come naturally. An ORM will be harder to set up initially, but will lend itself nicely to writing maintainable and testable code, so you don't have to look back on the mess you made 6 months from now.
You can add a query with a where clause to the tableadapter for the table you're interested in.
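For example, if a query like SELECT * FROM tbl_cpc WHERE cpcID = @cpcID is added in the designer and named, say, GetDataByCpcID (the method name is whatever you choose in the wizard), the filtering then happens on the server:

```csharp
// The WHERE clause now runs in the database, not in .NET.
cpcDS.tbl_cpcRow tr =
    new cpcDSTableAdapters.tbl_cpcTableAdapter().GetDataByCpcID(cpcID)[0];
```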
LINQ is nice, but it's really just shortcut syntax for what the OP is already doing.
Typed Datasets make perfect sense unless your data model is very complex. Then writing your own ORM would be the best choice. I'm a little confused as to why Andreas thinks typed datasets are hard to maintain. The only annoying thing about them is that the insert, update, and delete commands are removed whenever the select command is changed.
Also, the speed advantage of creating a typed dataset versus your own ORM lets you focus on the app itself and not the data access code.