I'm a little behind the times on data access and need to be pointed in the right direction. I currently have just a single SQL Server database table with about 35 columns in it. I'm building a WCF web service to provide access to it and am trying to figure out the quickest way to link up the C#-based web service to the database.
I've been looking at Entity Framework, but it seems pretty heavy, and everything I've found on it so far seems to assume you already know something about it, so I didn't want to get too far into it if it's the wrong path. I'm not fully sold on the idea of generating SQL in the application. I already have a DataContract class with properties for each column in the table; I'm just looking for an automatic way to map columns to properties and properties back to columns/sproc parameters. I already wrote some code that uses reflection to map data from a different source to this DataContract (matching on property name, with a dictionary of additional mappings as a backup), so it's not that much work to do the same here, but I wanted to see what else is available. What I want to avoid is writing out each PropertyName = ColumnName.Value. Is there something lightweight built into VS2010/.NET 4.0 for a simple case like this? Would directly calling a stored procedure through EF, as is mentioned here, be a good option? It looks a little out of date.
I like Dapper, a "micro-ORM". It's used by Stack Overflow. I've used it beautifully as "an automatic way to map columns to properties", and it appears to also do "properties back to columns/sproc parameters", but I haven't used it for that. It's superb - I had it going in about 5 minutes after getting it with NuGet. I'm a newbie to EF and wouldn't recommend it without a guide.
// Requires: using Dapper; using System.Collections.Generic; using MySql.Data.MySqlClient;
IEnumerable<TModel> result;
using (MySqlConnection conn = new MySqlConnection(_mysqlConnString))
{
    // "Query" is a Dapper extension method that maps the data reader onto TModel objects,
    // matching column names to property names
    result = conn.Query<TModel>("SELECT * FROM YourTable");
}
// do stuff with result
Here's a link to a full example, rather than just the piece I pulled out of my current project: http://www.tritac.com/bp-24-dapper-net-by-example
Other Dapper resources: c-sharpcorner.com/UploadFile/4d9083/… and liangwu.wordpress.com/2012/08/16/dapper-net-samples also cover it.
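For the "properties back to columns/sproc parameters" direction (which, as noted above, I haven't used myself), a rough sketch using Dapper's Execute method might look like this - the sproc name and parameter names below are made up:

// Requires: using Dapper; using System.Data; using System.Data.SqlClient;
// "dbo.UpdateCustomer" and its parameters are hypothetical -- substitute your own sproc.
using (var conn = new SqlConnection(_connString))
{
    conn.Execute(
        "dbo.UpdateCustomer",
        new { Id = model.Id, Name = model.Name },   // property names become @Id and @Name parameters
        commandType: CommandType.StoredProcedure);
}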
(Moved from a comment, as this was more of an answer than a comment - I'm trying to do the proper task in the proper place.)
Sorry if this has been asked elsewhere, but I couldn't find a clear answer anywhere.
I have decided to begin learning to use relational databases a bit more, namely SQL. This is a basic beginner's question, but it's probably essential to get started on.
I'm basically a little confused about the best practice for how to use SQL (or another database). At college I accessed databases (using JSON strings) for things such as mobile apps, but I've never actually designed and built a database myself, as my tutor made the database we accessed himself.
Let's say I have a C# application that holds genealogy information (i.e. families and their members) and I wanted to store each individual in a database. Would I simply use the structure I already have, but save to fields in a database instead of an XML or text document? Or does it work the other way, i.e. do I create a database with the required fields and then just retrieve the data from the database in a C# application and manipulate it as I wish, so the application would be entirely different (i.e. the C# application basically doesn't hold/store any data itself and just works on what's fed to it from the database)?
What's troubling me is this: where I would usually store my C# objects in a dictionary or list, for example, would I instead just retrieve straight from the database? Or retrieve from the database, store the data in a normal structure, and work from there (surely this would defeat the point of fast searching in a database)?
I may be over-thinking it slightly. Hope that makes sense. Thanks in advance.
Would I simply use the structure I already...
or
do I create a database with required fields...
I think that is the crux of your question.
Starting from the database
For me, when building an application that uses a backend database, an Entity-Relationship diagram is pretty crucial. I found quite a nice little tutorial for you here: http://www.sum-it.nl/cursus/dbdesign/english/index.php3 but you can easily find one that suits your learning style. The key point is that you are trying to model the problem domain (the real world out there that needs your application) in a way that your application can somehow capture. Once you have an E-R diagram of related tables, it is easier to figure out the details. Using SQL Management Studio for SQL Server 2008 (Express edition) you can create a few basic tables and build the E-R diagram right there and have it generate relationships for you. You can then, at your leisure, examine the SQL used to achieve that and refine accordingly.
Personally, I always start by examining the problem domain, then I build the E-R diagram, then I build the database. I start building the C# application when I'm reasonably confident the database reflects the problem domain.
Starting from your C# application
However, what really matters is that you model the real world in a meaningful and effective way. In your case you already have a starting point in structures you've created in C# and you can use them to give you a starting point to build the E-R diagram. If you find it easier to get a C# application going and then build a database that reflects it, that should be fine. Perhaps you already have an approach that helps you capture the problem domain effectively. It's an iterative process whatever you do: building the C# code might reveal problems with the underlying database design and vice versa.
Diagramming - E-R or UML?
I'm personally convinced that this whole business is so complicated that you really need some diagrams.
to visualise your database, use an E-R diagram
to visualise your C# application use a UML class diagram
As you head towards a working application, you'll see how these 2 diagrams begin to match, or at least reflect each other pretty closely. In both cases (entities or classes), understanding the relationships between objects will be really important when you query the database, because it is crucial to understand the relationships between tables (especially using 1-to-many relationships to resolve a complex many-to-many relationship) and the various techniques for joining tables in queries (INNER or OUTER joins, etc.). No matter how clever your C# application is, you will at some point need to understand at least some of the complexities of the SQL language - and it is easier if you can refer to an E-R diagram.
Where to store?
What's troubling me is this: where I would usually store my C# objects in a dictionary or list, for example, would I instead just retrieve straight from the database?
In the database, without a doubt. A C# class called Family would have a property FamilyName, say, with a setter method built in. If you discover a spelling mistake and want to change the name, the setter method would open a connection to the database, run an UPDATE query with the specified family name (and probably the family id) as parameters, and update the underlying field accordingly. Retrieving data would involve running a SELECT query, etc.
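As a rough sketch (assuming a Family table with FamilyId and FamilyName columns, which may not match your eventual schema), that setter might look something like this:

// Requires: using System.Data.SqlClient;
public class Family
{
    private readonly string _connectionString;  // supplied from configuration elsewhere
    private string _familyName;

    public Family(string connectionString, int familyId, string familyName)
    {
        _connectionString = connectionString;
        FamilyId = familyId;
        _familyName = familyName;
    }

    public int FamilyId { get; private set; }

    public string FamilyName
    {
        get { return _familyName; }
        set
        {
            // Persist the change immediately with a parameterized UPDATE
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                "UPDATE Family SET FamilyName = @name WHERE FamilyId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@name", value);
                cmd.Parameters.AddWithValue("@id", FamilyId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
            _familyName = value;
        }
    }
}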
Conclusion
Do some tutorials on how to examine a problem domain, create an entity-relationship diagram and build a set of related tables based on the diagram. I'm convinced that way you'll find it much easier to keep track of the C# classes that you build to communicate with the backend database.
Here's an example of how a simple E-R diagram for families and their members might take shape:
To begin with you might think members and family could be in one table, but then you discover that creates a lot of duplication, so you separate them out into family and member tables with a one-to-many relationship; but then you realise that, through marriage for instance, people can belong to more than one family, and you need to create a many-to-many relationship (typically resolved with a junction table). I think the E-R diagram is the best place to work out that kind of complexity.
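As a sketch of what querying that three-table design might look like from C# (the table and column names here are assumptions, not a prescription), the junction table lets you join a member to all of their families:

// Requires: using System; using System.Data.SqlClient;
// Assumed tables: Family, Member, and a FamilyMember junction table resolving the many-to-many.
string sql =
    "SELECT f.FamilyName, m.FirstName, m.LastName " +
    "FROM Family f " +
    "INNER JOIN FamilyMember fm ON fm.FamilyId = f.FamilyId " +
    "INNER JOIN Member m ON m.MemberId = fm.MemberId " +
    "WHERE f.FamilyName = @familyName";

using (var conn = new SqlConnection(connectionString))   // connectionString supplied elsewhere
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@familyName", "Smith");
    conn.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
            Console.WriteLine("{0}: {1} {2}", rdr["FamilyName"], rdr["FirstName"], rdr["LastName"]);
    }
}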
Not knowing what your structures look like or how your DB will be designed, this is hard to answer. But you should be able to use your existing data structures, and just pipe the data from the database instead of the XML file.
Look into LINQ to SQL; C# has a strong library for interacting with SQL. It may be a bit confusing at first, but it's very powerful once you learn it.
If I understand correctly, you are also asking whether you should retrieve all the records from the database and store them as objects in a collection, or retrieve selected records from the database and use the dataset results without placing them in a purpose-defined structure.
I tend to select the records I want from the database and then load the results into my purpose-defined classes/structures. This allows you to add your manipulation methods to the class holding a record result, etc., without needing to pass dataset results into each method. However, you will find yourself doing single-row updates all the time when a batch update might be more efficient... if that makes sense.
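For example, a minimal sketch (the table, columns, and class below are made up) of selecting just the rows you need and loading them into a purpose-defined class:

// Requires: using System; using System.Collections.Generic; using System.Data.SqlClient;
public class PersonRecord
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Manipulation methods live on the class that holds the record
    public string DisplayName() { return string.Format("#{0} {1}", Id, Name); }
}

public static List<PersonRecord> LoadPeopleBornAfter(string connectionString, int year)
{
    var results = new List<PersonRecord>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT Id, Name FROM People WHERE BirthYear > @year", conn))
    {
        cmd.Parameters.AddWithValue("@year", year);
        conn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                results.Add(new PersonRecord
                {
                    Id = rdr.GetInt32(0),
                    Name = rdr.GetString(1)
                });
            }
        }
    }
    return results;
}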
Take a look at Entity Framework Code First. If your data structures are classes in your application, there are techniques to create your database schema from those classes. As for the data: store it in your database and populate your lists and dictionaries with it - or populate a list of your genealogy "individual" class with it.
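A minimal Code First sketch (the class and property names are placeholders): EF can create the database schema from these classes the first time the context is used.

// Requires the EntityFramework NuGet package: using System.Collections.Generic; using System.Data.Entity;
public class Individual
{
    public int IndividualId { get; set; }
    public string Name { get; set; }
    public virtual Family Family { get; set; }
}

public class Family
{
    public int FamilyId { get; set; }
    public string FamilyName { get; set; }
    public virtual ICollection<Individual> Members { get; set; }
}

public class GenealogyContext : DbContext
{
    // Code First builds the tables and the one-to-many relationship from these sets
    public DbSet<Family> Families { get; set; }
    public DbSet<Individual> Individuals { get; set; }
}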
If you want to write your own data classes, there's a free tutorial here written by myself. What I would definitely not do is use the data sources in ASP.NET, as those wizards are the Barty Crouches of the ASP.NET world - they appear good, but turn out to be evil, as inevitably you'll want to be able to tweak them and you won't understand how to do this.
I have a database with lots of relationships between tables, and a Silverlight client that connects to my server via a WCF service on the ASP.NET side.
First I used LINQ to SQL as a robust mapper from tables to objects, and I have a WebMethod that returns a List<Foo> of one of my database objects (call it GetFoo()). Foo has lots of relationships with other objects, each of which has lots of relationships too (meaning there are PK and FK constraints between the tables). I also use Microsoft Service Trace Viewer to trace my service.
When I call GetFoo(), WCF returns this error:
Object graph for type 'X.Y.Z' contains cycles and cannot be serialized if reference tracking is disabled
I searched for this error and found this great post, but it isn't working properly for me and I still see the same error.
Various options:
remove the cyclic dependencies from your model; this might be tricky for a generated model that has lots of existing code built against it, but is worth a try; note, however, that you typically want to not serialize the parent reference, which is exactly the one LINQ-to-SQL makes you keep (it'll let you drop the children property, but the children are what you usually want to serialize)
enable cyclic references; it looks like you've tried this without success; did you enable it at both ends, though? Actually I wouldn't be surprised if Silverlight doesn't like this extension (it has limited extension support)
use a separate (flat) DTO model for data transfer purposes
try using NetDataContractSerializer; I can't remember if this is supported in Silverlight, and I must admit I'm not its biggest fan, but it might be a pragmatic fix here
I'd vote firmly in the "DTO model" category; simply, having a separate model means you are less likely to run into tangles whenever you tweak the DB - and you are in complete control over it.
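A rough sketch of the flat DTO approach (FooDto and the property names are assumptions about your model; the point is that the DTO carries keys rather than object references, so there is no cycle to serialize):

// Requires: using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization;
[DataContract]
public class FooDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public int ParentId { get; set; }   // just the key, not the parent object, so no cycle
}

// Map the LINQ-to-SQL entities to DTOs before returning them from the service
public static List<FooDto> ToDtos(IEnumerable<Foo> foos)
{
    return foos.Select(f => new FooDto
    {
        Id = f.Id,
        Name = f.Name,
        ParentId = f.ParentId
    }).ToList();
}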
A bit late to this, but if anyone is using LINQ to SQL and has this problem, you can simply open the tables in your DBML designer, right-click next to a table, and click Properties.
There is a property named Serialization Mode; set it to Unidirectional.
The error will be gone.
I know this is an old question now, but did you try decorating the classes generated by your DBML with [DataContract(IsReference = true)]?
I had the same problem in 2010 and had to resort to some fairly extreme measures to get it to work on client and service sides, but recently went back through it with VS2013/.NET 4.5 and had much less pain, as documented here (with EF v6 RC 1 POCO objects): http://sanderstechnology.com/2013/more-with-the-entity-framework-v6-rc1/12423/
I'm looking for a good solution to make my life easier with regard to writing to and reading from a SQL Server DB in a dynamic manner. I started with Entity Framework to make my life easier to begin with, but as the software becomes more general and config-driven, I'm finding that EF becomes less and less appropriate, because it relies on specific objects defined at design time.
What I'd like to do:
Generate tables/fields at runtime.
Select rows from tables by table name, with an unknown schema, into a generic data type (e.g. a Dictionary).
Insert rows into tables by table name using generic data types (a dictionary, where the string key maps to the field name), with the data type mapping between typeof(object) and the field type taken care of.
I've started implementing this stuff myself, but I imagine someone has already done it before.
Any suggestions?
Thanks.
I'm having trouble understanding how what you are describing is any different from plain old ADO.NET. DataTables are constructed dynamically based on a SQL query, and a DataRow is just a special case of an indexed dictionary (sometimes called an OrderedDictionary, where you can access values via a string name or an integer index, like a list). I make no judgment as to whether choosing ADO.NET is actually right or wrong for your needs, but I'm trying to understand why you seem to have ruled it out.
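For reference, plain ADO.NET covers the unknown-schema case along these lines (a sketch; the table name would come from your metadata tables at runtime):

// Requires: using System.Data; using System.Data.SqlClient;
public static DataTable SelectAll(string connectionString, string tableName)
{
    // NOTE: table names can't be parameterized, so validate tableName against your
    // metadata tables before building the SQL, to avoid SQL injection.
    var table = new DataTable();
    using (var conn = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter("SELECT * FROM [" + tableName + "]", conn))
    {
        adapter.Fill(table);   // columns are created dynamically from the result set
    }
    return table;
}

// Each DataRow can then be read like a dictionary keyed by column name:
// foreach (DataRow row in table.Rows) { object value = row["SomeColumn"]; }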
You can use Sql.Net ( http://sqlom.sourceforge.net ) to easily generate dynamic SQL statements in C#.
The iBATIS.NET (now MyBatis.NET) Data Mapper framework doesn't automatically generate tables or fields at runtime, but it does allow you to select and commit data via Dictionary objects.
It's probably not going to suit your needs completely (it's kind of tedious to set up, but pretty easy to maintain once it is), but it might be worth a look. Here's a link to the online documentation.
Other popular frameworks might do the same or similar, such as NHibernate.
Using VS2005, .NET 2.0, C#
Hi all,
What is the best way to map stored proc result columns to C# object properties without creating tight coupling?
For example, I don't like to do the following:
DataRow row = Getmyrows();
MyObject.MyProperty1 = row["col1"];
MyObject.MyProperty2 = row["col2"];
So, when the column in the stored proc gets changed to colxyz, the compiled code will break. What is the best practice to address this? A code sample would be helpful; thank you in advance.
I would look into OR mappers: LINQ to SQL, NHibernate, Entity Framework, LLBLGen, etc. These allow you to configure your mapping via XML or some other external configuration source. Most of them also provide a way to completely decouple your entities from the persistence framework, allowing your entities to be POCOs (Plain Old CLR Objects). Another benefit of OR mappers is that they generate SQL for you on the fly, which allows you to largely eliminate your stored proc layer - itself a coupling that can cause problems (on both ends: in your code as well as in your DB schema).
A couple of approaches:
If you're forced to stick w/ ADO.NET proper, use a strongly typed dataset. All that mapping between objects and data structures is done in a schema, where it belongs. Then you'd be able to hydrate your objects w/ code like this:
MyObject.MyProperty1 = dataSet.TableName[rowIndex].ColumnName;
I noticed everyone else said the same thing I was going to ;-) Go w/ an ORM. I know premature optimization is a slippery slope, but you inevitably will find clear justification for going that route. You won't regret it as your requirements become more complex, and you'll be learning a valuable skillset that's clearly gaining a lot of momentum in the .NET space.
The best solution is to use a proper O/RM like NHibernate or, if you can settle for "less", LINQ to SQL or Entity Framework.
However, if you must, I suggest using IDataReader/SqlDataReader instead (simplest, best performance), but you won't get away from having to map column names if you do it the "hard way".
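For illustration, a sketch of that reader-based mapping (col1/col2 and MyObject come from the question; the sproc name is made up):

// Requires: using System.Collections.Generic; using System.Data; using System.Data.SqlClient;
public static List<MyObject> LoadMyObjects(string connectionString)
{
    var results = new List<MyObject>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.GetMyObjects", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        using (IDataReader rdr = cmd.ExecuteReader())
        {
            // Look the ordinals up once, not per row
            int col1 = rdr.GetOrdinal("col1");
            int col2 = rdr.GetOrdinal("col2");
            while (rdr.Read())
            {
                results.Add(new MyObject
                {
                    MyProperty1 = rdr.GetString(col1),   // property types assumed to be string here
                    MyProperty2 = rdr.GetString(col2)
                });
            }
        }
    }
    return results;
}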
An approach that coworkers and I used back in the .NET 2.0 days was to create a custom attribute that we used to specify the field name from the data table, and to tag our objects' properties with it. We then built a generic entity builder that took a data reader as a parameter (EntityBuilder(IDataReader rdr)); as it worked through the data reader, it would create an empty T, reflect on the class, go through the properties to get the custom attribute information and the type, and set each value based on that.
We also had another custom attribute that specified the parameter name used in our Insert and Update sprocs, to automatically populate the parameters too.
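A condensed sketch of what that kind of attribute-plus-reflection builder can look like (illustrative only, not the original code; nullable types and error handling are omitted):

// Requires: using System; using System.Data; using System.Reflection;
[AttributeUsage(AttributeTargets.Property)]
public class ColumnNameAttribute : Attribute
{
    private readonly string _name;
    public ColumnNameAttribute(string name) { _name = name; }
    public string Name { get { return _name; } }
}

public static class EntityBuilder
{
    // Creates a T and fills any property tagged with [ColumnName("...")] from the reader's current row
    public static T Build<T>(IDataReader rdr) where T : new()
    {
        T entity = new T();
        foreach (PropertyInfo prop in typeof(T).GetProperties())
        {
            object[] attrs = prop.GetCustomAttributes(typeof(ColumnNameAttribute), false);
            if (attrs.Length == 0) continue;

            string columnName = ((ColumnNameAttribute)attrs[0]).Name;
            object value = rdr[columnName];
            if (value != DBNull.Value)
                prop.SetValue(entity, Convert.ChangeType(value, prop.PropertyType), null);
        }
        return entity;
    }
}

// Usage: a property tagged [ColumnName("col1")] is filled from the "col1" column of the current row.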
If you have to do it that way and don't want to use an ORM, store the column-name mapping in an XML file, e.g. in appSettings:
<appSettings>
  <add key="prop1" value="col1"/>
</appSettings>
then in the code do something like:
myObject.Prop1 = row[ConfigurationManager.AppSettings["prop1"]];
I know it's clunky, and it involves reading from XML (or the config file) for each property, but it will work.
I am trying to leverage an ORM given the following requirements:
1) Using .NET Framework (latest Framework is okay)
2) Must be able to use Sybase, Oracle, MSSQL interchangeably
3) The schema is mostly static, BUT there are dynamic parts.
I am somewhat familiar with SubSonic and NHibernate, but not deeply.
I get the nagging feeling that the ORM can do what I want, but I don't know how to leverage it at the moment.
SubSonic probably isn't optimal, since it doesn't currently support Sybase, and writing my own provider for it is beyond my resources and ability right now.
For #3 (above), there are a couple of metadata tables, which describe tables which the vendors can "staple on" to the existing database.
Let's call these MetaTables and MetaFields.
There is a base static schema, which the ORM (NHibernate ATM) handles nicely.
However, a vendor can add a table to the database (physically) as long as they also add the data to the metadata tables to describe their structure.
What I'd really like is for me to be able to somehow "feed" the ORM with that metadata (in a way that it understands) and have it at that point allow me to manipulate the data.
My primary goal is to reduce the amount of generic SQL statement building I have to do on these dynamic tables.
I'd also like to avoid having to worry about the differences in the SQL being sent to Sybase, Oracle, or MSSQL.
My primary problem is that I don't have a way to let the ORM know about the dynamic tables until runtime, when I'll have access to the metadata.
Edit: An example of the usage might be like the one outlined here:
IDataReader rdr = new Query("DynamicTable1").WHERE("ArbitraryId", 2).ExecuteReader();
(However, it doesn't look like SubSonic will work, as there is no Sybase provider - see above.)
According to this blog, you can in fact use NHibernate with dynamic mapping. It takes a bit of tweaking, though...
We did some of this using NHibernate, but we stopped the project since it didn't provide the ROI we wanted. We ended up writing our own ORM/SQL layer, which worked very well (it worked when I left - I no longer work there, but I'm guessing it still works).
Our system used an open source project to generate the SQL (I don't remember the name any more), and we built all our queries in our own XML-based language (Query Markup Language - QML). We could then build an XmlDocument with selects, wheres, groups, etc. and send that to the SqlEngine, which would turn it into a SQL statement and execute it. We discussed, but never implemented, a cache on top of all this, which would have allowed us to cache the QML for frequently used queries.
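Purely to illustrate the shape of that idea (a toy sketch, not the actual QML format), an XML query description could be turned into SQL along these lines:

// Requires: using System.Text; using System.Xml;
// Toy input, e.g.: <select table="Customers"><column name="Id"/><column name="Name"/><where>Country = @country</where></select>
public static string XmlToSql(XmlDocument doc)
{
    XmlElement select = doc.DocumentElement;
    var sql = new StringBuilder("SELECT ");

    XmlNodeList columns = select.SelectNodes("column");
    for (int i = 0; i < columns.Count; i++)
    {
        if (i > 0) sql.Append(", ");
        sql.Append(columns[i].Attributes["name"].Value);
    }

    sql.Append(" FROM ").Append(select.GetAttribute("table"));

    XmlNode where = select.SelectSingleNode("where");
    if (where != null)
        sql.Append(" WHERE ").Append(where.InnerText);   // parameter values would be bound separately

    return sql.ToString();
}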
I am a little confused as to how the ORM would then be used at runtime. If the ORM dynamically builds something at runtime, how does the runtime code know what the ORM did dynamically?
"have it at that point allow me to manipulate the data" - what does manipulating the data mean here?
I may be missing something here and I apologize if that's the case. (I have only really used the bottom-up approach with an ORM.)
IDataReader doesn't map anything to an object, you know. So your example should be written using a classic query builder.
Have you looked into using the ADO.NET Entity Framework?
MSDN: LINQ to Entities
It allows you to map database tables to an object model in such a manner that you can code without thinking about which database vendor is being used, and without worrying about minor variations made by a DBA to the actual tables. The mapping is kept in configuration files that can be modified when the db tables are modified without requiring a recompile.
Also, using LINQ to Entities, you can build queries in an OO manner, so you aren't writing actual SQL query strings.
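For example, a query might look like this (the context, entity set, and property names below are placeholders from a hypothetical model):

// Requires: using System; using System.Linq; plus your generated entity model.
DateTime cutoffDate = DateTime.Today.AddDays(-30);
using (var context = new MyEntities())   // "MyEntities" is the generated context name -- yours will differ
{
    var recentOrders = from o in context.Orders
                       where o.OrderDate > cutoffDate
                       orderby o.OrderDate descending
                       select new { o.OrderId, o.CustomerName };

    foreach (var order in recentOrders)
        Console.WriteLine("{0}: {1}", order.OrderId, order.CustomerName);
}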