Which ORM will give me compile-tested queries? - c#

Is LINQ to SQL compile-time tested?
Edit:
Say I write a query that references a column named 'TotalSales'. I then rename the column in my database to 'TotalSales2' (and in any other config file, such as Employee.cfg.xml in NHibernate).
When I compile the project, I want Visual Studio to tell me that the column 'TotalSales' doesn't exist, so that I can go and change it.

There aren't any, as far as I'm aware. ORMs will often let you create a LINQ query that cannot be translated into SQL, for example. Nor am I aware of any compile-time check that your mappings match your database correctly.
You can, and in my opinion should, perform all these checks within tests. Most ORMs make this easy to do.
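For example, with NHibernate you can write a smoke test that forces a round trip for every mapped entity, so a renamed column fails your test run instead of production. A minimal sketch, assuming NUnit, with CreateSessionFactory() standing in for your own configuration code:

using NHibernate;
using NUnit.Framework;

[TestFixture]
public class MappingTests
{
    [Test]
    public void AllMappedEntitiesCanBeQueried()
    {
        ISessionFactory factory = CreateSessionFactory(); // your NHibernate setup
        using (ISession session = factory.OpenSession())
        {
            foreach (var meta in factory.GetAllClassMetadata())
            {
                // Fetching one row per entity makes NHibernate issue SQL against
                // every mapped table and column, so a stale mapping fails here.
                session.CreateCriteria(meta.Value.EntityName)
                       .SetMaxResults(1)
                       .List();
            }
        }
    }

    private static ISessionFactory CreateSessionFactory()
    {
        // Placeholder: build and return your NHibernate configuration here.
        throw new System.NotImplementedException();
    }
}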

I use LLBLGen, but it has to be "refreshed" when data-model changes are made. I don't think you'll find an ORM that will AT COMPILE TIME check for modifications against the database. You're asking for quite a bit there.

In DataObjects.Net, properties marked with the [Field] attribute are always bound to a field in the database, so you can be sure the query will be translated. If you use a non-persistent field or another unsupported construct, the query translator will either fail at runtime or perform that operation on the fetched objects (on the client).
Generally, compile-time validation is impossible, or could in theory be performed by special post-build tasks that scan the compiled code, find all the queries, and validate them. But such checks would seriously slow down the compilation process.

Perhaps not exactly what you're looking for, but if you use Entity Framework and select "Update Model from Database" in the designer, you will get messages saying that fields are no longer mapped when you change their names.
This doesn't happen automatically when you build a project.

Basically, you need two features together:
Compile-time-checked queries (i.e., an ORM with a LINQ implementation). This is normally not a problem; at least some tools support it.
A pre-build step that updates your entities from the database schema. AFAIK this is rarely implemented; normally you must explicitly update the model from the schema. Note that this step is usually rather costly.

SubSonic can do that if you include the code generation step as a pre-build event.

I used a Java tool called DODS, which was developed ca. 2000 with the Enhydra application server. DODS is still around here: http://www.enhydra.org/tech/dods/
The way DODS works, and what meets your goal of compile-time validation, is that it's a code generation tool. It generates Java classes corresponding to the tables in your database. Object instances of these classes have getters and setters for each column in the table. Of course, if you change your database structure, you have to re-generate the Java code using DODS.
As long as you keep the generated code up to date with the structure of your database, it provides compile-time validation that any application code that uses these classes is querying valid tables and columns.
Anyway, I realize you tagged your question with C# and ASP.NET. A tool that generates Java code isn't going to be that helpful for you. But there could be another tool more specifically for .NET that works on the same principle of generating code that maps to database structure. So I'd suggest narrowing your search to .NET ORM tools that say something about code generation.
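To make the principle concrete, here is a hypothetical C# sketch of the kind of code such a generator emits (SalesRow and its properties are invented for illustration). If the TotalSales column is renamed in the database, the next generation run drops the property, and any code still using it fails to compile:

// Generated class mapping one row of the Sales table.
public partial class SalesRow
{
    public int Id { get; set; }
    public decimal TotalSales { get; set; } // maps to column "TotalSales"
}

public static class Example
{
    public static decimal ReadTotal(SalesRow row)
    {
        return row.TotalSales; // compile error here after regeneration
    }
}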

Related

Programmatically find and apply schema differences on SQL Server

I have a product that I'm currently authoring that relies on SQL Server for the backend. One issue I'm trying to resolve is how to improve the 'upgrade' story: v1 will have a particular schema, and v2 may include some enhancements to it (new tables and new columns).
I'm aware of the SDKs from Red Gate and ApexSQL, but I would like to avoid them.
I've had a read through the SMO docs, but I'm new to it and struggling to see whether it can be applied in this situation. Ideally, I'd like to make this programmatic (SMO or otherwise). The base cases seem straightforward enough, but I really don't want to re-invent the wheel if I can help it. Does anyone have experience with similar requirements, or ideas about how I could approach this?
You don't say what version of SQL Server you're using, but in (I think) 2005 and beyond there are database-level DDL triggers. These work like their table-level cousins but can be used to track any kind of DDL change that happens on the database. We didn't use them to actually generate DDL - more to track when the format of a table changed - although what you're after should be possible, I'd have thought.
Triggers are one of those things that divide developers. Some people think they're the best thing since sliced bread, whilst others hate them with a passion - perhaps because, when data changes, they're the last thing you think of.
Maybe not exactly what you're looking for (since it's not SMO), but having a look at Entity Framework Code First Migrations might help you:
http://msdn.microsoft.com/en-us/data/jj591621
Changes to the model classes can be versioned and applied directly to a database or, if you do not have direct access to your database, you can generate SQL code for the new version and hand it to your database administrator.
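For illustration, a minimal sketch of such a migration (the class, table, and column names here are hypothetical). Add-Migration scaffolds a class like this, Update-Database applies it, and Update-Database -Script produces the SQL for a DBA instead:

using System.Data.Entity.Migrations;

public partial class AddTotalSalesToEmployee : DbMigration
{
    public override void Up()
    {
        // Applied when migrating forward.
        AddColumn("dbo.Employees", "TotalSales",
                  c => c.Decimal(nullable: false, precision: 18, scale: 2, defaultValue: 0m));
    }

    public override void Down()
    {
        // Reverses the change when migrating back.
        DropColumn("dbo.Employees", "TotalSales");
    }
}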
I use Database Projects in Visual Studio to manage versioning of schemas. Once you create a baseline in a Database Project, you can make your changes in the project and then use the Schema Compare to create SQL scripts that apply the changes in different environments.
I would recommend making only additive changes, but it will generate change scripts for destructive changes too. If your environments are not synced up, I strongly recommend generating a new script for each environment.
This blog post goes over how to create one in Visual Studio 2012: http://candordeveloper.com/2013/01/08/creating-a-sql-server-database-project-in-visual-studio-2012/
Red Gate has a schema compare product too, but I have not really used it.

SQL vs. Entity Framework for databases at run-time

I'm currently building a query builder in C#, which starts by prompting the user to specify which database they would like to query on their local machine.
The program thus allows databases to be "plugged in" to it, rather than having a fixed set of databases and tables that are always used.
So, for my query builder to generate SQL statements, would it make more sense to output and execute SQL statements in the form of strings, or could I use Entity Framework? I have no experience with Entity Framework, but from what I understand, it makes more sense to use EF when you have a static database whose tables and schema you know, versus potentially any database being specified by the user at run-time.
I've currently been working with SQL statements; i.e., the user's interaction with the query builder literally executes string-based SQL statements generated by the application. Would it be possible or worthwhile to switch to Entity Framework?
From my experience with EF, if you are generating queries dynamically, you're better off sticking to SQL strings.
The only way you could use EF to query an unknown schema is by generating the entities through reflection, which would be a hell of a lot of work, and I'm not sure it would even work. You'd also lose all the benefits of using EF.
So, if this is the case, no.
Entity Framework does not provide any advantages in this scenario. In fact, it limits you severely.
It is possible to write a generic SortHelper, PagerHelper, FilterHelper, etc. that takes an expression tree as an IQueryable and applies the sort you desire (see the sketch below). This sort of generic programming is great, as it avoids SQL injection.
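A minimal sketch of such a generic sort helper, building the expression tree from a property name supplied at runtime (OrderByProperty is an invented name; any IQueryable provider, EF included, can consume the result):

using System.Linq;
using System.Linq.Expressions;

public static class SortHelper
{
    public static IQueryable<T> OrderByProperty<T>(IQueryable<T> source, string propertyName)
    {
        // Build x => x.PropertyName by hand.
        ParameterExpression param = Expression.Parameter(typeof(T), "x");
        MemberExpression property = Expression.Property(param, propertyName);
        LambdaExpression lambda = Expression.Lambda(property, param);

        // Compose source.OrderBy(x => x.PropertyName) as an expression tree.
        MethodCallExpression call = Expression.Call(
            typeof(Queryable), "OrderBy",
            new[] { typeof(T), property.Type },
            source.Expression, Expression.Quote(lambda));

        return source.Provider.CreateQuery<T>(call);
    }
}

Because the property name never becomes part of a SQL string, an invalid name throws when the expression is built rather than opening an injection hole.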
However, if you used Entity Framework for your query generator, you would have to use reflection to generate your Entity Data Model, and you would have to decide how to handle open-ended select statements. Further, you would be tied to how Entity Framework represents and evaluates queries, which is still not as robust as it should be for an ORM at version 5.0. For example, there is no good way to represent right joins; you always have to represent them as left joins if you want decent SQL generated. Another limitation is that if you want to write a projection, you need to generate an anonymous class, and .NET does not have a good way to unload types from memory: every type you generate in an AppDomain is held in memory until the AppDomain is unloaded. That is why F# 3.0 uses type erasure in its Type Providers API - to avoid generating a billion types for sources like RDF, where there are billions of "types".
Also, Entity Framework does not do any kind of serious analysis to decide whether an expression can be transformed, such as SAT solving.
I am basing my answer on real life experience, having built the exact application you are describing, and then some. The application allowed business analysts to write queries visually and compose queries together.
That said, I do recommend studying Entity Framework's design vocabulary. I have shifted over to using very similar vocabulary, even though I don't use Entity Framework. For example, Navigational Properties. I don't call them Properties, since that is an object-oriented abstraction for those who use object query languages and doesn't make sense in a visual query language. I call them Paths. But I like the Any() operator to imply left join, as well as Include(). Those little modeling ideas were valuable to me.

Dealing with Schema Updates in nHibernate/Fluent nHibernate after Deployment

In writing an application that runs on Fluent NHibernate/NHibernate, something has me a bit concerned. I suppose this would be true of any ORM (and even without using an ORM), but what is the... I guess the phrase is 'field of study' that covers the best practices and methods for updating a database after deployment?
In NHibernate, I establish a SessionFactory and have an initial run where it writes the database out based on the mappings. That's fine and good, and I can even write the database out manually. But what about when my client comes back and wants something new added? Can I append to the database without losing my data? I am completely new to all of this, and it has been troubling me since the start of this project; I really do not know what direction to take to make sure I can manage the program after it is deployed.
I have looked at the other Stack Overflow questions I could find on this topic, one of which did not even have an accepted answer (though the question itself was kind of vague). I did, however, discover the tool http://www.red-gate.com/products/sql-development/sql-compare/ from the question "Tool to upgrade SQL Express database after deployment", though I am wondering just how good a 'strategy' that is.
There are a couple of options. One is to use the AutoMapping feature in Fluent NHibernate to minimize the mapping code you write. If your schema changes comply with the AutoMap conventions, then you only need to work with the corresponding domain-object changes.
Another, less optimal, option is to take a database-first approach and have something like MyGeneration automatically generate the domain classes and NHibernate mapping files from the schema. This works if you have complete control of the database schema and it can be made to implement a good domain-model design (both conditions that very rarely ever hold...).
In either approach, such tools can help handle the database scripting needed to "migrate" the schema changes to a new version.
From my experience, after deployment you have to keep your DB structure up to date manually.
That means that whenever you add to or change your DB structure, you do so using a script with DDL commands.
When you're ready to deploy, you just run those DDL scripts against your production DB.
For example, if you add a 'bar' column to your 'foo' table, your script would be something like (T-SQL):
ALTER TABLE foo ADD bar int NOT NULL DEFAULT 0;

Suggestions for dynamic SQL Server access using C#

I'm looking for a good solution to make my life easier with regard to writing/reading to a SQL Server DB in a dynamic manner. I started with Entity Framework to make my life easier to begin with, but as the software becomes more general and config-driven, I'm finding that EF becomes less and less appropriate, because it relies on specific objects defined at design time.
What I'd like to do:
Generate tables/fields at runtime.
Select rows from tables by table name, with unknown schema, into a generic data type (e.g., a dictionary).
Insert rows into tables by table name using generic data types (a dictionary, where the string maps to a field name), where the data-type mapping between typeof(object) and the field type is taken care of (a rough sketch follows this list).
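For concreteness, a rough sketch of the kind of dictionary-driven insert meant here, assuming SQL Server (all names invented for illustration; table and column names must be validated against the live schema before being spliced into the command text):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

public static class DynamicWriter
{
    public static void InsertRow(string connectionString, string tableName,
                                 IDictionary<string, object> values)
    {
        // Column names go into the SQL text; the values travel as parameters.
        string columns = string.Join(", ", values.Keys.Select(k => "[" + k + "]"));
        string parameters = string.Join(", ", values.Keys.Select(k => "@" + k));
        string sql = "INSERT INTO [" + tableName + "] (" + columns + ") VALUES (" + parameters + ")";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            foreach (var pair in values)
                command.Parameters.AddWithValue("@" + pair.Key, pair.Value ?? System.DBNull.Value);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}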
I've started implementing this stuff myself, but I imagine someone has already done it before.
Any suggestions?
Thanks.
I'm having trouble understanding how what you are describing is any different from plain old ADO.NET. DataTables are dynamically constructed based on a SQL query, and a DataRow is just a special case of an IndexedDictionary (sometimes called an OrderedDictionary), where you can access values via a string name or an integer index, like a list. I make no judgment as to whether choosing ADO.NET is actually right or wrong for your needs, but I'm trying to understand why you seem to have ruled it out.
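A minimal sketch of that plain ADO.NET approach, assuming SQL Server (the table name is assumed to have been validated elsewhere; never concatenate raw user input into SQL):

using System.Data;
using System.Data.SqlClient;

public static class DynamicReader
{
    public static DataTable ReadTable(string connectionString, string tableName)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("SELECT * FROM [" + tableName + "]", connection))
        {
            var table = new DataTable();
            adapter.Fill(table); // schema and rows are discovered at runtime
            return table;
        }
    }
}

Usage: rows behave like string-keyed dictionaries.

DataTable t = DynamicReader.ReadTable(connectionString, "Customers");
object byName = t.Rows[0]["Name"]; // access by column name
object byIndex = t.Rows[0][0];     // or by ordinal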
You can use Sql.Net ( http://sqlom.sourceforge.net ) to easily generate dynamic SQL statements in C#.
The iBATIS.NET (now MyBatis.NET) Data Mapper framework doesn't automatically generate tables or fields at runtime, but it does allow you to select and commit data via Dictionary objects.
It's probably not going to suit your needs completely (it's kind of tedious to set up, but pretty easy to maintain once it is), but it might be worth a look. Here's a link to the online documentation.
Other popular frameworks might do the same or similar, such as NHibernate.

How can I leverage an ORM for a database whose schema is unknown until runtime?

I am trying to leverage ORM given the following requirements:
1) Using .NET Framework (latest Framework is okay)
2) Must be able to use Sybase, Oracle, MSSQL interchangeably
3) The schema is mostly static, BUT there are dynamic parts.
I am somewhat familiar with SubSonic and NHibernate, but not deeply.
I get the nagging feeling that the ORM can do what I want, but I don't know how to leverage it at the moment.
SubSonic probably isn't optimal, since it doesn't currently support Sybase, and writing my own provider for it is beyond my resources and ability right now.
For #3 (above), there are a couple of metadata tables, which describe the tables that vendors can "staple on" to the existing database.
Let's call these MetaTables, and MetaFields.
There is a base static schema, which the ORM (NHibernate ATM) handles nicely.
However, a vendor can add a table to the database (physically) as long as they also add the data to the metadata tables to describe their structure.
What I'd really like is for me to be able to somehow "feed" the ORM with that metadata (in a way that it understands) and have it at that point allow me to manipulate the data.
My primary goal is to reduce the amount of generic SQL statement building I have to do on these dynamic tables.
I'd also like to avoid having to worry about the differences in the SQL being sent to Sybase, Oracle, or MSSQL.
My primary problem is that I don't have a way to let the ORM know about the dynamic tables until runtime, when I'll have access to the metadata.
Edit: An example of the usage might be like the one outlined here:
IDataReader rdr = new Query("DynamicTable1").WHERE("ArbitraryId", 2).ExecuteReader();
(However, it doesn't look like SubSonic will work, as there is no Sybase provider; see above.)
According to this blog, you can in fact use NHibernate with dynamic mapping. It takes a bit of tweaking, though...
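For illustration, a rough sketch of what using such a dynamic mapping can look like, assuming NHibernate's map-based entity mode (EntityMode.Map, available in NHibernate 2.x/3.x), where entities are plain dictionaries and "DynamicTable1" is a hypothetical entity name registered at runtime from the metadata tables:

using System.Collections;
using NHibernate;

public static class DynamicEntityExample
{
    public static void InsertRow(ISessionFactory factory)
    {
        using (ISession session = factory.OpenSession())
        using (ISession mapSession = session.GetSession(EntityMode.Map))
        using (ITransaction tx = mapSession.BeginTransaction())
        {
            // No compiled class needed: the "entity" is just a dictionary.
            IDictionary row = new Hashtable();
            row["ArbitraryId"] = 2;
            mapSession.Save("DynamicTable1", row);
            tx.Commit();
        }
    }
}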
We did some of this using NHibernate, but we stopped the project since it didn't provide us with the ROI we wanted. We ended up writing our own ORM/SQL layer, which worked very well (it worked when I left; since I no longer work there, I'm guessing it still works).
Our system used an open-source project to generate the SQL (I don't remember the name any more), and we built all our queries in our own XML-based language (Query Markup Language - QML). We could then build an XmlDocument with selects, wheres, groups, etc., and send it to the SqlEngine, which would turn it into a SQL statement and execute it. We discussed, but never implemented, a cache in all of this; that would have allowed us to cache the QML for frequently used queries.
I am a little confused as to how the ORM would then be used at runtime. If the ORM dynamically builds something at runtime, how does the runtime code know what the ORM did dynamically?
"Have it at that point allow me to manipulate the data" - what does manipulating the data mean here?
I may be missing something, and I apologize if that's the case. (I have only really used the bottom-up approach with an ORM.)
IDataReader doesn't map anything to an object, you know. So your example would have to be written using a classic query builder.
Have you looked into using the ADO.NET Entity Framework?
MSDN: LINQ to Entities
It allows you to map database tables to an object model in such a way that you can code without thinking about which database vendor is being used, and without worrying about minor variations a DBA makes to the actual tables. The mapping is kept in configuration files that can be modified when the DB tables change, without requiring a recompile.
Also, using LINQ to Entities, you can build queries in an OO manner, so you aren't writing actual SQL query strings.
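For example (illustrative only; MyContext and Order are hypothetical generated types standing in for your own model):

using System;
using System.Linq;

public static class QueryExample
{
    public static void PrintBigOrders()
    {
        using (var context = new MyContext())
        {
            // The provider translates this expression into vendor-appropriate SQL.
            var bigOrders = from o in context.Orders
                            where o.Total > 1000
                            orderby o.Total descending
                            select o;

            foreach (var order in bigOrders)
                Console.WriteLine(order.Id);
        }
    }
}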
