Creating a database view in code - C#

So I've been tasked with creating a tool to allow our users to create their own 'worklists' which they use to work through their data. In our app these worklists are driven by SQL views, so for now my program has to dynamically create views in our database based on the user's input. I don't like this, but for now I have to make the best of it and am brainstorming the best ways to go about it.
Basically every view I create has a similar skeleton: it has several columns that are always pulled and several joins that always happen. Based on the user's input I may add additional SELECT columns, as well as additional joins if they are necessary to access the added display columns.
So basically right now my code looks like this...
string SQL = string.Format(@"CREATE VIEW {0}
AS
SELECT
    Foo.A,
    Bar.B,
    {1}
FROM
    Table
    INNER JOIN Foo ON Foo.ID = Table.FooID
    INNER JOIN Bar ON Bar.ID = Table.BarID
{2}", viewName, displayNames, extraJoins);
Database.ExecuteNonQuery(SQL);
I really don't like this for obvious reasons. However, I cannot seem to find the equivalent of a parameterized query for view creation in ADO.NET. I could perhaps create a stored procedure to do this, but even that seems sloppy. Is there any reasonable way to do something like this that doesn't make me sick to my stomach? Also, we are using MS SQL Server and have to support versions as far back as 2005.

In contrast to DML (SELECT / UPDATE / INSERT / DELETE), there is no support for parameters in DDL (see here too). So basically you either hide it inside a stored procedure with dynamic SQL or do it the way you describe...
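If you do stay with string building, one common mitigation is to treat the identifiers themselves as data that gets validated before it is spliced into the DDL. A minimal sketch, assuming the names only ever need to be plain identifiers (the regex and helper class are mine, not from the question):

using System;
using System.Text.RegularExpressions;

static class SqlIdentifier
{
    // Allow only plain identifiers: a letter or underscore followed by
    // letters, digits or underscores.
    private static readonly Regex Safe = new Regex(@"^[A-Za-z_][A-Za-z0-9_]*$");

    // Validate and bracket-quote an identifier so it cannot terminate the
    // string or smuggle extra statements into the CREATE VIEW.
    public static string Quote(string name)
    {
        if (string.IsNullOrEmpty(name) || !Safe.IsMatch(name))
            throw new ArgumentException("Unsafe identifier: " + name);
        return "[" + name + "]";
    }
}

// Usage: every user-supplied name goes through Quote before formatting.
// string sql = string.Format("CREATE VIEW {0} AS ...", SqlIdentifier.Quote(viewName));

The same validation applies to each column name in displayNames; the extra joins are trickier, which is one more argument for generating them from a fixed whitelist of known join clauses rather than from raw input.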

Related

Dapper insert syntax

I use the Dapper ORM from a C# application for MySQL database access. It works fine; there's only a syntax question. I have a class with many properties, all of which match the database table fields exactly, so the SELECT query is pretty short:
var listOfInstances = con.Query<MyClass>("Select * From myTable");
but when I need to insert something into the database I have to write out all those property names, which looks a little ugly:
con.Execute(@"Insert into myTable values(@Id, @Property1, @Property2, @Property3, @Property4, ....)", listOfInstances);
I wonder if there is a shorter syntax for inserting data, at least for the case when all class properties match the database table fields exactly.
P.S. The same question applies to UPDATE statements.
P.P.S. To be honest, I have just started working with a database which contains many tables, so I have to write basic get/add/change functions for each of those tables, and it is quite annoying to list all of their fields.
Basically you need to install a NuGet package called Dapper.Contrib.
Here's the repo:
https://github.com/StackExchange/dapper-dot-net/tree/master/Dapper.Contrib
Also check this out https://samsaffron.com/archive/2012/01/16/that-annoying-insert-problem-getting-data-into-the-db-using-dapper
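With the package installed, the insert (and update) collapse to extension-method calls. Roughly, reusing con and listOfInstances from the question (someInstance is a placeholder):

using Dapper.Contrib.Extensions;

[Table("myTable")]
public class MyClass
{
    [Key]
    public int Id { get; set; }
    public string Property1 { get; set; }
    public string Property2 { get; set; }
    // ... remaining properties, matching the table fields exactly
}

// Dapper.Contrib builds the column list from the properties for you:
con.Insert(listOfInstances);   // INSERT, one row per element
con.Update(someInstance);      // UPDATE by the [Key] property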

Selecting the same column from the same table multiple times in the same statement

I am converting a VB6 app to C# with a SQL Server back end. The app includes a very general query editor that allows the user to write any SELECT query and view the results in a grid control. Some of the tables have several hundred columns (poor design, I know, but I have no control over this). A typical use case for an admin user would be to
select * from A_Table_With_Many_Columns
However, while they want to be able to view all the data, they are particularly interested in two columns, and they want those displayed as the first two columns in the grid (instead of, say, the 67th and 99th), so instead they execute the following statement:
select First_Interesting_Field, Second_Interesting_Field, *
from A_Table_With_Many_Columns
Then they will go and modify the data in the grid. However, saving the data results in a concurrency violation (DBConcurrencyException). This worked fine with the connected RecordSets of VB6 but not so well in C#. I have tried a myriad of solutions, to no avail.
Does anyone know how to handle this exception in a generic way? (Remember, the user can type ANY select statement or join etc. into the query editor)
Does anyone know how I might manipulate the columns returned so that I can delete the two duplicate columns that appear later in the list? (My difficulty here: if the column name in the database is EMail and I run select EMail, * from Blah, the two pertinent columns returned are EMail and a second EMail column from the * portion, which ADO.NET aliases as EMail1, so I am not able to detect the second column as a duplicate and remove it.)
Does anyone have an alternate solution I have not thought of?
Thank you very much
Actually, you could rename the selected columns to something like EMail_userdefined by doing something like this:
SELECT First_Interesting_Field as First_Interesting_Field_userdefined, Second_Interesting_Field as Second_Interesting_Field_userdefined, *
from A_Table_With_Many_Columns
Replace _userdefined with whatever suffix you want, as long as it is acceptable to your users and unlikely to collide with a real column name. (A sketch of how the save path can use the suffix follows below.)
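Once the user-facing copies carry a known suffix, the save path can strip them from the DataTable before calling the adapter's Update, so only the real columns participate and the duplicated copies never go back to the database. A rough sketch (the method name and suffix are mine):

using System;
using System.Data;
using System.Linq;

static void StripDisplayCopies(DataTable resultTable, string suffix = "_userdefined")
{
    // Collect the renamed display copies first; removing columns while
    // iterating over Columns directly would invalidate the enumeration.
    var copies = resultTable.Columns.Cast<DataColumn>()
        .Where(c => c.ColumnName.EndsWith(suffix, StringComparison.OrdinalIgnoreCase))
        .ToList();

    foreach (var column in copies)
        resultTable.Columns.Remove(column);
}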

Retrieve just some columns using an ORM

I'm using Entity Framework and SQL Server 2008 with the Database First approach.
My problem is:
I have some tables that hold very many columns (~100), and when I try to retrieve a lot of rows it takes a significant time before the results come back, even though I sometimes only need 3 or 4 columns from that table.
I spent half a day on Stack Overflow trying to find a way to solve this problem, and I came up with two solutions:
Using stored procedures to retrieve the data with only the columns I want.
Editing the .edmx (XML) and .cs files to remove the columns that I won't use.
My problem again is:
If I use stored procedures to retrieve the data with only the columns I want, Entity Framework loses its benefit and I might as well use ADO.NET and call the stored procedures directly...
I can't take the second solution, because every time I make a change in the database I am obliged to regenerate the .edmx file, and then I lose the changes I made before :'(
Is there a way to do this in Entity Framework? Is it even possible?
I know that other ORMs exist, like NHibernate or Dapper, but I don't know if they offer this feature without causing a lot of pain.
You don't have to return every column each time. You can specify which columns you need.
var query = from t in db.Table
            select new { t.Column1, t.Column2, t.Column3 };
Normally, if you project the data into a different POCO it will do this automatically in EF / LINQ to SQL etc.:
var slim = from row in db.Customers
           select new CustomerViewModel { Name = row.Name, Id = row.Id };
I would expect that to only read 2 columns.
For tools like Dapper: since you control the SQL, only specify the columns you want - don't use *.
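For example, assuming an open IDbConnection named con and the CustomerViewModel above:

using Dapper;

// Only the two named columns cross the wire; Dapper maps them by name.
var slim = con.Query<CustomerViewModel>("select Id, Name from Customers");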
You can create a second project with a code-first DbContext, POCOs and maps that return only the subset of columns you require.
It is a case of cut-and-paste code, but it will get you what you need; a minimal sketch follows at the end of this answer.
You can just create classes and project the data into them, but I'm not sure you can make updates using this method. You can use anonymous types within a single method, but you'll need actual classes to pass around between methods.
Another option would be to move to code-first development entirely.
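Here is a minimal sketch of that second, slim context (EF code-first fluent API; the Customers table and its columns are illustrative, not from the question):

using System.Data.Entity;

// Slim POCO exposing only the columns this screen needs.
public class CustomerSlim
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SlimContext : DbContext
{
    public DbSet<CustomerSlim> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the slim POCO onto the existing, wide table; EF will only
        // ever select the mapped columns.
        var customer = modelBuilder.Entity<CustomerSlim>();
        customer.HasKey(c => c.Id);
        customer.ToTable("Customers");
    }
}

One caveat: inserts through a slim context like this only work if the unmapped columns are nullable or have defaults; for read-mostly grids that is usually acceptable.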

"Smart" SQL Update using ListBox

I am developing a project which accesses a SQL Server 2012 database from C# and performs CRUD operations on it. Here is the main form:
Both listboxes on the right are used to deal with information contained in intermediate tables (many-to-many relationships). Here is how they work: basically, you choose types and abilities from the comboboxes, then click 'add' and they are added to the respective listboxes. To delete items from the listboxes, you just select an item and click 'delete'.
Here's another screenshot to clear up any doubts:
In the first screenshot I provided, you will see data for 'Bulbasaur'. PokémonID = 1 represents 'Bulbasaur'; TypeID = 1 and 12 are 'Grass' and 'Poison', respectively; and AbilityID = 1 is 'Overgrow'.
I was trying to create an update function (update_click) using SQL queries (SqlCommand, SqlDataReader and so on...), but without deleting all of a pokémon's associations to its types (and abilities) and then re-adding them based on the new state of the listboxes. I want to avoid that in order to save resources in cases where a pokémon may hold thousands of types and abilities...
Is it possible? If necessary, I can send you my C# project for more details.
I would suggest a combination of:
1) Use table-valued parameters to send all the data (in its present state in your listboxes) to your T-SQL query or stored procedure at once.
2) Consider using the EXCEPT and/or INTERSECT operators (as well as any necessary LEFT or RIGHT JOIN) to compare the contents of your table-valued parameter (essentially a table itself) with the data currently in the underlying tables.
3) UPDATE/DELETE/INSERT accordingly, as in the sketch below.
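A sketch of what that could look like for the types listbox. The dbo.IdList table type, the PokemonType table and the method name are assumptions for illustration; table-valued parameters need SQL Server 2008+, so a 2012 instance is fine:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// One-time setup in the database (names are hypothetical):
//   CREATE TYPE dbo.IdList AS TABLE (TypeID int NOT NULL PRIMARY KEY);

static void SyncPokemonTypes(SqlConnection con, int pokemonId, IEnumerable<int> typeIds)
{
    // Copy the listbox state into a DataTable shaped like dbo.IdList.
    var table = new DataTable();
    table.Columns.Add("TypeID", typeof(int));
    foreach (var id in typeIds)
        table.Rows.Add(id);

    const string sql = @"
        -- Insert associations present in the listbox but missing in the table...
        INSERT INTO PokemonType (PokemonID, TypeID)
        SELECT @PokemonID, t.TypeID FROM @TypeIDs t
        EXCEPT
        SELECT PokemonID, TypeID FROM PokemonType WHERE PokemonID = @PokemonID;

        -- ...and delete associations that are no longer in the listbox.
        DELETE pt FROM PokemonType pt
        WHERE pt.PokemonID = @PokemonID
          AND pt.TypeID NOT IN (SELECT TypeID FROM @TypeIDs);";

    using (var cmd = new SqlCommand(sql, con))
    {
        cmd.Parameters.AddWithValue("@PokemonID", pokemonId);
        var tvp = cmd.Parameters.AddWithValue("@TypeIDs", table);
        tvp.SqlDbType = SqlDbType.Structured;
        tvp.TypeName = "dbo.IdList";
        cmd.ExecuteNonQuery();
    }
}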
Essentially it sounds like what you'd like to do is send only the changes to the database:
add any abilities that were not there before;
remove any abilities that were in the database but are no longer in the listbox.
If that's the case, then what you need are simple set operations:
Set Union
Set Intersect
Set Difference
While you can perform these operations using simple arrays or lists, it is much more efficient to use an actual set implementation such as the generic HashSet<T>. With a correct implementation using sets or hash tables you can achieve linear-time performance.
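For example, to compute just the changes client-side before touching the database (idsFromDatabase and idsFromListbox are placeholder names):

using System.Collections.Generic;
using System.Linq;

// State as loaded from the database vs. current state of the listbox.
var oldIds = new HashSet<int>(idsFromDatabase);
var newIds = new HashSet<int>(idsFromListbox);

var toAdd    = newIds.Except(oldIds).ToList();  // set difference: new \ old
var toRemove = oldIds.Except(newIds).ToList();  // set difference: old \ new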
I hope this helps point you in the right direction.

High performance Custom user fields

Looking for examples/tutorials for custom user fields, not via EAV:
EAV is going to be problematic for various reasons, such as performance.
There are many base entities/tables, with over 100,000 records each.
There will likely be over a dozen attributes.
The records are to be displayed in a flat UI grid, including the custom fields, so flattening them while maintaining performance would be an issue.
Looking at enabling this via DDL, where all custom fields would go into a matching table such as
<tablename>_custom_<userid>
and each user attribute would map to a column, with all of its metadata stored in a metadata table.
Retrieval would be simpler, since the query would just be
select *
from <tablename> A, <tablename>_custom_<userid> B
where B.KeyField = A.KeyField -- (perhaps using an outer join; haven't gone that far yet)
Wondering if there are any gotchas down the road that I need to be aware of?
Of course, any samples/pointers would be helpful to kickstart the effort.
Specifically, I would appreciate any advice on using DDL with SQL Server Compact 4.
One technique I have seen used is a sort of 'hard-coded' EAV pattern. Don't hang up! It worked well with the dataset sizes you're talking about, and it didn't actually use EAV - it was only EAV-esque.
The idea is to have a set of tables to store these custom attributes, with some triggers (described below) on them. The custom attribute tables store metadata about each attribute (what table it goes with, data type, constraints, etc.). You can get very fancy with this, but I did not have the need.
The triggers on your meta-tables are there to regenerate the views that roll up base + extension into first-class objects within the DB. So instead of a person table plus an employee extension table, you have an employee view that includes both. When you drop a new value into the custom attribute tables, the triggers re-roll the views to include the new fields. If you wanted to go nuts, you could also have the triggers rewrite stored procedures. Depending on how your mid-tier code is structured, you would still be forced to re-code some of it, but that would be the case anyway if you apply rules that read the data.
In testing, I found that for the relatively small number of records you're talking about, performance was somewhat slower but followed roughly the same pattern of degradation (2x the number of records, ~2x as slow).
-- edits --
How I saw it done: you had a table that represented your first-class objects, with a row for 'person' and a row for 'employee', etc. We'll call that FCO. Then you had a secondary table that stored which tables made up each FCO. We'll call that Srcs. For Person there would be one row, the person table; for Employee there would be two rows, the person table and the employee extension. There is a third table, called Attribs, which stores the columns from those tables that constitute the FCO. For simplicity, we'll say Person has ID, Name and Address, and Employee adds Hire Date and Department, plus of course a PersonID referring back to the Person table. So: 2 rows in the FCO table (person and employee), 3 rows in the Srcs table, 8 rows in Attribs.
The view, which we'll call vw_Employee, selects PersonID, Name, Address, Hire Date and Department from the two tables. It is built by a SQL stored procedure we'll call OnMetadataChange.
This SP is fired (by trigger or batch process), and its purpose is to regenerate the CREATE VIEW statements. It iterates through every first-class object, collects which fields from which tables constitute the view, and issues a CREATE statement based on that. So OnMetadataChange produces a DROP and a CREATE for each view; it generates a dynamic SQL statement that is executed once per entry in the FCO table. It is preferable to do this with triggers, but that is not strictly necessary. Hopefully your FCO definitions won't change too often, and when they do there will probably be a code release as well; you can run your OnMetadataChange SP at that time.
The end result is a two-layer database. The views constitute the first-class-object layer, which is meaningful to the application; the application only uses the views. The tables constitute the 'physical' layer, which the application shouldn't care about. The meta-tables are essentially your mapping between the FCO layer and the physical layer. It takes some time to set up, but it's quite effective, and it gives you many of the benefits of EAV while at the same time giving you the concrete benefits of 3NF tables (indexability, etc.).
If you'd like, I can throw some sample SQL out there.
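For flavor, here is a stripped-down sketch of what one re-roll might look like, expressed as C# executing the generated DDL rather than the stored procedure described above. All names follow the Person/Employee example, and in the real system the column list would be read from Attribs instead of being hard-coded:

using System.Data.SqlClient;

static void RebuildEmployeeView(SqlConnection con)
{
    // DROP and CREATE must run in separate batches: CREATE VIEW has to be
    // the first statement in its batch.
    const string drop =
        "IF OBJECT_ID('dbo.vw_Employee', 'V') IS NOT NULL DROP VIEW dbo.vw_Employee;";
    const string create = @"
        CREATE VIEW dbo.vw_Employee AS
        SELECT p.ID AS PersonID, p.Name, p.Address, e.HireDate, e.Department
        FROM Person p
        INNER JOIN Employee e ON e.PersonID = p.ID;";

    using (var cmd = new SqlCommand(drop, con))
        cmd.ExecuteNonQuery();
    using (var cmd = new SqlCommand(create, con))
        cmd.ExecuteNonQuery();
}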
Part of the problem you are having is that you are trying to store schema-less data in a SQL database, which is not its strength. There are three approaches that would make your life far easier:
1) Have a column which stores the serialized custom fields, in whatever format is most convenient. For example, the column could store XML. The upsides are that you can stay on SQL Server Compact and pulling back a record is trivial. The downsides are that you always have to pull/push the entire XML blob to do an update, and it is difficult to impossible to query on any custom field.
2) Upgrade to SQL Server Express and use XML columns. This is nearly the same as the first suggestion, except that any full edition of SQL Server has native support for XML data: such columns can be indexed, and fields within the data can be used in queries.
3) Use a schema-less database like MongoDB or CouchDB. These databases are all about storing schema-less data, so your custom fields will be no different from any other field. As such, you can index and query custom fields. The upside is that custom data is incredibly easy to work with; the downside is that you would have to spend some time rethinking how you store data to fit their model.
If you do not need to query on custom fields, or if you can query them within your business logic, then the first option can work for you (a sketch follows below). In any other case, I would err towards something with more capabilities than Compact. If cost is the deciding factor, both SQL Server Express and MongoDB are free.
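As an illustration of the first option, the custom fields could be serialized into a single text column. Everything here (table name, column name, method names) is hypothetical, and since Compact treats the column as plain data, no server-side XML support is required:

using System.Collections.Generic;
using System.Data.SqlServerCe;
using System.Linq;
using System.Xml.Linq;

static string SerializeCustomFields(IDictionary<string, string> fields)
{
    // Produces e.g. <custom><field name="Color">Red</field></custom>
    return new XElement("custom",
        fields.Select(kv =>
            new XElement("field", new XAttribute("name", kv.Key), kv.Value)))
        .ToString(SaveOptions.DisableFormatting);
}

static void SaveCustomFields(SqlCeConnection con, int entityId,
                             IDictionary<string, string> fields)
{
    using (var cmd = new SqlCeCommand(
        "UPDATE MyEntity SET CustomFields = @xml WHERE Id = @id", con))
    {
        cmd.Parameters.AddWithValue("@xml", SerializeCustomFields(fields));
        cmd.Parameters.AddWithValue("@id", entityId);
        cmd.ExecuteNonQuery();
    }
}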
