Currently, I'm sitting on an ugly business application written in Access that takes a spreadsheet on a bi-daily basis and imports it into an MDB. I am converting a major project that includes this into SQL Server and .NET, specifically C#.
To house this information there are two tables (alias names here) that I will call Master_Prod and Master_Sheet, joined on ProdID, an identity key on the parent Master_Prod table. There are also two more tables to store history, History_Prod and History_Sheet. There are more tables that extend off of Master_Prod, but I'm keeping this limited to two tables for explanation purposes.
Since this was written in Access, the subroutine that handles this file is littered with manually coded "triggers" to deal with history, which have been a constant pain to keep up with; that's one reason I'm glad this is moving to a database server rather than a RAD tool. On the SQL Server side I am writing real triggers to handle history tracking.
My plan is/was to create an object modeling the spreadsheet, parse the data into it and use LINQ to do some checks client side before sending the data to the server... Basically I need to compare the data in the sheet to a matching record (unless none exists, then it's new). If any of the fields have been altered I want to send the update.
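A minimal sketch of that client-side check, assuming the spreadsheet has already been parsed into a list; SheetRow, ProdId, FieldA and FieldB are hypothetical stand-ins for the real spreadsheet model and the Master_Prod/Master_Sheet columns:

using System.Collections.Generic;
using System.Linq;

// Hypothetical shape standing in for a parsed spreadsheet row and the
// matching existing record; swap in the real columns.
class SheetRow
{
    public int ProdId;
    public string FieldA;
    public string FieldB;
}

static class SheetDiff
{
    // Splits parsed rows into new rows (no match) and changed rows, so
    // unchanged rows are never sent and the history trigger never fires.
    public static void Classify(
        IList<SheetRow> sheetRows,
        IDictionary<int, SheetRow> existingByProdId,
        out List<SheetRow> inserts,
        out List<SheetRow> updates)
    {
        inserts = sheetRows
            .Where(r => !existingByProdId.ContainsKey(r.ProdId))
            .ToList();

        updates = sheetRows
            .Where(r => existingByProdId.ContainsKey(r.ProdId))
            .Where(r =>
            {
                var match = existingByProdId[r.ProdId];
                return r.FieldA != match.FieldA || r.FieldB != match.FieldB;
            })
            .ToList();
    }
}

The two resulting lists can then drive separate insert and update batches, so identical rows never hit the server at all.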
Originally I was hoping to put this procedure into some sort of CLR assembly that accepts an IEnumerable list since I'll have the spreadsheet in this form already, but I've recently learned this is going to be paired with a rather important database server that I am very concerned about bogging down.
Is this worth putting a CLR stored procedure in for? There are other points of entry where data enters, and if I could build a procedure to handle them given the objects passed in, then I could take a lot of business rules away from the application at the expense of potential database performance.
Basically I want to take the update checking away from the client and put it on the database so the data system manages whether or not the table should be updated so the history trigger can fire off.
Thoughts on a better way to implement this along the same direction?
Use SSIS. Use Excel Source to read the spreadsheets, perhaps use a Lookup Transformation to detect new items and finally use a SQL Server Destination to insert the stream of missing items into SQL.
SSIS is a much better fit for these kinds of jobs than writing something from scratch, no matter how much fun LINQ is. SSIS packages are easier to debug, maintain and refactor than some DLL with forgotten sources. Besides, you will not be able to match the refinements SSIS has in managing its buffers for high-throughput Data Flows.
Originally I was hoping to put this procedure into some sort of CLR assembly that accepts an IEnumerable list since I'll have the spreadsheet in this form already, but I've recently learned this is going to be paired with a rather important database server that I am very concerned about bogging down.
That does not work. Any input into a C#-written CLR procedure STILL has to follow normal SQL semantics; all that can change is the internal setup. Any communication with the client has to be done in SQL, which means statement executions and procedure calls. There is no way to directly pass in an enumerable of .NET objects.
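The closest thing ADO.NET offers is a table-valued parameter: the client-side enumerable is serialized into a stream of SQL rows on the wire, which is consistent with the point above. A hedged sketch, assuming SQL Server 2008 or later and reusing the hypothetical SheetRow shape from the earlier sketch; dbo.SheetRowType and dbo.ImportSheet are made-up names for a table type and procedure that would have to exist on the server:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

static class TvpSender
{
    // Converts each SheetRow into a SqlDataRecord matching dbo.SheetRowType.
    static IEnumerable<SqlDataRecord> ToRecords(IEnumerable<SheetRow> rows)
    {
        var meta = new[]
        {
            new SqlMetaData("ProdId", SqlDbType.Int),
            new SqlMetaData("FieldA", SqlDbType.NVarChar, 100)
        };
        foreach (var row in rows)
        {
            var rec = new SqlDataRecord(meta);
            rec.SetInt32(0, row.ProdId);
            rec.SetString(1, row.FieldA ?? string.Empty);
            yield return rec;   // streamed one row at a time, never fully buffered
        }
    }

    public static void Send(string connStr, IEnumerable<SheetRow> rows)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("dbo.ImportSheet", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var p = cmd.Parameters.Add("@rows", SqlDbType.Structured);
            p.TypeName = "dbo.SheetRowType";
            p.Value = ToRecords(rows);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}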
My plan is/was to create an object modeling the spreadsheet, parse the data into it and use LINQ to do some checks client side before sending the data to the server... Basically I need to compare the data in the sheet to a matching record (unless none exists, then it's new). If any of the fields have been altered I want to send the update.
You probably need to pick a "centricity" for your approach - i.e. data-centric or object-centric.
I would probably model the data appropriately first. This is because relational databases (or even non-normalized models represented in relational databases) will often outlive client tools, libraries, and applications. I would probably start by trying to model in a normal form and think about the triggers to maintain audit/history, as you mention, during this time also.
I would typically then think of the data coming in (not an object model or an entity, really). So then I focus on the format and semantics of the inputs and see if there is a misfit in my data model - perhaps there were assumptions in my data model which were incorrect. Note that I'm not thinking of making an object model which validates the spreadsheet, even though spreadsheets are notoriously fickle input sources. Like Remus, I would simply use SSIS to bring it in - perhaps to a staging table, and then do some more validation before applying it to production tables with some T-SQL.
Then I would think about a client tool which had an object model based on my good solid data model.
Alternatively, the object approach would mean modeling the spreadsheet, but also an object model which needs to be persisted to the database - and perhaps you now have two object models (spreadsheet and full business domain) and database model (storage persistence), if the spreadsheet object model is not as complete as the system's business domain object model.
I can think of an example where I had a throwaway external object model kind of like this. It read a "master file" which was a layout file describing an input file. This object model allowed the program to build SSIS packages (and BCP and SQL scripts) to import/export/do other operations on these files. Effectively it was a throwaway object model - it was not used as the actual model for the data in the rows or any kind of navigation between parent and child rows, etc., but simply an internal representation for internal purposes - it didn't necessarily correspond to a "domain" entity.
Introduction:
I'm refactoring (pretty much rewriting) a legacy application in my current internship. The part this question is concerned with is the database it uses and the way data is retrieved from it.
The database structure is:
There's a table that has the main records. Let's say each record is a measurement. It has some info about the measured material and different measurement information.
There's a table view they use that has the same information columns, plus some extra columns that contain data calculated from the given measurements. It also filters out some of the data from the table.
So let's say we have the main table with columns:
Measurement ID
Measurement A
Measurement B
The view has something like this:
Measurement ID
Measurement A
Measurement B
Some extra data (for example Measurement A * Measurement B)
The guy leading the development only knows some SQL, so he likes adding new columns that are calculated from columns in the main table for experimenting, and this is definitely a need at the moment.
Requirements are:
Different types of databases should be supported (like SQL Server, Oracle, and probably some others).
The frontend should be able to show the view, which means even though some main columns will always stay the same, there may be some new columns including newly calculated values.
My question is:
What kind of system should I use to accommodate the needs of this application? I wanted to use Entity Framework, but the fact that the view may have new columns in the future is, I think, a problem. As far as I understand, I have to map my classes to the database before compiling.
The other thing that I'm considering is maybe using Entity Framework to get data from the main table and do the calculations and the filtering that is currently done in the table view directly in the frontend, and skip the view altogether. Which sounds fine, though I don't know if they will allow me to do that.
What would you do in my case? Please take into account that I have virtually no experience with databases and ORMs.
You are correct in that using Entity Framework will be a problem if the underlying DB schema is always changing. It will require you to update the EF model on your end every time to grab those new columns.
Ideally, all of your database access is hidden behind the interface to your DAL, so that your application doesn't need to know about which ORM is being used -- if any -- or which database it's connecting to.
I hate to say it, but given your requirements, an ORM might not make sense. You might want to go with something more generic without any strong typing. You could simply always return a DataTable to your application layer, and it could loop through the columns and values to display whatever is returned. If there are fields you know will never change, you could create a manual mapping for those fields only into your application object(s).
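As a rough illustration of that generic, weakly-typed approach (the provider name, connection string and view name would come from configuration; going through DbProviderFactories also keeps the door open for SQL Server, Oracle, etc.):

using System.Data;
using System.Data.Common;

static class GenericDal
{
    // Returns whatever columns the view currently has. The UI can then loop
    // over dt.Columns for headers and dt.Rows for values, so newly added
    // view columns show up without recompiling.
    public static DataTable GetView(string providerName, string connStr, string viewName)
    {
        var factory = DbProviderFactories.GetFactory(providerName);
        using (var conn = factory.CreateConnection())
        using (var cmd = conn.CreateCommand())
        using (var adapter = factory.CreateDataAdapter())
        {
            conn.ConnectionString = connStr;
            cmd.CommandText = "SELECT * FROM " + viewName; // viewName must come from trusted config, never user input
            adapter.SelectCommand = cmd;

            var dt = new DataTable();
            conn.Open();
            adapter.Fill(dt);
            return dt;
        }
    }
}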
You might have a look at NoSQL systems, which are a lot more flexible on the schema, or at a document database like RavenDB. All these systems allow the schema to change dynamically. You need to check the pros and cons to see if they can fulfill your requirements.
(This answer is a bit off-topic as it's about replacing SQL Server and not really about creating a DAL, but other answers cover the subject well and I would like to propose another way that may help.)
If your schema is unstable, then using Entity Framework as a beginner is going to be a headache. The assumption is that you can just refresh the design canvas periodically to let the tool handle database table changes. You can try that for a time to see when it becomes too much of a pain, but without any prior experience using ORMs or Entity Framework it may not be worth the effort.
I would probably use something like Rob Conery's Massive ORM (https://github.com/robconery/massive). It gives you more flexibility with the underlying database schema and is a very small library. I remember it being ~300 lines of code and very easy to use. It uses C# dynamics, so you'll have to be using C# 4.0 or later and be comfortable with that one concept, but IMO it's worth it for the low overhead. A full-fledged ORM like Entity Framework or NHibernate is going to cost a lot of learning cycles.
You could, of course, just stick to ADO.NET DataTables. They're a bit ugly and verbose, but they'll do the job.
You can use Entity Framework - Database First if the DB is changing. Of course, you will have to regenerate your classes when you want to be able to access new columns after the DB schema changes.
If you need to accommodate different database servers, then you should take a look into implementing a repository pattern and abstract all your data access that way.
Your comment
it involves write operations to the main table but the main table never changes
confirms what I was hoping for. It means you can use Entity Framework as the core of your application and a different route to display data.
Suppose that for display (of the view) you use a classic DataTable (because all common grids support them, contrary to displaying dynamic objects). I don't know how create/update/delete will be done, but saving changes will at some point involve mapping a DataRow to a MainEntity object. You can write one method for that like
MainEntity DataRowToEntity(DataRow row)
{
    var entity = new MainEntity();
    entity.PropertyA = (string)row["PropertyA"]; // DataRow indexer returns object; cast to the property's actual type
    // ... map the remaining fixed columns the same way
    return entity;
}
The MainEntity can be attached to a context, its status changed to Modified, and saved.
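A minimal sketch of that last step, assuming EF with DbContext (EF 6 namespaces; MyContext and MainEntities are hypothetical names, and changedRow is a DataRow edited in the grid):

using (var context = new MyContext())
{
    var entity = DataRowToEntity(changedRow);           // the mapping method above
    context.MainEntities.Attach(entity);
    context.Entry(entity).State = EntityState.Modified; // marks all columns dirty
    context.SaveChanges();                              // issues the UPDATE
}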
Sorry if this has been asked elsewhere, but I couldn't find a clear answer anywhere.
I have decided to begin learning to use relational databases a bit more, namely SQL. This is a major beginner's question, but it's probably essential to get started on.
I'm basically a little confused about the best practice on how to utilize SQL (or other databases). At college I have accessed databases (using JSON strings) for things such as mobile apps, but I have never actually designed and built a database myself, as my tutor made the database in question for us to access.
Let's say I have a C# application that holds genealogy information (i.e. families and their members) and I wanted to store each individual in a database. Would I simply use the structure I already have but save to fields in a database instead of an XML or text document? Or does it work the other way, i.e. do I create a database with required fields, then just retrieve this from the database in a C# application and manipulate the data as I so wish, so the application would be entirely different (so the C# application basically doesn't really hold/store any data and just works on what's fed from the database)?
What's troubling me is that usually, where I would store my C# objects in a dictionary or list for example, would I instead just retrieve straight from the database? Or retrieve from the database and store the data into a normal structure and work from there (surely this would defeat the point of fast searching in a database)?
I may be over-thinking it slightly. Hope that makes sense. Thanks in advance.
Would I simply use the structure I already...
or
do I create a database with required fields...
I think that is the crux of your question.
Starting from the database
For me, when building an application that uses a backend database, an Entity-Relationship diagram is pretty crucial. I found quite a nice little tutorial for you here: http://www.sum-it.nl/cursus/dbdesign/english/index.php3 but you can easily find one that suits your learning style. The key point is that you are trying to model the problem domain (the real world out there that needs your application) in a way that your application can somehow capture. Once you have an E-R diagram of related tables, it is easier to figure out the details. Using SQL Management Studio for SQL Server 2008 (Express edition) you can create a few basic tables and build the E-R diagram right there and have it generate relationships for you. You can then, at your leisure, examine the SQL used to achieve that and refine accordingly.
Personally, I always start by examining the problem domain, then I build the E-R diagram, then I build the database. I start building the C# application when I'm reasonably confident the database reflects the problem domain.
Starting from your C# application
However, what really matters is that you model the real world in a meaningful and effective way. In your case you already have a starting point in structures you've created in C# and you can use them to give you a starting point to build the E-R diagram. If you find it easier to get a C# application going and then build a database that reflects it, that should be fine. Perhaps you already have an approach that helps you capture the problem domain effectively. It's an iterative process whatever you do: building the C# code might reveal problems with the underlying database design and vice versa.
Diagramming - E-R or UML?
I'm personally convinced that this whole business is so complicated that you really need some diagrams.
to visualise your database, use an E-R diagram
to visualise your C# application use a UML class diagram
As you head towards a working application, you'll see how these 2 diagrams begin to match or at least reflect each other pretty closely. In both cases (entities or classes), understanding the relationships between objects will be really important when you query the database, because it is crucial to understand relationships between tables (especially using 1-to-many relationships to resolve a complex many-to-many relationship) and various techniques for joining tables in queries (INNER or OUTER joins etc.). No matter how clever your C# application is, you will at some point need to understand at least some of the complexities of the SQL language - and it is easier if you can refer to an E-R diagram.
Where to store?
What's troubling me is that usually, where I would store my C# objects in a dictionary or list for example, would I instead just retrieve straight from the database?
In the database, without a doubt. A C# class called Family would have a property FamilyName, say, with a setter method built in. If you discover a spelling mistake and want to change the name, the setter method would open a connection to the database, run an UPDATE query with the specified family name (and probably the family id) as parameters, and update the underlying field accordingly. Retrieving data would involve running a SELECT query etc.
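A rough sketch of that setter pattern with ADO.NET (the table and column names are hypothetical, and the connection string would come from configuration):

using System.Data.SqlClient;

public class Family
{
    private readonly int familyId;
    private readonly string connStr;
    private string familyName;

    public Family(int familyId, string familyName, string connStr)
    {
        this.familyId = familyId;
        this.familyName = familyName;
        this.connStr = connStr;
    }

    public string FamilyName
    {
        get { return familyName; }
        set
        {
            // Persist first with a parameterized UPDATE, then update the in-memory copy.
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "UPDATE Family SET FamilyName = @name WHERE FamilyId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@name", value);
                cmd.Parameters.AddWithValue("@id", familyId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
            familyName = value;
        }
    }
}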
Conclusion
Do some tutorials on how to examine a problem domain, create an entity-relationship diagram and build a set of related tables based on the diagram. I'm convinced that way you'll find it much easier to keep track of the C# classes that you build to communicate with the backend database.
Here's how a simple E-R diagram for families and their members might develop:
To begin with you might think members and family could be in one table, but then you discover that creates a lot of duplication, so you separate that out into a family table and a member table with a one-to-many relationship. But then you realise that, through marriage for instance, people can belong to more than one family, and you need to create a many-to-many relationship. I think the E-R diagram is the best place to work out that kind of complexity.
Not knowing what your structures look like or how your DB will be designed this is hard to answer. But you should be able to use existing data structures, and just pipe the data from the database instead of the XML file.
Look into LINQ; C# has strong libraries for interacting with both XML (LINQ to XML) and SQL (LINQ to SQL). They may be a bit confusing at first, but are very powerful once you learn them.
If I am right, you are also asking whether you should retrieve all the records from the database and store them as objects in a collection, or retrieve selected records from the database and use the dataset results without placing them in a purpose-defined structure.
I tend to select the records I want from the database and then load the results into my purpose-defined classes/structures. This allows you to add your manipulation methods to the class holding a record result etc. without needing to pass dataset results to each method. However, you will find yourself doing single-row updates all the time when a batch update might be more efficient... if that makes sense.
Take a look at Entity Framework Code First. If your data structures are classes in your application, there are techniques to create your database schema from them. As for the data: store it in your database and populate your lists and dictionaries with it, or populate a list of your genealogy Individual class with it.
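A minimal Code First sketch of what that could look like for the genealogy example (all class and property names are hypothetical; EF creates the tables, including the junction table behind the many-to-many, on first use):

using System.Collections.Generic;
using System.Data.Entity;   // EF Code First

// Families and individuals, with a many-to-many between them
// (an individual can belong to more than one family).
public class Individual
{
    public int IndividualId { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Family> Families { get; set; }
}

public class Family
{
    public int FamilyId { get; set; }
    public string FamilyName { get; set; }
    public virtual ICollection<Individual> Members { get; set; }
}

public class GenealogyContext : DbContext
{
    public DbSet<Family> Families { get; set; }
    public DbSet<Individual> Individuals { get; set; }
}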
If you want to write your own data classes, there's a free tutorial here written by myself. What I would definitely not do is use the data sources in ASP.NET, as these wizards are the Barty Crouches of the ASP.NET world - they appear good, but turn out to be evil, as inevitably you'll want to be able to tweak them and you won't understand how to do this.
Please excuse the noob question as I am new to integrating data with my applications. I've tried to find answers on the net, but not there yet.
I have an application I'm developing in C# on VS2010 which requires data in/out from a database. I am trying to figure out if it's a DataSet or an Entity Data Model I need to use when setting up a data source. My understanding was that it was the EDM which allowed me to treat tables/fields in a database as objects, but somehow it looks like I can do that with a DataSet too.
Some sources explain that a DataSet makes a cached copy of the Database which can then be manipulated.
Essentially my question is which should I use and what are the (dis)advantages of one over the other.
You have several options open to you when it comes to storing and retrieving data to/from a database:
At the very simplest level, use ADO.NET to open a connection to the DB, create a command and execute it. If you expect results back (i.e. SELECT ...) then you could call the command's ExecuteReader(...). Working in this manner results in very quick execution and the minimum of overhead, but you have to do more of the heavy lifting. If your app is simple, this is probably a good way to go. If your app is, or is likely to be more complex, you may want to consider other options...
ADO.NET DataSets are a reasonable DB IO mechanism, particularly for reading data from a DB. However, they can be a little cumbersome when trying to update the DB.
You could use an Object-Relational Mapper (ORM) like nHibernate or Entity Framework, but, frankly, that often results in your learning curve increasing dramatically while you figure out how to plug together the moving parts and make them work well together.
You might also consider a new variant of Entity Framework called Code First (CF): This allows you to pretty much design your code and CF will generate your EDM and handle the majority of the DB operations required for you to build your system. Scott Hanselman wrote up a nice intro into EF CF.
Having used practically every DB API and ORM on Windows over the last 20+ years, I am delighted with how CF is shaping up! EF 4.3, which shipped just a couple of weeks ago, includes some key new improvements to CF, including migrations, which allow you to handle changes to your DB schema as it evolves. I've built 3-4 systems using EF CF over the last couple of months and am very happy - it's my favorite relational database IO mechanism at present.
If you want to really get into EF CF, I strongly recommend Julia Lerman's book EF CF - it's a short, nicely written, very useful guide that should take you no more than a day or two to work through the main sections of.
Hope this helps.
If you add a LocalDB data source to your project (because you want a small local database file) then when the Data Source Configuration Wizard pops up, it explicitly asks you whether you want to use a Dataset or Entity Data Model database model. Is this the situation you were facing? That was the problem I had that brought me to this entry.
There is no question that for an enterprise class application, or a website, you would want to investigate ADO.NET or an ORM, but it doesn't help answer this question, which has to do with what are the differences between choosing Dataset vs Entity Data Model in the wizard.
Essentially, Entity Data Model is the more recent technology. If you are unfamiliar with DataSet, then this is probably not the time to start using it.
If you're asking what are the pros and cons for ADO.NET (DataSet) vs EntityFramework (Entity Data Model) then there is a discussion that may help at ADO.NET Entity Framework or ADO.NET
EF will get you up and running pretty quickly, but in my (very limited) experience it's been a pain to maintain.
What is it that has determined that these are your only two options? There are far more available to you including many ORMs.
If your application is supporting a business, then queries get complex pretty soon. In such scenarios, stored procedures save a lot of time, are much easier to maintain, and work better with ADO.NET. In almost all scenarios, I would suggest using stored procedures and ADO.NET. Move as much of the business rules and logic into stored procedures as you can... much easier to maintain that way.
Use DataSets (DataTables) only to retrieve and read data. Any data that needs to be saved to the database should be directly manipulated in the database... no point doing it in a dataset and then saving the same. In a multi-user environment it is almost always better to save the changes to the database as soon as the user has clicked "save".
You may (should) use business objects within the application for business-logic processes.
Let us take a simple example of where you are saving a Contact (name, phone, email, address etc) and then retrieving a list of contacts added today...I would suggest you do it as follows:
1) Adding the contact - Client (web or otherwise) collects data --> data is saved in a Contact business object --> validate Contact object --> call the repository layer to save the Contact object (adding a repository layer is useful but not necessary to keep the data layer abstracted from the client) --> repository calls the data layer to save the contact object (here a simple ADO.NET call, using a Command object, can be made to call the stored procedure that saves the contact in the database). No dataset was used in this use case. (A sketch of both steps follows step 2.)
2) Retrieving the list of contacts -- Client calls the repository layer to get the list of contacts --> repository layer calls the data layer to retrieve the data --> here the list of data is retrieved as a dataset (datatable) --> return the datatable back to the client and let the client read the data directly from the datatable while rendering it. Even a single contact can be retrieved as a dataset.
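A rough sketch of both steps in plain ADO.NET (the stored procedure names, their parameters, and the connection string are hypothetical):

using System.Data;
using System.Data.SqlClient;

static class ContactRepository
{
    // Step 1: save by calling a stored procedure directly.
    public static void Save(string connStr, string name, string phone)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("usp_SaveContact", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Name", name);
            cmd.Parameters.AddWithValue("@Phone", phone);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Step 2: read into a DataTable the client can render directly.
    public static DataTable AddedToday(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("usp_GetContactsAddedToday", conn))
        using (var adapter = new SqlDataAdapter(cmd))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var dt = new DataTable();
            adapter.Fill(dt);   // the adapter opens/closes the connection as needed
            return dt;
        }
    }
}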
P.S.: An ORM is almost always overkill. It is almost always used because certain developers like to keep everything object-oriented... so an extra layer gets added even though it does nothing useful (IMHO).
But what if you have business logic (stored procedures) which can be used in many different applications?
So it depends: whether you are making your application for different users with different backend storage, or making many applications for users who don't change their backend storage so often.
It is very important to have database integrity and rules independent from the application (whether built in-house or outsourced).
What is the standard way of copying data from one Oracle database to another?
1) Read data from the source table and copy it to a temp table on the destination using configuration (i.e. there is more than one table, and each table has a separate temp table).
2) Right now there is no CLOB data, but in the future CLOB data might be used.
3) Read everything into memory (if the data is large, read it in chunks).
Should not use Oracle links
Should not use files
The code should use only C#, not any database procedures.
One way that I've used to do this is to use a DataReader on the source database and just perform inserts on the target database (using Bind Parameters for sure).
Note that the DataReader is excellent at not using much memory as it moves through a table (I believe that by default it uses a Fast Forward, Read Only cursor). This means that only a small amount of data is held in memory at a given time.
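A minimal sketch of that reader-to-insert loop, assuming the ODP.NET managed provider (Oracle.ManagedDataAccess.Client); SRC_TABLE, DEST_TABLE and the column names are placeholders for the configured table pairs:

using Oracle.ManagedDataAccess.Client;

static class OracleCopy
{
    // Streams rows from the source with a DataReader and inserts them into
    // the target with bind parameters (bound by position here).
    public static void CopyTable(string srcConnStr, string destConnStr)
    {
        using (var src = new OracleConnection(srcConnStr))
        using (var dest = new OracleConnection(destConnStr))
        using (var read = new OracleCommand("SELECT id, name FROM SRC_TABLE", src))
        using (var write = new OracleCommand(
            "INSERT INTO DEST_TABLE (id, name) VALUES (:id, :name)", dest))
        {
            src.Open();
            dest.Open();

            var pId = write.Parameters.Add("id", OracleDbType.Int32);
            var pName = write.Parameters.Add("name", OracleDbType.Varchar2);

            using (var reader = read.ExecuteReader())   // forward-only, low memory
            {
                while (reader.Read())
                {
                    pId.Value = reader.GetInt32(0);
                    pName.Value = reader.GetString(1);
                    write.ExecuteNonQuery();
                }
            }
        }
    }
}

If throughput becomes a problem, ODP.NET also supports array binding (OracleCommand.ArrayBindCount) to send many rows per round trip instead of one insert per row.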
Here are the things to watch out for:
Relationships
If you're working with data that has relationships, you're going to need to deal with that. There are two ways that I've seen to deal with this:
Temporarily drop the relationships in the target database before doing the copy, then recreate them after.
Copy the data in the correct order for the relationships to work correctly (this is usually pretty difficult / inefficient)
Auto Generated Id Values
These columns are usually handled by disabling the auto increment functionality for the given table and allowing identity insert (I'm using some SQL Server terms, I can't remember how it works on Oracle).
Transactions
If you're moving a lot of data, one huge transaction will be expensive; consider committing in batches.
Repeatability / Deleting Target Data
Unless you're way more awesome than the rest of us, you'll probably have to run this thing more than once (at least during development). That means you might want a way to delete the target data.
Platform Specific Methods
In SQL Server, there are ways to perform bulk inserts that are blazingly fast (by giving up little things like referential integrity checking). There might be a similar feature within the Oracle toolset.
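On the .NET side, the SQL Server feature is exposed as SqlBulkCopy, which can consume a DataReader directly; I believe the unmanaged ODP.NET provider offers a similar OracleBulkCopy class. A fragment (destConnStr, sourceReader and the table name are placeholders):

using System.Data.SqlClient;

// Streams rows straight from an open reader into the target table.
using (var bulk = new SqlBulkCopy(destConnStr))
{
    bulk.DestinationTableName = "DEST_TABLE";
    bulk.BatchSize = 5000;            // tune per workload
    bulk.WriteToServer(sourceReader); // never materializes the whole table
}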
Table / Column Metadata
I haven't had to do this in Oracle yet, but it looks like you can get metadata on tables and columns using the views mentioned here.
I'm working on an application that uses SQL Server and NHibernate. We have the concept of default data (complex entities) that needs to be created for each new entity. This data can be changed on a per-user basis. However, we're struggling with the best way to create this data.
For example, let's say my application has a Store entity which has several default Products that I want to create when a new Store gets created. Anything about a Product can be modified by managers of each Store.
As I see it, there are two main options:
Keep the default data in code and write it to the database once the new entity is created.
Keep the default data in the database and move it over with a stored procedure/raw SQL when the entity is created.
Instinctively, I lean toward option two, since databases are great at moving and manipulating sets of data, and option one would require a ton of messy code that could get out of hand.
However, writing a stored procedure or raw SQL presents its own issues:
We would have to re-write the stored procedure or SQL depending on the database we're using
We would be subverting the ORM in a way (not sure if this is actually wrong). That is, we'd be moving data around without using NHibernate
I found this article by Ayende Rahien which outlines how to perform a bulk delete. I am thinking that doing something similar for inserting default data would be fine. I also found an NHibernate users group post (called "Schema export and default data" -- SO won't let me post two links) that describes a similar situation, but it doesn't seem like there's a consensus on what the right solution is (although Ayende does offer some feedback and suggests that the data live in the database).
After writing this, I'm leaning even more toward using a stored procedure; I'm just worried about possible pitfalls of mixing two database access strategies (directly calling sprocs and using an ORM).
Any feedback is appreciated!
Edit: Removed "immutable" language. I'm specifically talking about default data that can change so I think this term was incorrect/confusing here.
I would create a default data service that creates that data in code, and use a factory that creates your store and calls the default data service to generate the default entities.
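A minimal sketch of that shape, reusing the question's Store and Product entities (the interface and factory names are hypothetical, and Store is assumed to expose a Name and a Products collection):

using System.Collections.Generic;

public interface IDefaultDataService
{
    IEnumerable<Product> CreateDefaultProducts(Store store);
}

public class StoreFactory
{
    private readonly IDefaultDataService defaultData;

    public StoreFactory(IDefaultDataService defaultData)
    {
        this.defaultData = defaultData;
    }

    public Store Create(string name)
    {
        var store = new Store { Name = name, Products = new List<Product>() };
        foreach (var product in defaultData.CreateDefaultProducts(store))
            store.Products.Add(product); // cascaded to the DB when NHibernate saves the store
        return store;
    }
}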
Using a Stored Procedure definitely defeats the point of having an ORM.