I have a Router that saves changes from incoming messages to a database used by our CRM software. The Router uses LINQ to SQL for communication. We just released a completely revised version of the software, built from the ground up, which runs on a different database.
Rather than maintaining two Routers with almost identical code, we want to change the code to work on either database, and change the context dynamically. This requires having a DataContext for each of the two formats.
My question is, can I have a DataContext for a database that doesn't exist on the system as long as the context is never used? I plan on refactoring all database communication to a single dll, and use two different classes that implement the same interface to access the two different databases. I will then only call methods in the correct class, but the dll would hold both DataContexts.
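The interface-per-database approach described here could be sketched as follows. All names (IMessageStore, LegacyCrmStore, NewCrmStore, the factory) are hypothetical; the point is that the class wrapping the unused DataContext is never instantiated, so its database never has to exist on that system:

```csharp
using System;

// Hypothetical sketch: both classes live in the shared DLL; only the one
// matching the configured database is ever constructed, so the other
// DataContext is never created and its database need not be present.
public interface IMessageStore
{
    string Describe();   // stand-in for the real save/update methods
}

public class LegacyCrmStore : IMessageStore
{
    // Would wrap the LINQ to SQL DataContext generated from the old schema.
    public string Describe() { return "legacy"; }
}

public class NewCrmStore : IMessageStore
{
    // Would wrap the DataContext generated from the new schema.
    public string Describe() { return "new"; }
}

public static class MessageStoreFactory
{
    // Pick the implementation (e.g. from configuration); only that one
    // implementation, and only its DataContext, is ever instantiated.
    public static IMessageStore Create(bool useNewDatabase)
    {
        if (useNewDatabase)
            return new NewCrmStore();
        return new LegacyCrmStore();
    }
}
```

The calling code only ever sees `IMessageStore`, so switching databases is confined to the factory.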
Thanks.
I have an SQL Server instance with two databases attached. One is an MS SQL database and the other is a linked server (ODBC) which is an indexed file system (Vision). Let's say the Customer table exists in both DBs and should be kept identical. I will populate fields in my application from the linked server, and if any changes are made they should be written to both databases. Field names may also differ between the two DBs.

I use ADO connections in the application and would normally use adapter.Update if I were working with only one DB. As I will be doing quite a lot of DB calls throughout the application, I would prefer to make a data-handling class that takes care of this and leaves me with a simple call to that class. I was also thinking of making some kind of DB transaction to ensure both systems stay identical.
Does anybody have a suggestion on how to approach this?
I'm thinking you can have 2 separate projects for handling the DataLayer (one for each db) and expose them through a Facade/Adapter that will handle delegating the CRUD operations to both of them, also handling the necessary conversions (you mentioned the fields are not named the same).
In the Facade/Adapter you can also implement Retry Logic and Transactions to ensure both data sources are in sync.
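A minimal sketch of such a Facade, assuming hypothetical names throughout (ICustomerLayer, the field map, the column names) — it delegates each write to both data layers and translates SQL column names to their Vision equivalents:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical facade over the two data-layer projects. Each write goes to
// both layers; field names are translated for the linked-server (Vision) side.
public interface ICustomerLayer
{
    void Update(IDictionary<string, object> fields);
}

public class CustomerFacade
{
    private readonly ICustomerLayer _sql;
    private readonly ICustomerLayer _vision;

    // Maps SQL column names to their Vision equivalents (illustrative values).
    private static readonly Dictionary<string, string> FieldMap =
        new Dictionary<string, string> { { "CustomerName", "CUST_NM" } };

    public CustomerFacade(ICustomerLayer sql, ICustomerLayer vision)
    {
        _sql = sql;
        _vision = vision;
    }

    public void Update(IDictionary<string, object> fields)
    {
        // Retry logic / a transaction scope would wrap these two calls.
        _sql.Update(fields);
        _vision.Update(Translate(fields));
    }

    // Renames fields according to the map; unmapped fields pass through as-is.
    public static IDictionary<string, object> Translate(IDictionary<string, object> fields)
    {
        var result = new Dictionary<string, object>();
        foreach (var kv in fields)
        {
            string mapped;
            result[FieldMap.TryGetValue(kv.Key, out mapped) ? mapped : kv.Key] = kv.Value;
        }
        return result;
    }
}
```

The application only ever calls the facade, so the dual-write and naming concerns stay in one place.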
I made an application that generates reports based on data from a database.
The functionality of my application is correct, but I have the following problem: my client has 2 identical databases - one for testing and one for the actual work he does.
My application should work with both databases (it should have a "switching mechanism"), but I don't know how to implement it.
I know that I could just switch between connection strings but the problem is that in my reports I use datasets that are bound to one database.
Is it possible to fill those datasets with the data from both databases (since the databases are identical in schema, it should be possible), and how would that be done, or do I have to use duplicate dataset/report pairs?
I'm using C# in VS 2010 with SQL Server 2005, and .rdlc for my reports.
Thanks.
Ideally you should be able to change the connection string in one place and have it take effect project-wide.
This will work ONLY IF you get the connection string from one place. Keep it in the app.config file.
See this article to see how you can store and read the connection string from the app.config file.
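As a sketch, the app.config entry looks something like this (the name and connection details are illustrative):

```xml
<configuration>
  <connectionStrings>
    <add name="ReportsDb"
         connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=ReportsTest;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

It can then be read with `ConfigurationManager.ConnectionStrings["ReportsDb"].ConnectionString` (add a reference to System.Configuration). Switching between the test and live databases is then just a matter of editing this one entry.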
You've hit upon the reason why people implement the Repository pattern or at least a version of it.
You really need to move your business logic away from the database, so that it is database agnostic. It shouldn't care where the data comes from, only what it is.
From what you've said, the implication is that your client wants no more than a change to the app.config connection string used for database access.
If that is so then, although I know it will entail some work, your best bet is a singleton-type class that controls all data access to and from your data layer.
Using a known interface, you can use a factory pattern to create access to your development or live database at runtime (perhaps based on an app.config setting), or even to a test class that has no database access at all and just returns hard-coded test data.
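A rough sketch of that factory, with hypothetical names — one interface, a SQL-backed implementation whose connection string would come from app.config, and a fake that returns hard-coded data with no database at all:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical repository interface the business logic codes against.
public interface IReportRepository
{
    IList<string> GetCustomerNames();
}

public class SqlReportRepository : IReportRepository
{
    private readonly string _connectionString;

    public SqlReportRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<string> GetCustomerNames()
    {
        // The real ADO.NET/LINQ query against _connectionString would go here.
        throw new NotImplementedException();
    }
}

public class FakeReportRepository : IReportRepository
{
    // No database access at all: canned data for testing.
    public IList<string> GetCustomerNames()
    {
        return new List<string> { "Alice", "Bob" };
    }
}

public static class RepositoryFactory
{
    // "mode" would typically come from an app.config setting.
    public static IReportRepository Create(string mode)
    {
        if (mode == "live")
            return new SqlReportRepository("<live connection string from app.config>");
        if (mode == "test")
            return new SqlReportRepository("<test connection string from app.config>");
        return new FakeReportRepository();
    }
}
```

Because callers only see `IReportRepository`, switching between live, test, and fake is a one-line configuration decision.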
I'm building a WPF, C#, .NET solution that needs to rapidly change data connections.
All the connections will eventually sync back to a parent Oracle database, but in the meantime I might be pulling data from an Access database, a local SQL Server Compact database, an XML file, or even a web service or SharePoint. The problem is, I might also need to add providers, and I need to be able to keep the providers in sync with each other, in real time, with no loss of connection, seamless to the users. This all depends on the type of machine, the domain, and the kind of network connectivity we have, and it is a client requirement, not something I can change.
Does anyone have a good recommendation for what the best way to accomplish this would be?
Has the client explained why they need this functionality? Often clients ask for things to solve a problem they foresee, without adequate knowledge of how best to solve it.
If you're coding something like an application for a travelling salesman to use, which will have intermittent connectivity to the Oracle database, then maybe you should look at using some means other than a direct database connection for synchronising the databases.
You could, say, use a WCF/SOAP service to pass serialized data objects back and forth, or look at using MSMQ to transfer changes between the intermittently connected mobile application and the Oracle database server. It would, of course, mean that you'll need to run a server-side application/service to handle this data and pass it into the Oracle database, but it would allow intermittent connections to be handled more easily without having to write database connection error logic.
In the meantime, your client code should be layered to use a factory/Repository-type pattern. As the business logic just calls an interface, it is then possible to use database-specific code within your data layer that is decided upon at run time (say through a config setting).
You can create a table on one of your servers which holds all the connection strings, plus the conditions that determine which connection to use for which data. Your business layer will then be independent of the connections.
Another option is to use the software factory pattern: keep the repository independent of connections and decide at run time which repository will connect to which DB.
I have probably written the same LINQ to SQL statement 4-5 times across multiple projects. I don't even want to have to paste it. We use DBML files combined with Repository classes. I would like to share the same Library across multiple projects, but I also want to easily update it and ensure it doesn't break any of the projects. What is a good way to do this? It is OK if I have to change my approach, I do not need to be married to LINQ to SQL and DBML.
We have both console apps and MVC web apps accessing the database with their own flavor of the DBML, and there have been times when a major DB update has broken them.
Also, currently each project accesses the DB directly, and the DB is sometimes on another server. Would it be possible to eliminate the DB layer from each project altogether? It might help with the problem above, and it would be better for security and data integrity if I could manage all database access through a centralized application that my other applications could use directly rather than calling the database themselves.
Any ideas?
The way I handle this is using WCF Data Services. I have my data models and services in one project and host this on IIS. My other projects (whatever they may be) simply add a service reference to the URI and then access data it needs over the wire. My database stuff happens all on the service, my individual projects don't touch the database at all - they don't even know a database exists.
It's working out pretty well but there are a few "gotchas" with WCF. You can even create "WebGet" methods to expose commonly used methods via the service.
Let me know if you want to see some example code :-)
Are there standard messaging protocols or APIs available to keep databases in sync? Or, alternatively, APIs for creating and parsing messages?
Our company is working with another company to provide two different software packages to two different kinds of users. The data sits in two separate databases but parts of it have to remain in sync.
Their system is pretty much a black box to us. And vice versa.
So what would be required is to track updates, turn these into messages, send them to a web service, map them back to the destination database fields, and commit them.
The database schemas do not match.
I am aware that we are going to have to roll most of this ourself, but some ideas around messaging or techniques would be good.
One solution: SQL Server Integration Services (SSIS). It appeared in SQL Server 2005; in SQL Server 2000 it was called DTS (Data Transformation Services). It was created to import/export/transform data from one point to another, and it is really easy to use from SQL Server 2005 onwards (DTS was quite horrible).
So basically, you will have to write packages that import data from their database and transform, filter, etc. it into exactly what you need before inserting it into your database. And vice versa.
Regarding the black-box issue, you should generate their database's relational design to make the mapping easier.
EDIT
Just in case you need to install it: I remember bugs in the SQL Server 2005 installer where SSIS was not installed at all. I had to satisfy all the warnings in the installer's system-requirements step to get it.
You have two problems:
track the changes that have to be synced
apply the changes to the peer
There is a solution that addresses both issues, and I'm sure you are aware of it: replication. Merge replication would allow both sites to update the data and would also provide merge-conflict resolution. But replication only works when the table schemas are similar, and it puts a big constraint on development, as schema changes have to be carefully coordinated between the sites. In practice, when the sites are operated by independent companies, it is quite difficult to maintain in the long term.
If you want to roll your own the change tracking part has built in support in SQL Server:
Change Tracking
Change Data Capture
Both can be used in a sync solution as a means of detecting what changed.
Applying the changes can be handled by a web service, but there is also a built-in solution in SQL Server that allows for far higher scalability and throughput: Service Broker. Relying on a message-based API for sync allows the two sites to evolve at their own pace and change their schemas almost at will, as long as the communication API (the message protocol) remains unchanged.
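To illustrate the Change Tracking side, here is a sketch of building the polling query. The table and key column names are illustrative, and in a real application the query would be run via SqlCommand (with the version as a parameter) against a database where change tracking has been enabled:

```csharp
using System;

// Sketch: composing the query that polls SQL Server Change Tracking.
// CHANGETABLE(CHANGES table, version) returns rows changed since "version";
// the left join picks up current column values (deleted rows come back null).
public static class ChangeTrackingSync
{
    public static string BuildQuery(string table, long lastSyncVersion)
    {
        return "SELECT ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION, t.* " +
               "FROM CHANGETABLE(CHANGES " + table + ", " + lastSyncVersion + ") AS ct " +
               "LEFT JOIN " + table + " AS t ON t.Id = ct.Id";
    }
}
```

Each sync pass would remember the highest SYS_CHANGE_VERSION it processed and feed it back in as `lastSyncVersion` on the next pass.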
The answers provided give me some good ideas, but I think we are going to end up doing something a bit different.
We are using MSMQ, and defining a standard messaging system which we will roll ourselves.
As to how we will know what things have changed I am not sure at the moment.
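For the hand-rolled messaging format, something like the following envelope could be serialized into the MSMQ message body. All field names here are illustrative assumptions, not a decided design:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical change-message envelope for the MSMQ queue; the queue would
// carry the serialized XML as the message body.
public class ChangeMessage
{
    public string Entity { get; set; }       // e.g. "Customer"
    public string Operation { get; set; }    // "Insert" | "Update" | "Delete"
    public string Key { get; set; }          // source system's key for the row
    public DateTime ChangedAtUtc { get; set; }
}

public static class ChangeMessageSerializer
{
    private static readonly XmlSerializer Serializer =
        new XmlSerializer(typeof(ChangeMessage));

    public static string ToXml(ChangeMessage msg)
    {
        using (var writer = new StringWriter())
        {
            Serializer.Serialize(writer, msg);
            return writer.ToString();
        }
    }

    public static ChangeMessage FromXml(string xml)
    {
        using (var reader = new StringReader(xml))
        {
            return (ChangeMessage)Serializer.Deserialize(reader);
        }
    }
}
```

Keeping the envelope schema-neutral (entity name, operation, key) is what lets each side map it onto its own database fields.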