Synchronize a client database with the central database - C#

I need to update existing data or insert new data from a client database (say DB1) into a central database (say DB2). Both hold the same schema and both databases reside on the same machine. The updates are not bidirectional; I just want changes to be reflected from the client (DB1) to the server (DB2).
The client database (DB1) is nothing but a backup database (a full database backup consisting of mdf and ldf files) that has been attached to the same server where the central database (DB2) exists. I am not going to make any changes to the backup database (DB1) once it is attached to the server. The backup database (DB1) already has the modified data that I want to apply to the central database (DB2). How do I do this programmatically using C# .NET? Can you give any example code?
I have tried transactional replication with a push subscription, without sending the snapshot. The problem is that I want the modified data already present in DB1 to be pushed to the central database (DB2) at the first shot, but transactional replication will not allow me to do so: it will not send any modified data that is already present in DB1. So the initial data in DB1 is untouched when you try to send without a snapshot. The backup database (DB1) already has the modified data prior to replication. How do I tackle this, given that I am not going to insert new data or modify data in the backup database (DB1) after I set up replication?
Thanks and regards,
Pavan

Microsoft Sync Framework is the best solution, especially if you are using Express editions (in which case replication will not work, as Express editions cannot act as a publisher).
Sync Framework is quite straightforward if used with SQL Server change tracking in SQL Server 2008. You can define your mode of synchronization as well (bidirectional, upload only, download only) and also define what happens when there are conflicts (for instance, when constraints get violated).
And yeah - just Google for an example; there are several straightforward walkthroughs available on the topic, including peer-to-peer synchronization (which might be the one you require) and client-server synchronization (where the client should be SQL Server Compact Edition).
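For the upload-only case described in the question, a minimal sketch might look like this, assuming Sync Framework 2.1 with the SqlSyncProvider database providers; the scope name ("MyScope"), the table ("Orders"), and the connection strings are placeholders. Note that when a scope is provisioned over a database that already contains rows, those existing rows are tracked too and can go out on the first sync, which is the "first shot" behaviour the question asks about:

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class UploadOnlySync
{
    static void Main()
    {
        var clientConn = new SqlConnection("Data Source=.;Initial Catalog=DB1;Integrated Security=True");
        var serverConn = new SqlConnection("Data Source=.;Initial Catalog=DB2;Integrated Security=True");

        // Describe the scope (the set of tables to sync) from the server schema.
        var scopeDesc = new DbSyncScopeDescription("MyScope");
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", serverConn));

        // Provision both databases once; this creates the tracking objects.
        var serverProvision = new SqlSyncScopeProvisioning(serverConn, scopeDesc);
        if (!serverProvision.ScopeExists("MyScope"))
            serverProvision.Apply();

        var clientProvision = new SqlSyncScopeProvisioning(clientConn, scopeDesc);
        if (!clientProvision.ScopeExists("MyScope"))
            clientProvision.Apply();

        // Upload-only: changes flow from DB1 (local) to DB2 (remote) only.
        var orchestrator = new SyncOrchestrator
        {
            LocalProvider = new SqlSyncProvider("MyScope", clientConn),
            RemoteProvider = new SqlSyncProvider("MyScope", serverConn),
            Direction = SyncDirectionOrder.Upload
        };

        SyncOperationStatistics stats = orchestrator.Synchronize();
        Console.WriteLine("Changes uploaded: {0}", stats.UploadChangesTotal);
    }
}
```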

You may also want to explore SQL Server's merge replication functionality. It is the replication type designed to allow satellite databases to automatically post back their results to a central repository.

To achieve this you have the following options:
1.) Use SQL Server transactional replication. Make DB1 the Publisher and DB2 the Subscriber, and go for a pull- or push-based subscription. All changes in DB1 will simply be reflected to the central database. If any changes were made in the central database to the same tuple, they will be overwritten by DB1's changes.
Advantages: Easy to implement and reliable
Disadvantages: Very little customization
2.) Use Microsoft Sync Framework SQLDataBaseProvider.
Advantages: Very Flexible
Disadvantages: I have heard bad things about it, but have never tried it myself.
3.) Custom implementation: This is a bit harder, as you need to track changes on DB1 yourself. One option is reading the transaction log, which is what transactional replication does internally; another is to use triggers to build up knowledge of the changes. You then need to write a library or routine that reads that change knowledge and applies it to the central database (a sketch follows below).
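A hypothetical sketch of option 3, assuming a trigger on DB1 (not shown) logs changed primary keys into a DB1.dbo.ChangeLog table; since both databases live on the same instance, a single cross-database MERGE can replay the changes. All table and column names are invented for illustration:

```csharp
using System;
using System.Data.SqlClient;

class CustomChangeApplier
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=DB2;Integrated Security=True"))
        {
            conn.Open();
            // Replay every row whose key the trigger logged: update it if the
            // central copy exists, insert it otherwise.
            var cmd = new SqlCommand(@"
                MERGE DB2.dbo.Orders AS target
                USING (SELECT o.* FROM DB1.dbo.Orders AS o
                       JOIN DB1.dbo.ChangeLog AS c ON c.OrderId = o.OrderId) AS source
                   ON target.OrderId = source.OrderId
                WHEN MATCHED THEN
                    UPDATE SET target.Amount = source.Amount
                WHEN NOT MATCHED THEN
                    INSERT (OrderId, Amount) VALUES (source.OrderId, source.Amount);", conn);
            int rows = cmd.ExecuteNonQuery();
            Console.WriteLine("{0} rows merged into the central database.", rows);
        }
    }
}
```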
Edit:
For backing up and restoring a database programmatically:
http://www.mssqltips.com/tip.asp?tip=1849

Related

PerformPostRestoreFixup - Can this convert a server database backup to a client database?

I'm trying to use Sync Framework to synchronize large databases, but given the sizes of the databases, it is really painful to deprovision and reprovision when there are schema changes. Since the project is in the development stage, I want a fixed solution to provision the client database without wasting time.
My question is: is it possible to restore a provisioned server DB as the client DB and run PerformPostRestoreFixup on the client DB to save initial sync time? (And vice versa?)
Yes, that's your only other alternative for initialising new replicas with pre-loaded data. (The other one is generating snapshots via SQL CE.)
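A minimal sketch of that fixup step, assuming the Sync Framework 2.1 SqlSyncStoreRestore class; the connection string is a placeholder. It is run once against the freshly restored client database so the restored replica gets its own sync identity:

```csharp
using System.Data.SqlClient;
using Microsoft.Synchronization.Data.SqlServer;

class PostRestoreFixup
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=ClientDB;Integrated Security=True"))
        {
            conn.Open();
            // Fix up the sync metadata that was restored from the server backup.
            var restore = new SqlSyncStoreRestore(conn);
            restore.PerformPostRestoreFixup();
        }
    }
}
```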

.net windows application store data offline and store to db when there is network

I am developing a Windows application for agricultural purposes. This application will be used by multiple users to maintain the data. The main issue is that there won't be network connectivity at the work location; however, by the end of the day the users can go somewhere with connectivity and synchronize, if there is an option to do so.
I just want to know how we can import and store all the data locally and update it to the central database when there is a network connection.
The option that I thought of is to have SQL Server on every machine that runs this application, storing the data in a local database when there is no network,
and having a separate button to export the local data to the centralized database when there is a network connection.
This looks complicated. Is there any better and easier option?
I prefer using C# and Visual Studio.
Thanks.
You can use SQLite for storing data locally. It's fast, lightweight, and public domain.
You can use whatever database you prefer for the centralized server. A small local-store sketch follows below.
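A minimal sketch of the local side, assuming the Microsoft.Data.Sqlite package; the table ("FieldReading"), its columns, and the Synced flag used to mark rows that still need uploading are all invented for illustration:

```csharp
using Microsoft.Data.Sqlite;

class OfflineStore
{
    static void Main()
    {
        using (var conn = new SqliteConnection("Data Source=local.db"))
        {
            conn.Open();

            // Create the local table on first run; Synced = 0 marks rows
            // that have not yet been uploaded to the central database.
            var create = conn.CreateCommand();
            create.CommandText =
                @"CREATE TABLE IF NOT EXISTS FieldReading (
                      Id INTEGER PRIMARY KEY AUTOINCREMENT,
                      CropName TEXT NOT NULL,
                      Yield REAL NOT NULL,
                      Synced INTEGER NOT NULL DEFAULT 0)";
            create.ExecuteNonQuery();

            // Record data offline; Synced stays 0 until uploaded.
            var insert = conn.CreateCommand();
            insert.CommandText =
                "INSERT INTO FieldReading (CropName, Yield) VALUES ($crop, $yield)";
            insert.Parameters.AddWithValue("$crop", "Wheat");
            insert.Parameters.AddWithValue("$yield", 4.2);
            insert.ExecuteNonQuery();
        }
    }
}
```

When connectivity returns, an export routine can select the rows WHERE Synced = 0, push them to the central server, and then set the flag.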
Well, this is a quite broad question, as it has many options and scenarios. The questions you should ask yourself are:
Does the user handle new information only, or also information from other users since the previous sync?
Do you have to handle update conflicts?
Do you handle text information only, or do you have complex types and binary files?
As for the solution, the easiest way, from my point of view, would be using SQLite on the portable devices; it is a lightweight SQL engine that will allow you to handle the information easily. On the server you can use whatever you want: SQL Server, MySQL or any other SQL flavor you may like. Just make sure there is a connector for your portable device's OS.
If you are still thinking of using SQL Server on the portable device (it's a battery hog!), you might want to check Microsoft Sync Framework, as it covers almost all the common scenarios for handling data syncing, managing conflicts, etc.
Thanks for the answers. Please find below the solution that we implemented.
1) Installed SQL express on all the local machines
2) Used Microsoft Sync framework to sync the data. The sync is configured on demand.
Issues faced:
1) We were using the geometry datatype on a few tables, and this was not supported by the Sync Framework.
2) Any change in the database schema will not be reflected on the client machine. We had to delete all the system-generated procedures used to track table changes and regenerate them. I am sure there is a much better way to do this; a deprovisioning sketch follows below.
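For what it's worth, a hedged sketch of that cleanup step, assuming Sync Framework 2.1: SqlSyncScopeDeprovisioning can drop the generated tracking objects for a scope so it can be provisioned again from the new schema. The scope name and connection string are placeholders:

```csharp
using System.Data.SqlClient;
using Microsoft.Synchronization.Data.SqlServer;

class ReprovisionAfterSchemaChange
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=ClientDb;Integrated Security=True"))
        {
            conn.Open();

            // Drop the tracking tables, triggers, and stored procedures that
            // the framework generated for this scope.
            var deprovisioning = new SqlSyncScopeDeprovisioning(conn);
            deprovisioning.DeprovisionScope("MyScope");

            // The scope can then be provisioned again from the new schema
            // (see the provisioning sketch earlier on this page).
        }
    }
}
```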
Cheers,
Jebli

Which one is the best method to replicate a database in SQL Server?

I was wondering which is the best way to replicate some data from one database to another.
I have a database on one computer which receives some transactions. I need to send this data to another server (on the same local network), but with a modified value (I need to add 11 years to a timestamp value).
So I was looking at some options for my case. I could develop a Windows service to do this, but I don't know whether SQL Server replication can do this for me, or whether there is another option, like some kind of magical trigger that can do it.
I'm using SQL Server 2005 on Windows Server 2003 R2.
This link should help you:
Selecting the Appropriate Type of Replication
Quoted summary from link:
Microsoft SQL Server offers three types of replication. Each type of
replication is suited to different application requirements. Depending
on the needs of your application, you can use one or more types of
replication in a topology:
Snapshot replication
Transactional replication
Merge replication
I personally would replicate the database (transactional) and then use log shipping to update the replicated database (on your second server) with the latest data changes (from the primary server), then use a stored procedure running as a SQL Agent job to update the fields you need; a sketch of that last step follows below.
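A minimal sketch of that last fix-up step as C# (a SQL Agent job could just as well run it as plain T-SQL); the table and column names ("dbo.Transactions", "EventTime", "Adjusted") are assumptions, including the flag used to avoid shifting the same row twice:

```csharp
using System;
using System.Data.SqlClient;

class TimestampFixup
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Data Source=SecondServer;Initial Catalog=ReplicaDB;Integrated Security=True"))
        {
            conn.Open();
            // Shift only rows that have not been adjusted yet.
            var cmd = new SqlCommand(
                @"UPDATE dbo.Transactions
                     SET EventTime = DATEADD(year, 11, EventTime),
                         Adjusted  = 1
                   WHERE Adjusted = 0", conn);
            int rows = cmd.ExecuteNonQuery();
            Console.WriteLine("{0} rows adjusted.", rows);
        }
    }
}
```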
I personally am not a fan of triggers, as you can end up with triggers activating other triggers, and something that takes milliseconds to run can take seconds; if you have large volumes of data, that can be painful (I manage a system that has exactly this issue - soon to be replaced, thankfully).
Hope this helps, and if you have some follow-up questions I'll be happy to help.

Best approach to incrementally update application data

I have been working for a couple of years on an application that I update using a back-end database. The whole key is that everything is cached on the client, so that it never requires a network connection to operate, but when it does have a connection it will always pick up the latest updates. Every application update is shipped with the latest version of the database, and I want it to download only the minimum amount of data when the database has been updated.
I currently use a table with a timestamp to check for updates. It looks something like this:
ID - Name - Description - Severity - LastUpdated
0 - test.exe - KnownVirus - Critical - 2009-09-11 13:38
1 - test2.exe - Firewall - None - 2009-09-12 14:38
This approach was fine for what I previously needed, but I am looking to expand more functions of the application to use this type of dynamic approach. All the data is currently stored as XML, but I do not want to store complete XML files in the database and only want to transmit changed data.
So how would you go about allowing a fairly simple approach to storing dynamic content (text/XML/JSON/XAML) in a database, and having the client only download new updates? I was thinking of having logic that can handle XML inserted directly:
ID - Data - Revision
15 - XXX - 15
XXX would be something like <Content><File>Test.dll</File><Description>New DLL to load.</Description></Content> and would be inserted into the cache, but this would obviously be complicated, as I would need to load the entries in sequence.
Another approach that has been mentioned was to base it on something similar to source control: storing the version in the root of the file and calculating the delta to figure out the minimal amount of data that needs to be sent to the client.
Has anyone got any suggestions on how to approach this with no risk of data corruption? I would also like to expand it with features that allow me to revert possibly bad revisions and replace them with new working ones.
It really depends on the tools you are using and the architecture you already have. Is there already a server with some logic and a data access layer?
Dynamic approaches might get complicated, slow and limit the number of solutions. Why do you need a dynamic structure? Would it be feasible to just add data by using a name-value pair approach in a relational database? Static and uniform data structures are much easier to handle.
Before going into detail, you should consider the different scenarios.
Items can be added
Items can be changed
Items can be removed (I assume)
Adding is not a big problem. The client needs to remember the last revision number it got from the server, and you write a query which gets everything since then.
Changing is basically the same. You should take care with the identification of the items: you need an unchangeable surrogate key, which seems to be the ID you already have. (GUIDs may be useful here.)
Removing is tricky. You need to either flag items as deleted instead of actually removing them, or keep a list of removed IDs together with the revision number at which they were removed.
Storing the data in the client: Consider using a relational database like SQLite in the client. (It doesn't need installation, it is just storing in a file. Firefox for instance stores quite a lot in SQLite databases.) When using the same in the server, you can probably reuse some code. It is also transaction based, which helps to keep it consistent (rollback in case of error during synchronization).
XML - if you really need it - can be stored just as a string in the database.
When using an abstraction layer or ORM that supports SQLite (e.g. NHibernate), you may also reuse some code even when another database is used by the server. Note that the learning curve for such an ORM might be rather steep; if you don't know anything like this, it could be too much.
You don't need to force reuse of code in the client and server.
Synchronization itself shouldn't be very complicated. You have a revision number in the client and a last revision in the server. You get all new, changed, and deleted items since then, apply them to the local store, update the local revision number, commit, and you're done. A sketch of such a revision-based pull follows after the next paragraph.
I would never update only a part of a revision, because then you can't really know what changed since the last synchronization. Because you do differential updates, it is essential to have a well-defined state on the client.
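A hypothetical sketch of that revision-based pull, assuming SQL Server on the server side and SQLite (via Microsoft.Data.Sqlite) in the client, with invented table and column names; the local apply and the revision bump happen in one transaction, so a failed sync leaves the client in a well-defined state:

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Data.Sqlite;

class RevisionSync
{
    static void Sync(string serverConnStr, string localConnStr, long lastRevision)
    {
        using (var server = new SqlConnection(serverConnStr))
        using (var local = new SqliteConnection(localConnStr))
        {
            server.Open();
            local.Open();

            using (var tx = local.BeginTransaction())
            {
                long newRevision = lastRevision;

                // Pull everything added, changed, or deleted since our revision.
                var pull = new SqlCommand(
                    @"SELECT Id, Data, IsDeleted, Revision
                        FROM dbo.Items WHERE Revision > @rev", server);
                pull.Parameters.AddWithValue("@rev", lastRevision);

                using (var reader = pull.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var apply = local.CreateCommand();
                        apply.Transaction = tx;
                        if (reader.GetBoolean(2))
                        {
                            // Tombstone: the item was deleted on the server.
                            apply.CommandText = "DELETE FROM Items WHERE Id = $id";
                        }
                        else
                        {
                            // Upsert by the unchangeable surrogate key.
                            apply.CommandText =
                                @"INSERT INTO Items (Id, Data) VALUES ($id, $data)
                                  ON CONFLICT(Id) DO UPDATE SET Data = $data";
                            apply.Parameters.AddWithValue("$data", reader.GetString(1));
                        }
                        apply.Parameters.AddWithValue("$id", reader.GetInt32(0));
                        apply.ExecuteNonQuery();
                        newRevision = Math.Max(newRevision, reader.GetInt64(3));
                    }
                }

                // Advance the local revision only after everything applied.
                var save = local.CreateCommand();
                save.Transaction = tx;
                save.CommandText = "UPDATE SyncState SET LastRevision = $rev";
                save.Parameters.AddWithValue("$rev", newRevision);
                save.ExecuteNonQuery();

                tx.Commit();
            }
        }
    }
}
```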
I would go with a solution using Sync Framework.
Quote from Microsoft:
Microsoft Sync Framework is a comprehensive synchronization platform enabling collaboration and offline access for applications, services and devices. Developers can build synchronization ecosystems that integrate any application, any data from any store using any protocol over any network. Sync Framework features technologies and tools that enable roaming, sharing, and taking data offline.
A key aspect of Sync Framework is the ability to create custom providers. Providers enable any data sources to participate in the Sync Framework synchronization process, allowing peer-to-peer synchronization to occur.
I have just built an application pretty much exactly as you described. I built it on top of the Microsoft Sync Framework that DjSol mentioned.
I use a C# front-end application with a SqlCe database, and a SQL Server 2005 instance at the other end.
The following articles were extremely useful for me:
Tutorial: Synchronizing SQL Server and SQL Server Compact
Walkthrough: Creating a Sync service
Step by step N-tier configuration of Sync services for ADO.NET 2.0
How to sync a schema-changed database using Sync Framework?
You don't say what your back-end database is, but if it's SQL Server you can use SqlCe (SQL Server Compact Edition) as the client DB and then use RDA or merge replication to update the client DB as desired. This will certainly handle all your requirements; there is no need to reinvent the wheel for such a common requirement. A hedged RDA sketch follows below.
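A sketch of the RDA route, assuming SQL Server Compact's SqlCeRemoteDataAccess (System.Data.SqlServerCe) and an IIS-hosted server agent; the agent URL, table name, and connection strings are placeholders:

```csharp
using System.Data.SqlServerCe;

class RdaSync
{
    static void Main()
    {
        using (var rda = new SqlCeRemoteDataAccess(
            "https://server/sqlce/sqlcesa35.dll",   // IIS server agent URL
            "Data Source=client.sdf"))              // local SqlCe file
        {
            // Pull a tracked copy of the table down to the client...
            rda.Pull("Orders",
                     "SELECT * FROM Orders",
                     "Provider=SQLOLEDB;Data Source=Server;Initial Catalog=BackEnd;Integrated Security=SSPI",
                     RdaTrackOption.TrackingOn);

            // ...and later push the locally made changes back to the server.
            rda.Push("Orders",
                     "Provider=SQLOLEDB;Data Source=Server;Initial Catalog=BackEnd;Integrated Security=SSPI");
        }
    }
}
```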

Sync Framework - Syncing data without schema changes

Is there a way to use Microsoft Sync Framework without implementing the required schema changes (the '_tracking' tables)? Basically, I am faced with the task of syncing two SQL Server 2008 databases, one of which is a legacy DB that we cannot make any schema changes to.
Would it be possible to store the additional tables required for each database in a separate database?
e.g. I have 3 tables that we need to sync (Staff, Customer & Sales); normally we would just add the three additional tracking tables, but this isn't possible. Instead, can I have a separate database with the required tracking tables (Staff_tracking, Customer_tracking, Sales_tracking) and somehow point the Sync Framework to this new DB?
Any help is appreciated, and a code example would be super!
Since you are using SQL Server 2008 as the database, just turn on change tracking and let SQL Server track the changes for you internally, without having to change the schema of the actual client database. MSDN explains it nicely in this article. About halfway down you will see the following:
SQL Server 2008 has introduced a new alternative method for tracking
changes called SQL Server 2008 Change Tracking. The concept behind
change tracking is that an administrator marks certain tables to be
monitored for changes. From that point SQL Server 2008 keeps track of
any inserts, updates, or deletes that are made. When a remote
“requestor” requests changes, SQL Server 2008 will provide all of the
changes that have occurred since the last successful download as
specified by the requestor. The Sync Framework database
synchronization providers have been built to take advantage of SQL
Server 2008 change tracking and provide the following advantages for
an OCA environment:
No schema changes are required to be able to track changes.
Assuming you are using the standard Microsoft synchronization providers, change tracking support is included by default.
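To illustrate, a minimal sketch of turning change tracking on, assuming the legacy database at least permits these database- and table-level options (they add no tables or columns to the schema); the database name "LegacyDb" is a placeholder and the retention values are arbitrary examples:

```csharp
using System.Data.SqlClient;

class EnableChangeTracking
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=LegacyDb;Integrated Security=True"))
        {
            conn.Open();

            // Turn on change tracking at the database level.
            new SqlCommand(@"ALTER DATABASE LegacyDb
                                 SET CHANGE_TRACKING = ON
                                 (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)",
                           conn).ExecuteNonQuery();

            // Then mark each table to be monitored; no new tables or
            // columns are added to the schema.
            foreach (var table in new[] { "Staff", "Customer", "Sales" })
            {
                new SqlCommand(
                    "ALTER TABLE dbo." + table + " ENABLE CHANGE_TRACKING",
                    conn).ExecuteNonQuery();
            }
        }
    }
}
```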
