I'm using Sync Framework to handle syncing between local and remote databases.
I've managed to get both upload and download working, but I would like to have any local changes made to a specific table be overwritten with the original remote values; a forced overwrite in a sense.
Is there any way that this can be accomplished?
Any changes made to the remote database's table are successfully syncing down to the local db's table, but in the event that a change is made locally, it must be overwritten.
SyncFx syncs incrementally (it only sends changes made since the last sync). In your case, the remote values will not be re-sent to your local database if they didn't change.
You can do a dummy update on the remote rows to force them to be re-sent, but rather than doing it that way, why don't you just prevent edits on the local copy?
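If an edit does happen on both sides, you can at least make the remote version win during the download by handling the provider's ApplyChangeFailed event and forcing the incoming write. This is a rough sketch (the provider variable and the exact set of conflict types you care about are assumptions; note this only fires when both replicas changed the row, so a purely local edit with no remote change still won't be overwritten):

```csharp
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

// localProvider is assumed to be the SqlSyncProvider for the local database.
localProvider.ApplyChangeFailed += (sender, e) =>
{
    // A conflict means the row changed on both sides since the last sync.
    // Forcing the write makes the incoming (remote) version win locally.
    if (e.Conflict.Type == DbConflictType.LocalUpdateRemoteUpdate ||
        e.Conflict.Type == DbConflictType.LocalDeleteRemoteUpdate)
    {
        e.Action = ApplyAction.RetryWithForceWrite;
    }
};
```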
I'm trying to use Sync Framework to synchronize large databases, but given their size it is really painful to deprovision and reprovision when there are schema changes. Since the project is still in the development stage, I want a reliable way to provision the client database that doesn't waste all that time.
My question is: is it possible to restore a provisioned server DB as the client DB and run PerformPostRestoreFixup on the client DB to save the initial sync time? (And vice versa?)
Yes, that's your only other alternative for initializing new replicas with pre-loaded data (the other one is generating snapshots via SQL CE).
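The restore-side fixup is a small amount of code. A minimal sketch, assuming the server backup has already been restored as ClientDB on the client instance (the connection string is a placeholder):

```csharp
using System.Data.SqlClient;
using Microsoft.Synchronization.Data.SqlServer;

using (var clientConn = new SqlConnection(
    @"Data Source=.\SQLEXPRESS;Initial Catalog=ClientDB;Integrated Security=True"))
{
    // Rewrites the restored sync metadata so this copy is treated as a
    // distinct replica rather than a clone of the server.
    var fixup = new SqlSyncStoreRestore(clientConn);
    fixup.PerformPostRestoreFixup();
}
```

After the fixup, the first real sync should only move changes made since the backup was taken.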
I have a local database for different clients on multiple terminals, and a replica of the database on a remote server. In a Windows application, I want a button click to push the data for a given client id from the local database up to the remote server.
How can I achieve this? Could somebody suggest an approach, or a link to a reference?
If I understand the question correctly, you want to synchronize databases. There are different approaches to doing that:
Manually detect differences between the two databases and insert/update the differences. Your database needs to be in a suitable format for that to work; for example, you could use GUIDs instead of incremental integers for IDs in a relational database. You also need created/updated dates on each row.
You could also check out the Sync Framework; there's more documentation on how to deal with synchronization: http://msdn.microsoft.com/en-us/sync/bb887608
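With Sync Framework, an upload-only sync wired to a button click could look roughly like this (the scope name "ClientScope" and the two connections are placeholders, and the scope is assumed to already be provisioned on both ends, e.g. filtered by client id):

```csharp
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;

var orchestrator = new SyncOrchestrator
{
    LocalProvider  = new SqlSyncProvider("ClientScope", localConn),
    RemoteProvider = new SqlSyncProvider("ClientScope", serverConn),
    // Upload only: local changes go to the server, nothing comes back.
    Direction      = SyncDirectionOrder.Upload
};

SyncOperationStatistics stats = orchestrator.Synchronize();
Console.WriteLine("Uploaded {0} changes", stats.UploadChangesTotal);
```

Restricting the data to "some client id" is done at provisioning time with a filtered scope, not at sync time.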
I am using the Microsoft Sync Framework "collaboration" providers. Both ends of the sync will use SQL Express to begin with. When provisioned, the database contains a "_tracking" table for each "real" table in the database.
My database is fairly large, and I don't want to transfer the entire thing via MSF on the first sync. Is there a way to use some other method to "jumpstart" the sync when both sides are known to contain the same data? In my testing, when both databases contain identical content, it looks like it downloads the entire scope, churns through the entire batch of "changes", and then uploads the entire scope back to the server, which then churns through the entire dataset again.
Is there any way to update the _tracking tables (hopefully only on one side) to let the system know that the database contents are the same?
More information (edit):
From examining the contents of the tracking tables after doing an initial sync, it looks like the scope_update_peer_timestamp and local_create_peer_timestamp fields in every _tracking table need to be updated on both sides. In addition, the update_scope_local_id, scope_update_peer_key, and last_change_datetime need to be set on one of the two sides.
The last_change_datetime field is a datetime and is fairly self-explanatory.
The two _timestamp fields seem to use @@DBTS and are thus bigints containing the equivalent of an editable timestamp column.
That still leaves a bunch of unknowns:
Does MSF track which peer the content of the timestamp columns come from?
Which peer (local or remote) drives the contents of the _timestamp fields?
What logic drives the contents of the update_scope_local_id and scope_update_peer_key fields?
More information on the environment (edit):
I have SQL Express/Std on both sides. The server side will eventually contain information for several clients (using multi-tenancy), so backups will not be useful since the server will contain information for multiple clients.
How are you initializing your databases? Are you provisioning databases that both already contain the same set of data?
The best way to initialize other replicas is to use the GenerateSnapshot method on the SqlCeSyncProvider, which creates an SDF file for initializing other replicas, or to do a backup of the database (non-SDF, i.e. SQL Server/Express), restore it, and run PerformPostRestoreFixup before doing a sync.
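For the SQL CE route, the snapshot generation is exposed through a snapshot-initialization helper class. A minimal sketch (file paths are placeholders; the source .sdf is assumed to be an already-synchronized replica):

```csharp
using System.Data.SqlServerCe;
using Microsoft.Synchronization.Data.SqlServerCe;

using (var sourceConn = new SqlCeConnection(@"Data Source=master.sdf"))
{
    // Copies the data plus the sync metadata into a new .sdf that can be
    // handed to a new client as its fully-initialized starting point.
    var snapshotInit = new SqlCeSyncStoreSnapshotInitialization();
    snapshotInit.GenerateSnapshot(sourceConn, @"replica.sdf");
}
```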
Alright, background:
I've got an app; it has a local read-only reference database (let's call it "local.sdf") included in the source. Now, the user will be reaching out to a website (call it "http:\www.websiteImGettingTxtFrom.txt") which is the source for a pipe-delimited .txt file used to update a separate local db ("webdata.sdf") with entities that correspond directly with entities in local.sdf. Ideally, the app would just create/update webdata.sdf on app launch/load, or whenever the user pushes an "update" button.
So, how do I create/update the aforementioned webdata.sdf in code, strictly from a pipe-delimited txt (keeping in mind this database will have over 20,000 entities with, I believe, 7 properties each)?
Here's an example of the pipe-delimited text I'm pulling:
|ColumnName1|ColumnName2|ColumnName3|ColumnName4
|Entity|Value1|Value2|Value3
|Entity2|Value1|Value2|Value3
|Entity3|Value1|Value2|Value3
I know how to do a mass record clear, but populating is the real issue. Also, is there a lightweight way to do all of this in the background (to prevent the app from corrupting the DB if the app is closed during the load)?
Thanks,
rapterj
You can include a webdata.sdf with an empty table as a resource and copy it to Isolated Storage on launch if it does not already exist (the DataContext generated by the SQL Server Compact Toolbox gives you a CreateDataIfExists method that can do that for you).
For the INSERTs, batch these in appropriately sized batches and call SubmitChanges (you will need to do some testing to find a good batch size).
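The parse-and-batch loop could look roughly like this. "WebDataContext", its "Items" table, and the "Item" entity with its property names are hypothetical stand-ins for whatever the Toolbox generates for webdata.sdf, and the batch size of 500 is just a starting point for testing:

```csharp
using System;
using System.Linq;

using (var db = new WebDataContext("Data Source=isostore:/webdata.sdf"))
{
    int pending = 0;

    // Skip(1) drops the header row (|ColumnName1|ColumnName2|...).
    foreach (string line in rawText
        .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries)
        .Skip(1))
    {
        // The leading pipe yields an empty first field, so start at index 1.
        string[] f = line.TrimEnd('\r').Split('|');
        db.Items.InsertOnSubmit(new Item
        {
            Name   = f[1],
            Value1 = f[2],
            Value2 = f[3],
            Value3 = f[4]
        });

        if (++pending % 500 == 0)   // flush in batches; tune this number
            db.SubmitChanges();
    }

    db.SubmitChanges();             // flush the final partial batch
}
```

Running this on a background worker and only deleting the old rows after a successful parse would also address the "app closed mid-load" concern.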
I need to update existing data or insert new data from a client database, say DB1, into a central database, say DB2. Both hold the same schema and both reside on the same machine. The updates are not bidirectional; I just want changes to be reflected from the client (DB1) to the server (DB2).
The client database (DB1) is nothing but a backup database (a full database backup consisting of mdf and ldf files) which is attached to the same server where the central database (DB2) exists. I am not going to make any changes to the backup database (DB1) once it is attached to the server. The backup database (DB1) already has the modified data that I want to push to the central database (DB2). How do I do this programmatically using C# .NET? Can you give any example code?
I have tried transactional replication with a push subscription, without sending the snapshot. The problem is that I want to push the modified data from DB1 to the central database DB2 in the very first shot, but transactional replication will not allow me to do so: it will not send modified data that is already present in DB1, so the initial data in DB1 goes untouched when you replicate without a snapshot. The backup database (DB1) already has the modified data prior to replication. How do I tackle this, given that I am not going to insert or modify any data in the backup database (DB1) after I set up replication?
Thanks and regards,
Pavan
Microsoft Sync Framework is the best solution, especially if you are using Express editions (in which case replication will not work).
Sync Framework is quite straightforward when used with SQL Server change tracking in SQL Server 2008. You can define your mode of synchronization (bidirectional, upload only, download only) and also define what happens when there are conflicts (for instance, when constraints get violated).
And yeah, just Google for an example; there are several straightforward walkthroughs available on the topic, including peer-to-peer synchronization (which might be the one you require) and client-server synchronization (where the client is SQL Server Compact Edition).
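As a taste of what those walkthroughs cover, defining and provisioning a sync scope looks roughly like this (the scope name "OrdersScope", the table name, and the connections are placeholders; the same provisioning is then run against the other database):

```csharp
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

// Describe which tables belong to the scope, using the server's schema.
var scopeDesc = new DbSyncScopeDescription("OrdersScope");
scopeDesc.Tables.Add(
    SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", serverConn));

// Provision the database: creates the tracking tables, triggers and
// stored procedures the providers need.
var provisioning = new SqlSyncScopeProvisioning(serverConn, scopeDesc);
if (!provisioning.ScopeExists("OrdersScope"))
    provisioning.Apply();
```

Once both ends are provisioned for the same scope, a SyncOrchestrator with an upload-only direction gives the one-way DB1-to-DB2 flow the question asks for.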
You may also want to explore SQL Server's merge replication functionality. It is the replication type designed to allow satellite databases to automatically post back their results to a central repository.
To achieve this you have the following options:
1.) Use SQL Server Transactional Replication. Make DB1 the Publisher and DB2 the Subscriber, and go for a pull- or push-based subscription. All changes in DB1 will simply be reflected to the central database. If any changes were made in the central database to the same tuple, they will be overwritten by the DB1 changes.
Advantages: Easy to implement and reliable
Disadvantages: Very little customization
2.) Use the Microsoft Sync Framework SqlSyncProvider.
Advantages: Very Flexible
Disadvantages: I have heard bad things about it but have never tried it myself.
3.) Custom implementation: this is a bit harder, as you need to track changes on DB1 yourself. One option is reading the transaction log, which is what transactional replication does internally; the other is to use triggers and build up knowledge of the changes. You then need to write a library or routine which collects that change knowledge and applies it to the central database.
Edit:
To back up and restore a database programmatically:
http://www.mssqltips.com/tip.asp?tip=1849
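In C#, the same backup/restore can be done with SQL Server Management Objects (SMO). A minimal sketch (server name, database names and file paths are placeholders; restoring under a new name on the same instance may additionally require RelocateFiles entries for the mdf/ldf):

```csharp
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

var server = new Server(new ServerConnection(@".\SQLEXPRESS"));

// Back up DB1 to a file...
var backup = new Backup
{
    Action   = BackupActionType.Database,
    Database = "DB1"
};
backup.Devices.AddDevice(@"C:\Backups\DB1.bak", DeviceType.File);
backup.SqlBackup(server);

// ...and restore that file as a copy.
var restore = new Restore
{
    Action          = RestoreActionType.Database,
    Database        = "DB1_Copy",
    ReplaceDatabase = true
};
restore.Devices.AddDevice(@"C:\Backups\DB1.bak", DeviceType.File);
restore.SqlRestore(server);
```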