Case
I have an app where I download information about some products from a server and store it in an SQLite database.
Every minute I re-download all of the information in case anything has been modified, deleted or added, although I know this is not efficient.
Goal
What I need is some way of fetching only the data that has been modified, if anything has actually changed.
How can I achieve this?
There are many possible solutions to this problem. No one here will be able to give a conclusive answer without knowing your system in more detail, but the following is a possible approach that might work for you.
The client side needs to cache/save the timestamp of when the API was last called.
The server needs to change the existing API to accept that timestamp. If any changes have been made to the data since that timestamp, the server returns those changes; otherwise it returns no data.
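A minimal sketch of the client side, assuming a hypothetical /api/products/changes endpoint that takes a since parameter (the URL, the stamp file and the response handling are all placeholders):

    // C# sketch - endpoint, file name and response handling are hypothetical
    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    class DeltaSync
    {
        const string StampFile = "last_sync.txt"; // local cache of the last-call timestamp

        static async Task SyncAsync(HttpClient client)
        {
            // Read the timestamp of the last successful call, or start from scratch
            var since = File.Exists(StampFile)
                ? File.ReadAllText(StampFile)
                : DateTime.MinValue.ToString("o");

            // Ask the server only for rows changed since that moment
            var json = await client.GetStringAsync(
                "https://example.com/api/products/changes?since=" + Uri.EscapeDataString(since));

            if (!string.IsNullOrEmpty(json))
            {
                // apply the returned inserts/updates/deletes to the local SQLite DB here
            }

            // Remember this sync for the next call (ideally use a server-supplied
            // timestamp instead, to avoid clock skew between client and server)
            File.WriteAllText(StampFile, DateTime.UtcNow.ToString("o"));
        }
    }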
Another approach would be to implement full data synchronization, which would be more efficient but involves more complexity and probably more work.
Related
I am trying to build a WinForms app that allows the user to fill out a form for our company while keeping it clean and standardized. As part of this project, I have created a replication from our production DB with only the tables for clients and client contacts. I need those tables available to the app so it can pull current client information for this form. I want to add another table to the replication that is not from the original DB and that will hold all the information entered on the form. I suppose I don't strictly need it, but I was wondering whether it is possible and, if so, whether it will break anything.
I can already hear some of you reading this and saying, "Why don't you just add another table to the production DB?" Well, I thought of that, but the app that is bound to that DB is very strict and runs a check every time it is launched to make sure the DB hasn't been corrupted. If it finds that a new table has been added, I'm sure it wouldn't work. Also, I hear some of you shouting at your monitors, "Why do you want this table as part of the replication? Why not just house it in a different DB that won't affect your precious app?" And to those people I would say: I just thought of that. But I am trying to make something lightweight that can be put on a lot of computers without much overhead or back end. Thank you in advance for considering this problem.
I am designing a WPF desktop application and using Entity Framework Code First to create and use a SQL Server database. My database will be hosted on one server machine and will be running 24/7.
I want to provide a feature where you can modify data offline (when you have no connectivity to the SQL Server DB) and save it somehow. Then, whenever the application finds a connection to SQL Server, all changes can be moved to the SQL Server DB.
Is there any way to achieve this using Entity Framework?
I want to emphasize that I am using Entity Framework. Is this type of functionality already implemented by EF? Or do I have to do it manually, e.g. write the changes to the file system and then merge them into the DB later?
You could figure out the specific exceptions that are generated when the SQL Server connection is lost and wrap your calls in try-catch blocks. If the server is offline, then in your catch block, pass the entity to a method that serializes it to JSON and saves it to the hard drive in a special directory or something. On your next successful query, check that directory to see whether there are any saved entities that still need to be persisted.
Be specific with your catches - you don't want unrelated exceptions to trigger this code.
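A rough sketch of that idea, assuming EF6 and Json.NET (MyDbContext, Product and the pending directory are made-up names):

    // C# sketch - assumes EF6 and Newtonsoft.Json; MyDbContext/Product are hypothetical
    using System;
    using System.Data.Entity.Infrastructure;
    using System.Data.Entity.Migrations;
    using System.IO;
    using Newtonsoft.Json;

    void SaveProduct(MyDbContext db, Product product)
    {
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateException) // narrow this to the connection-loss cases you actually observe
        {
            // Park the entity on disk until the server is reachable again
            Directory.CreateDirectory("pending");
            File.WriteAllText(
                Path.Combine("pending", Guid.NewGuid() + ".json"),
                JsonConvert.SerializeObject(product));
        }
    }

    void ReplayPending(MyDbContext db)
    {
        // Call this after the next successful query
        foreach (var file in Directory.GetFiles("pending", "*.json"))
        {
            var product = JsonConvert.DeserializeObject<Product>(File.ReadAllText(file));
            db.Products.AddOrUpdate(product); // extension from System.Data.Entity.Migrations
            db.SaveChanges();
            File.Delete(file);
        }
    }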
Some things to keep in mind: what if somebody else changed the data in the meantime - are you intending to overwrite those changes? And how did you get the data that needs to be saved in the first place if you were offline?
As long as you have all the data loaded into the DbContext/ObjectContext, you're free to amend it any way you want. Only when SaveChanges() is invoked is the connection really needed.
However, if you're going to load everything into the context, you're effectively reimplementing DataSet functionality, which, in addition, allows for XML serialization/deserialization of the changes, so the changes can even be saved between sessions.
Not as trendy as EF, though :)
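For reference, a small sketch of that DataSet route (the connection string, table and file names are assumed):

    // C# sketch of the DataSet approach - connectionString and table name are hypothetical
    using System.Data;
    using System.Data.SqlClient;

    var adapter = new SqlDataAdapter("SELECT * FROM Products", connectionString);
    var builder = new SqlCommandBuilder(adapter); // generates INSERT/UPDATE/DELETE commands
    var ds = new DataSet();
    adapter.Fill(ds, "Products");

    // ... edit rows offline; the DataSet tracks original vs. current values ...

    // Persist the pending changes between sessions
    ds.WriteXml("pending.xml", XmlWriteMode.DiffGram);

    // Later, once the server is reachable: reload and push the changes
    var restored = new DataSet();
    adapter.FillSchema(restored, SchemaType.Source, "Products"); // DiffGram needs the schema
    restored.ReadXml("pending.xml", XmlReadMode.DiffGram);
    adapter.Update(restored, "Products");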
While I have never tried this with SQL-based data I have done it in the past with filesystem-based data and it's a major can of worms.
First, you have to have some means of indicating what data needs to be stored locally so that it will be available when you're offline. This will need to be updated either all the time or before you head out--and that can involve a lot of data transfer.
Second, once you're back online there's a lot of conflict resolution that must be done. If there's a realistic chance that someone else might have changed the data while you were out you need some way of detecting the conflict and prompting the user as to what to do in that situation. This almost certainly requires a system that keeps a detailed edit trail on every unit of data that could reasonably be updated.
In my situation I was very fortunate in that it was virtually certain that if the remote user edited file [x], overwriting the system copy was the right thing to do. Remote users would only be carrying the files that pertained to their projects, so conflicts should never happen. Thus the writeback was simply based on timestamps, nothing more. Data which people in the field would not normally need to modify was handled by not even looking at it; modified files were simply copied from the system to the laptop.
This leaves the middle step - saving the pending writes. I disagree with Elemental Pete's answer in this regard: simply serializing them and saving the result does not work, because what happens when you read that data back in again? You see the old copy, not the changed copy!
My approach to this was a local store of all relevant data that was accessed exactly like the main system data was, all reads and writes worked normally.
Something a lot fancier might be needed if you have data that requires transactions.
Note that we also hit a nasty human problem: the update process took several minutes (note: this was more than 10 years ago) simply analyzing what needed to be done, not counting any actual copy time. The result was people bypassing it when they thought they could. Sometimes they thought wrong - oops!
I want to be able to maintain a count and a last-accessed date across application loads for a web service polling application. I'm not too sure what the best way to do this is. I don't like the idea of storing that data in a database, as I would have to create one specifically for the purpose. What other options do I have, and are there any particularly nice ways of keeping application state between subsequent runs of the app?
Persisting data, eh? I suggest a database or a file.
For a file-based solution, you can just XML-serialize your state to a file and load it again when the app starts (see the sketch below).
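For example, a tiny state class serialized on exit and reloaded on startup (AppState and state.xml are just illustrative names):

    // C# sketch - AppState and state.xml are made-up names
    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class AppState
    {
        public int PollCount { get; set; }
        public DateTime LastAccessed { get; set; }
    }

    public static class StateStore
    {
        static readonly XmlSerializer Serializer = new XmlSerializer(typeof(AppState));

        public static void Save(AppState state)
        {
            using (var stream = File.Create("state.xml"))
                Serializer.Serialize(stream, state);
        }

        public static AppState Load()
        {
            if (!File.Exists("state.xml"))
                return new AppState(); // first run
            using (var stream = File.OpenRead("state.xml"))
                return (AppState)Serializer.Deserialize(stream);
        }
    }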
If the data is shared or might ever grow, then a database is probably the best solution. You can find one that fits your needs among the many free projects if you wish:
couchdb
mysql
postgres
mongodb
membase
sqlite
etc
You could roll your own solution that doesn't involve a database, but most likely there is one that fits your needs and learning it would be useful beyond just the project at hand.
Don't be afraid to make a 'configuration'-style table for your website that has only a few rows and lets you store runtime information as needed.
Perfectly fine.
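Reading such a table from C# is only a few lines; a minimal key/value version might look like this (the table and column names are made up):

    // C# sketch - dbo.AppSettings is a hypothetical key/value configuration table
    using System.Data.SqlClient;

    string GetSetting(string connectionString, string key)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT [Value] FROM dbo.AppSettings WHERE [Key] = @key", conn))
        {
            cmd.Parameters.AddWithValue("@key", key);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }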
I am working on an application at the minute that will initially just be installed on a client machine with a lightweight database (maybe SQLite).
After a while I want to add a web version of the same piece of software, at which point the smart client will be able to sync with the online version.
Has anyone done anything similar? I am looking to know:
What is the best way of syncing, are there patterns around it?
Are there any frameworks out there to handle syncing?
Are there any gotchas I should be aware of from the start (maybe security, concurrency)?
What would be the best way to architect this?
Thanks in advance...
Microsoft's Sync Framework will help with this (see the Introduction).
A couple of issues stand out right at the beginning.
If you are going to have the data exist on the client first and then sync to a server at some later point, you need to think about what happens when a number of clients all sync to the server, especially around conflict resolution.
There are events raised on the server side to identify when a conflict occurs, but you need to decide who wins (the row on the server, or the one from the incoming client). Depending on what you choose to do here, the second sync is likely to modify the client data.
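A rough outline with Sync Framework 2.1's SqlSyncProvider, deciding conflicts in the ApplyChangeFailed handler (the scope name and connections are placeholders, and the scope is assumed to be provisioned already):

    // C# sketch - assumes Microsoft Sync Framework 2.1 and a provisioned "ProductsScope"
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    var remoteProvider = new SqlSyncProvider("ProductsScope", serverConnection);
    var localProvider = new SqlSyncProvider("ProductsScope", clientConnection);

    // Raised when a row conflicts; this is where you decide who wins
    remoteProvider.ApplyChangeFailed += (sender, e) =>
    {
        // e.Conflict.Type tells you what kind of conflict occurred.
        e.Action = ApplyAction.Continue; // keep the server copy...
        // e.Action = ApplyAction.RetryWithForceWrite; // ...or force the client row through
    };

    var orchestrator = new SyncOrchestrator
    {
        LocalProvider = localProvider,
        RemoteProvider = remoteProvider,
        Direction = SyncDirectionOrder.UploadAndDownload
    };
    orchestrator.Synchronize();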
Think carefully about what to sync. If it's a contacts database, is it good enough to sync just the client name and telephone number, or do you need the whole contact history as well?
Think in terms of syncing a table using rows where the key is all the same value, even if this is a constructed table maintained with triggers etc. This makes the framework sync a much simpler process and less prone to errors (such as tables needing to be synced in different orders).
If it's an invoicing program, maybe an upload-only table of orders is needed, with all the associated invoice, history and reporting tables being updated on the server, rather than updating them on the client and syncing multiple tables...
I have a client who has a product-based website with hundreds of static product pages that are generated by Microsoft Access reports and pushed up to the ISP via FTP (it is an old design). We are thinking about getting a little more sophisticated and creating a data-driven website, probably using ASP.NET MVC.
Here's my question. Since this is a very small business (a handful of employees), I'd like to avoid enterprise patterns like web services if I can. How does one push updated product information to the website, batch-style? In a SQL Server environment, you can't just push up a new copy of the database, can you?
Clarification: The client already has a system at his facility where he keeps all of his product information and specifications. I would like to refresh the database at the ISP with this information.
You don't mention what exactly the data source is, but the implication is that it's not already in SQL Server. If that's the case, have a look at SSIS.
If the source data is in SQL Server, then I think you'd want to be looking at either transactional replication or log shipping to sync the two databases.
If you are modernizing, and it is a handful of employees, why would you push the product info out batch style?
I don't know exactly what you mean by "data driven", but why not allow the ASP.NET app to query the SQL Server product catalog database directly? Why generate static pages at all?
UPDATE: OK, I see - the real question is how to update the SQL database running at the ISP.
You create an admin panel so the client can edit the data directly on the server. It is perfectly reasonable to have the client keep all their records on the server as long as the server is backed up nightly. Many cloud and virtual services offer easy ways to do replicated backups.
The additional benefit of this model is that more than one user can be adding or updating records at a time, making the workforce a lot more scalable. Likewise, the users can log in from anywhere they have a web browser to add new records, fix mistakes made in old records, etc.
EDIT: This approach assumes you can convince the client to abandon their current data entry system in favor of a centralized web-based management panel. Even if this isn't the case, the SQL database can be hosted on the server and the client's application could be made to talk to that so you're only ever using one database. From the sounds of it, it's a set of Access forms and macros which you should have source access to.
Assuming that there is no way to sync the data directly between your legacy system DB (is it in Access, or is Access just running the reports?) and the SQL Server DB on the website (I'm not aware of any):
The problem with "pushing" the data directly into the SQL server will be that "old" (already in the DB) records won't be updated, but instead removed and then recreated. This is a big problem with foreign keys. Plus, I really don't like the idea of giving the client any access to the db at all.
So considering that, I find the best approach is to write a relatively simple page that takes an uploaded file and updates the database. The file will likely be CSV, possibly XML. After a few iterations of writing these pages over the years, here's what I've come up with (there's a code sketch after the list):
1. Show a file upload box.
2. On the next page load, save the file to a temp location.
3. Loop through each line (element in XML) and validate all the data - foreign keys especially, but also business validations. You can also validate that the header row exists, etc. Don't update the database yet.
3a. If invalid data exists, save an error message to an array.
4. At the end of the loop, show the view.
4a. If there were errors, show the list of error messages and tell the user to re-upload the file.
4b. If there were no errors, create a link that carries the file location from step 2 and a confirmation flag.
5. After the file location and confirm flag have been submitted, run the loop from step 3 again, but with an if (confirmed) {} statement that actually makes the updates to the DB.
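A skeleton of that flow in ASP.NET MVC (the controller, views and validation are all hypothetical stand-ins):

    // C# sketch of the upload/validate/confirm flow; all names are hypothetical
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    public class ProductImportController : Controller
    {
        // Steps 1-2: receive the upload and park it in a temp file
        [HttpPost]
        public ActionResult Upload(HttpPostedFileBase file)
        {
            var tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
            file.SaveAs(tempPath);

            // Step 3: dry run - validate only, don't touch the database
            var errors = Process(tempPath, confirmed: false);

            return errors.Any()
                ? View("Errors", errors)              // step 4a
                : View("Confirm", (object)tempPath);  // step 4b
        }

        // Step 5: same loop, this time with the confirm flag set
        [HttpPost]
        public ActionResult Confirm(string tempPath)
        {
            Process(tempPath, confirmed: true);
            return View("Done");
        }

        private List<string> Process(string path, bool confirmed)
        {
            var errors = new List<string>();
            foreach (var line in System.IO.File.ReadLines(path).Skip(1)) // skip the header row
            {
                var fields = line.Split(',');
                if (fields.Length < 2) // stand-in for FK and business validations
                {
                    errors.Add("Bad row: " + line);
                    continue;
                }
                if (confirmed)
                {
                    // actually update the database here
                }
            }
            return errors;
        }
    }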
EDIT: I saw your other post. One of the assumptions I made is that the databases won't be the same; i.e., the legacy app will have a table or two - maybe just products - but the new app will have orders, products, categories, etc. This will complicate "just uploading the file".
Why do you need to push anything?
You just need to create a product-management portion of the website and, secondly, a public-facing portion. Both portions would talk to the same SQL Server database.
.NET has the ability to monitor a database and check for updates; then you can run a query to push the data elsewhere.
Or use SQL to push the data, with a trigger on the table(s) in question.
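If you go the .NET route, SqlDependency is one way to get that notification, assuming Service Broker is enabled on the database (the table and column names here are made up):

    // C# sketch - call SqlDependency.Start(connectionString) once at application startup
    using System.Data.SqlClient;

    void Subscribe(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        // Query notifications require a two-part table name and an explicit column list
        using (var cmd = new SqlCommand("SELECT ProductId, Name, Price FROM dbo.Products", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once per subscription: re-query, push the data on, then re-subscribe
                Subscribe(connectionString);
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader()) { } // executing registers the subscription
        }
    }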
Is this what you were looking for?
You can try Dynamic Data Web Application.
You should have a service that regularly updates the data in the target DB. It will probably run on your source-data machine (where the Access DB is).
The service can use SSIS or ADO.NET to write the data. You can do this over the web, because I assume you have TCP/IP access to the server.
Check when the updates run and how long they take. If you can do the updates during the night, you are fine. If not, you should check whether the site is still accessible during the import; that is sometimes not the case.
Use wget to push the new data file to the MVC app; once the data is received by the action, the MVC app invokes the processing/importing of the data (maybe in a worker process if you don't want long requests).