Building an Occasionally Disconnected WPF App - C#

Hope you can help me get on the right path. I am currently in the design phase of my project. I have a WCF SOAP/REST web service that will be consumed by my WPF client application. Since the WPF application needs to work in both connected and disconnected states, I am having a design problem: how do I implement caching?
I am aware of caching during the runtime of the application using ObjectCache, but ObjectCache only lives while the application is up. If the application is closed and re-opened, I would like to retrieve anything the user has already entered, and if the user is disconnected I want to grab the last web service response and populate the form. One option I thought of is a local database where the client app stores all the data from web service responses along with user-entered/modified data. What I don't like about this option is that I would have to duplicate the server database, its tables, and its data, which I don't think is good practice, nor very secure.
Finally, how do you sync all the data? After a period of disconnection, once you're finally connected I need to call the WCF web service update method and push the information back to the server. Would this be some type of messaging with a batch job running on the client that knows when you're connected and reprocesses all the data? Any thoughts would be great.

What you're looking for is pretty easy to accomplish, and doesn't require a client-side database. Whether you implement it this way really depends on how secure you need the data to be.
To persist data on the client in a totally disconnected way that lets the user exit and return with no risk of losing their entries, your only option is to store the data on the client: if the application cannot reach the web server to persist the changes, and the application closes or crashes, the changes are lost and the user is unhappy.
To make this work, create a serializable class or classes that fit your client-side field requirements. The classes need to implement INotifyPropertyChanged so you can bind your UI fields to them and keep the changes inside the model object (as opposed to the UI controls themselves). Your code-behind also needs to implement INotifyPropertyChanged; give it a property that holds the instance of the data object, and bind your fields to that.
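As a rough illustration, a minimal model class might look like this (the class and property names are placeholders, not anything from your domain):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Hypothetical form model; the fields are placeholders for your own.
public class CustomerForm : INotifyPropertyChanged
{
    private string _name;
    private string _email;

    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged(); }
    }

    public string Email
    {
        get { return _email; }
        set { _email = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string name = null)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}
```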
As the user types or makes changes, your data bindings have three update options: update the property when the user leaves the field (LostFocus), update as the text changes (PropertyChanged), or wait to update until after a specified delay. Whenever an update occurs, the PropertyChanged event is raised. If you attach to this event, you can write a method inside the class to serialize it and save the data as it is entered; a simple XML or JSON file is fine. You also need a Load method that reads the data file, deserializes it, and returns the data object. This way, if the application closes or crashes unexpectedly, you simply call the Load method, set the property in your code-behind to the loaded object, the bindings restore the text, and the user can continue.
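Here is a sketch of the save/load half, assuming the CustomerForm class above and an arbitrary file location under LocalApplicationData (both are assumptions, not required choices):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Sketch only: the file path and class names are assumptions.
public static class DraftStore
{
    private static readonly string FilePath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyApp", "draft.xml");

    public static void Save(CustomerForm form)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(FilePath));
        using (var stream = File.Create(FilePath))
            new XmlSerializer(typeof(CustomerForm)).Serialize(stream, form);
    }

    public static CustomerForm Load()
    {
        if (!File.Exists(FilePath))
            return new CustomerForm();
        using (var stream = File.OpenRead(FilePath))
            return (CustomerForm)new XmlSerializer(typeof(CustomerForm)).Deserialize(stream);
    }
}
```

Wire it up once at startup: load the draft, set your bound property to it, and subscribe form.PropertyChanged += (s, e) => DraftStore.Save(form); so every edit is persisted as it happens.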
To keep everything synchronized, the client sends the object to the server so it can validate and save the changes. I would use a field to track data versions (a TimeStamp/rowversion column if using SQL Server) to prevent a client with outdated data from overwriting newer data, especially if you're in a multi-user environment.
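A rough server-side sketch of that version check, with made-up table, column, and DTO names; the point is that the UPDATE only succeeds if the client's version still matches:

```csharp
using System.Data.SqlClient;
using System.ServiceModel;

// Sketch: optimistic concurrency via a rowversion (TimeStamp) column.
// The table, columns, and CustomerDto type are assumptions.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public byte[] RowVersion { get; set; }
}

public class CustomerService
{
    public void UpdateCustomer(CustomerDto dto, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"UPDATE Customers
              SET Name = @Name, Email = @Email
              WHERE Id = @Id AND RowVersion = @RowVersion", conn))
        {
            cmd.Parameters.AddWithValue("@Name", dto.Name);
            cmd.Parameters.AddWithValue("@Email", dto.Email);
            cmd.Parameters.AddWithValue("@Id", dto.Id);
            // RowVersion is the byte[] the client read originally.
            cmd.Parameters.AddWithValue("@RowVersion", dto.RowVersion);
            conn.Open();
            if (cmd.ExecuteNonQuery() == 0)
                throw new FaultException(
                    "The data was changed by another user; reload and retry.");
        }
    }
}
```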
If your server is able to take advantage of WCF and Entity Framework, you can build a very robust, reliable application very quickly.
As far as security goes, that depends on the type of data being entered and the legal requirements behind it (e.g. credit cards and PCI compliance), so you would have to address those individually.

Related

Desktop application which can work offline when there is no connectivity with SQL Server

I am designing a WPF desktop application and using Entity Framework Code First to create and use a SQL Server database. The database will be hosted on one server machine and will be running 24/7.
I want to provide a feature where you can modify data offline (when you have no connectivity with the SQL Server DB) and save it somehow, and whenever the application finds a connection to SQL Server again, all the changes can be moved to the SQL Server DB.
Is there any way to achieve this using Entity Framework?
I want to emphasize that I am using Entity Framework. Is this type of functionality already implemented by EF? Or do I have to do it manually, e.g. write the changes to the file system and then manually merge them into the DB later?
You could figure out the specific exceptions that are generated when the SQL Server connection is lost, and embed your calls in try-catch blocks. If the server is offline, then in your catch block, pass the entity to a method that serializes it to JSON and saves it to the hard drive in a special directory or something. On your next successful query, check that directory to see if there are any saved entities that still need to be persisted.
Be specific with your catches - you don't want unrelated exceptions to trigger this code.
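A sketch of that catch-and-queue idea, assuming EF6; the "Pending" folder and names are made up, and in practice you would narrow the catch further (e.g. by inspecting SqlException.Number for connection-related errors):

```csharp
using System;
using System.Data.Entity;
using System.Data.SqlClient;
using System.IO;
using System.Runtime.Serialization.Json;

// Sketch only (EF6 assumed): entities should be plain POCOs (no dynamic
// proxies) for this serializer to round-trip them cleanly.
public static class OfflineQueue
{
    public static void SaveOrQueue<T>(DbContext db, T entity) where T : class
    {
        try
        {
            db.SaveChanges();
        }
        catch (Exception ex) when (ex.GetBaseException() is SqlException)
        {
            // Probably a connectivity failure; in practice, also inspect
            // SqlException.Number so unrelated SQL errors don't land here.
            var dir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Pending");
            Directory.CreateDirectory(dir);
            using (var stream = File.Create(Path.Combine(dir, Guid.NewGuid() + ".json")))
                new DataContractJsonSerializer(typeof(T)).WriteObject(stream, entity);
        }
    }
}
```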
Some things to keep in mind - what if somebody else changed the data in the meantime? Are you intending to overwrite those changes? How did you get the data which needs to be saved in the first place if you are offline?
As long as you have all the data loaded into the DbContext/ObjectContext, you're free to amend that data any way you want; the connection is only really needed when SaveChanges() is invoked.
However, if you're going to load everything into the context, you seem to be reimplementing DataSet functionality, which, in addition, allows for XML serialization/deserialization of the changes, so the changes can even be saved between sessions.
Not as trendy as EF, though :)
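For reference, the DataSet round trip mentioned above looks roughly like this (file names are arbitrary):

```csharp
using System.Data;

// Sketch of the DataSet approach: pending changes survive between sessions
// via DiffGram serialization.
var ds = new DataSet("Offline");
// ... fill tables from the server while connected, then let the user edit ...

// On exit, save schema plus data; the DiffGram keeps per-row states
// (Added/Modified/Deleted) and original values:
ds.WriteXmlSchema("offline.xsd");
ds.WriteXml("offline.xml", XmlWriteMode.DiffGram);

// On next start, restore everything; DataAdapter.Update(ds) can later
// replay the pending changes against the server:
var restored = new DataSet("Offline");
restored.ReadXmlSchema("offline.xsd");
restored.ReadXml("offline.xml", XmlReadMode.DiffGram);
```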
While I have never tried this with SQL-based data I have done it in the past with filesystem-based data and it's a major can of worms.
First, you have to have some means of indicating what data needs to be stored locally so that it will be available when you're offline. This will need to be updated either all the time or before you head out--and that can involve a lot of data transfer.
Second, once you're back online there's a lot of conflict resolution that must be done. If there's a realistic chance that someone else might have changed the data while you were out you need some way of detecting the conflict and prompting the user as to what to do in that situation. This almost certainly requires a system that keeps a detailed edit trail on every unit of data that could reasonably be updated.
In my situation I was very fortunate in that it was virtually certain that if the remote user edited file [x], overwriting the system copy was the right thing to do. Since remote users would only be carrying the files that pertained to their projects, conflicts should never happen. Thus the writeback was simply based on timestamps, nothing more. Data which people in the field would not normally need to modify was handled by not even looking at it; modified files were simply copied from the system to the laptop.
This leaves the middle step--saving the pending writes. I disagree with Elemental Pete's answer in this regard--simply serializing them and saving the result does not work, because what happens when you read that data back in again? You see the old copy, not the changed copy!
My approach to this was a local store of all relevant data that was accessed exactly like the main system data was; all reads and writes worked normally.
Something a lot fancier might be needed if you have data that needs transactions involved.
Note that we also hit a nasty human problem: the update process took several minutes (note: >10y ago) simply analyzing what needed to be done, not counting any actual copy time. The result was people bypassing it when they thought they could. Sometimes they thought wrong, oops!

Why would I use Entity Framework in a mobile situation?

I want to save edited values from a WPF mobile app, via a Web API, as the user tabs out of each field. So on the LostFocus event.
When using EF, the whole entity graph is posted (PUT) to the Web API each time a field is updated. Even if I just make a DTO for the basic fields on the form, I would still be posting unnecessary data each time.
I was thinking of forgetting about EF in the Web API and simply posting the entity ID, field name, and new value. Then in the controller, I would create my own SQL update statement and use good old ADO.NET to update the database.
This sounds like going back to the noughties or even the nineties, but is there any reason why I should not do that?
I have read this post which makes me lean towards my proposed solution.
Thanks for any comments or advice
Sounds like you are trying to move away from having a RESTful Web API and towards something a little more RPC-ish. Which is fine, as long as you are happy that the extra hassle of implementing this is worth it in terms of bandwidth saved.
In terms of tech level, you're not regressing by doing what you proposed; I use EF every day but I still often need to use plain old ADO.NET every now and then and there is a reason why it's still well supported in the CLR. So there is no reason not to, as long as you are comfortable with writing SQL, etc.
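If you do go down that road, a minimal sketch of the targeted update might look like the following; since column names can't be parameterized, whitelist them (all names here are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of the proposed single-field update via plain ADO.NET.
public class FieldUpdater
{
    // Only columns on this list may ever reach the SQL text.
    private static readonly HashSet<string> AllowedColumns =
        new HashSet<string> { "Name", "Email", "Phone" };

    public void UpdateField(string connectionString, int id, string column, object value)
    {
        if (!AllowedColumns.Contains(column))
            throw new ArgumentException("Unknown column", "column");

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Customers SET [" + column + "] = @value WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@value", value ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```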
However, I'd advise against your current proposal for a couple of reasons:
Bandwidth isn't necessarily all that precious
Even for mobile devices, sending 20 or 30 fields back at a time probably isn't a lot of data. Of course, only you can know for your specific scenario if that's too much, but considering the widespread availability of 3G and 4G networks, I wouldn't see this as a concern unless those fields contain huge amounts of data - of course, it's your use case, so you know best :)
Concurrency
Unless the form is actually a representation of several discrete objects which can be updated independently, then by sending back individual changes every time you update a field, you run the risk of ending up with invalid state on the device.
Consider for example if User A and User B are both looking at the same object on their devices. This object has 3 fields A, B, C thus:
A-"FOO"
B-"42"
C-"12345"
Now suppose User A changes field "A" to "BAR" and tabs out of the field, and then User B changes field "C" to "67890" and tabs.
Your back-end now has this state for the object:
A - "BAR"
B - "42"
C - "67890"
However, User A and User B now both have an incorrect state for the Object!
It gets worse if you also have a facility to re-send the entire object from either client because if User A re-sends the entire form (for whatever reason) User B's changes will be lost without any warning!
Typically this is why the RESTful mechanism of exchanging full state works so well; you send the entire object back to the server, and get to decide based on that full state, if it should override the latest version, or return an error, or return some state that prompts the user to manually merge changes, etc.
In other words, it allows you to handle conflicts meaningfully. Entity Framework, for example, will give you concurrency checking for free just by including a specially typed column; you can handle the concurrency exception to decide what to do.
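For illustration, roughly what that looks like with EF6 (the entity name and the db context instance are assumptions):

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// Sketch (EF6 assumed): a rowversion column gives you concurrency checking for free.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Timestamp] // maps to a SQL Server rowversion column
    public byte[] RowVersion { get; set; }
}

public class CustomerRepository
{
    public void Save(DbContext db)
    {
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Someone saved first: reload and retry, return an error,
            // or prompt the user to merge their changes.
        }
    }
}
```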
Now, if it's the case that the form is comprised of several distinct entities that can be independently updated, you have more of a task-based scenario so you can model your solution accordingly - by all means send a single Model to the client representing all the properties of all of the individual entities on the form, but have separate POST back models, and a handler for each.
For example, if the form shows Customer Master data and their corresponding Address record, you can send the client a single model to populate the form, but only send the Customer Master model when a Customer Master field changes, and only the Address model when an address field changes, etc. This way you can have your cake and eat it because you have a smaller POST payload and you can manage concurrency.

Notify user of a change to data through ajax or some other mechanism

I need to notify users who are modifying the same page that an update was made to an Excel-grid SPA. I was thinking about passing the date-modified timestamp back and forth: if the original timestamp is older than the current one in the database, it would mean the grid was updated by someone else. Is there a better way to do this?
Since you've mentioned AJAX, I'll assume this is a web application. This sounds like an excellent candidate for bi-directional communication via websockets. I've used SignalR with great success. It will allow you to publish events from the server to any subscribed clients, allowing you to easily update what they are viewing.
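A minimal sketch with ASP.NET SignalR 2.x (the hub and event names are made up):

```csharp
using Microsoft.AspNet.SignalR;

// Clients connect to this hub; the server pushes events through it.
public class GridHub : Hub
{
}

public static class GridNotifier
{
    // Call this from whatever server code saves a grid change.
    public static void NotifyGridChanged(string sheetId, string modifiedBy)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<GridHub>();
        // Every subscribed client receives a "gridChanged" event and can
        // refresh, or warn the user that someone else edited the sheet.
        context.Clients.All.gridChanged(sheetId, modifiedBy);
    }
}
```

On the client, each browser subscribes to the gridChanged event and decides whether to refresh the grid or just show a warning banner.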

Passing data between instances of ASP.NET programs. Have I got the right idea?

I am a little confused as to how ASP.NET works. To my understanding, each time a webpage is created, it is an instance of the ASP.NET program. First of all, is this correct? For my website I have a class called 'Control' which inherits from System.Web.UI.Page, and from which every other class (e.g. the aspx pages and their code-behind pages) inherits. I need to maintain a list of customers etc. somewhere it can be accessed by every user of the website (currently accessing it), and I thought this may be a good place. But if every user is accessing a different instance of the program, this list will be different for every user (as only they will be communicating with it).
If my thoughts are correct, to keep this list updated would I have to synchronize it in every instance of the program somehow (possibly using threading)? Or would I have to connect to an external program which maintains this list? Or am I wrong about everything?
Thanks in advance, and sorry if this sounds like a load of nonsense; I am very confused!
Edit:
Thank you to all who have answered. I already have a database in which this data is being stored, but I also wanted to represent some of the data in the program.
I am making a booking system with a big input form. My plan is to load the data into objects (bookings, customers, etc.) as it comes into the program (so that I don't lose the data during successive postbacks), have these objects write it to the database (it is a requirement of my client that all data be written to the database as soon as it comes into the program, to minimize loss if the system goes down), and then retain those objects on the software side, since the program has to put constraints on what users can book (check that the services are available to them), and that logic is easier with objects than with frequent trips back to the database.
I therefore had the idea of storing this data in a place which was accessible to every website instance, and this is what I was confused about how to do.
It sounds like you are looking for the Cache property of the HttpContext class. The Cache shares data across the application domain, as opposed to the Items collection, which is per-request. See MSDN. Note that you will still need to store the data in a database, as commented above.
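A sketch of the usual lazy-load pattern with Cache; the key, the Customer type, and the loader method are placeholders:

```csharp
using System.Collections.Generic;
using System.Web;

// Placeholder entity for illustration.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Sketch: one cached list shared by every request in the application.
public static class CustomerStore
{
    public static List<Customer> GetCustomers()
    {
        var cache = HttpContext.Current.Cache;
        var customers = cache["Customers"] as List<Customer>;
        if (customers == null)
        {
            customers = LoadCustomersFromDatabase(); // hypothetical helper
            cache.Insert("Customers", customers);    // now visible to every request
        }
        return customers;
    }

    private static List<Customer> LoadCustomersFromDatabase()
    {
        // Placeholder: query the database here.
        return new List<Customer>();
    }
}
```

Since the cache is shared by concurrent requests, guard any mutation of the cached list with a lock.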
You want to store your data in an external place like a database. Your application can then, for every user, load the data it needs to display from that same database. Your application will grow, and if you have to edit the data at a later point in time, you will already have all the needed pieces in place.

Data-Driven Websites for Very Small Businesses

I have a client who has a product-based website with hundreds of static product pages that are generated by Microsoft Access reports and pushed up to the ISP via FTP (it is an old design). We are thinking about getting a little more sophisticated and creating a data-driven website, probably using ASP.NET MVC.
Here's my question. Since this is a very small business (a handful of employees), I'd like to avoid enterprise patterns like web services if I can. How does one push updated product information to the website, batch-style? In a SQL Server environment, you can't just push up a new copy of the database, can you?
Clarification: The client already has a system at his facility where he keeps all of his product information and specifications. I would like to refresh the database at the ISP with this information.
You don't mention what exactly the data source is, but the implication is that it's not already in SQL Server. If that's the case, have a look at SSIS.
If the source data is in SQL Server, then I think you'd want to be looking at either transactional replication or log shipping to sync the two databases.
If you are modernizing, and it is a handful of employees, why would you push the product info out batch style?
I don't know exactly what you mean by "data driven", but why not allow the ASP.NET app to query the SQL Server product catalog database directly? Why generate static pages at all?
UPDATE: OK, I see; the real question is how to update the SQL database running at the ISP.
You create an admin panel so the client can edit the data directly on the server. It is perfectly reasonable to have the client keep all their records on the server as long as the server is backed up nightly. Many cloud and virtual services offer easy ways to do replicated backups.
The additional benefit of this model is that more than one user can be adding or updating records at a time, making the workforce a lot more scalable. Likewise, the users can log in from anywhere they have a web browser to add new records, fix mistakes made in old records, etc.
EDIT: This approach assumes you can convince the client to abandon their current data entry system in favor of a centralized web-based management panel. Even if this isn't the case, the SQL database can be hosted on the server and the client's application could be made to talk to that so you're only ever using one database. From the sounds of it, it's a set of Access forms and macros which you should have source access to.
Assuming that there is no way to sync the data directly between your legacy system DB (is it in Access, or is Access just running the reports?) and the SQL Server DB on the website (I'm not aware of any):
The problem with "pushing" the data directly into the SQL server will be that "old" (already in the DB) records won't be updated, but instead removed and then recreated. This is a big problem with foreign keys. Plus, I really don't like the idea of giving the client any access to the db at all.
So considering that, I find that the best is to write a relatively simple page that takes an uploaded file and updates the database. The file will likely be CSV, possibly XML. After a few iterations of writing these pages over the years, here's what I've come up with:
1. Show the file upload box.
2. On the next page load, save the file to a temp location.
3. Loop through each line (element in XML) and validate all the data: foreign keys especially, but also business validations. You can also validate that the header row exists, etc. Don't update the database yet.
3a. If invalid data exists, save an error message to an array.
4. At the end of the loop, show the view.
4a. If there were errors, show the list of error messages and tell the user to re-upload the file.
4b. If there were no errors, create a link that carries the file location from step 2 and a confirmation flag.
5. After the file location and confirm flag have been submitted, run the loop from step 3 again, but with an if (confirmed) {} statement that actually makes the updates to the db.
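A compressed sketch of that flow as an ASP.NET MVC controller; the view names and validation logic are placeholders:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Web;
using System.Web.Mvc;

// Sketch of the two-pass (validate, then confirm) upload described above.
public class ProductUploadController : Controller
{
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        var tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        file.SaveAs(tempPath);
        var errors = ProcessFile(tempPath, confirmed: false); // steps 3/3a
        if (errors.Count > 0)
            return View("Errors", errors);          // 4a: re-upload required
        return View("Confirm", (object)tempPath);   // 4b: link with confirm flag
    }

    [HttpPost]
    public ActionResult Confirm(string tempPath)
    {
        ProcessFile(tempPath, confirmed: true);      // step 5: same loop, now writes
        return View("Done");
    }

    private List<string> ProcessFile(string path, bool confirmed)
    {
        var errors = new List<string>();
        foreach (var line in System.IO.File.ReadLines(path))
        {
            // Validate foreign keys, business rules, header row, etc.
            // If invalid: errors.Add("...");
            // if (confirmed && errors.Count == 0) { /* update the db */ }
        }
        return errors;
    }
}
```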
EDIT: I saw your other post. One of the assumptions I made is that the databases won't be the same; i.e., the legacy app will have a table or two, maybe just products, but the new app will have orders, products, categories, etc. This will complicate "just uploading the file".
Why do you need to push anything?
You just need to create a product management portion of the webpage and a secondly a public facing portion of the webpage. Both portions would touch the same SqlServer database.
.NET has the ability to monitor a database and check for updates; then you can run a query to push the data elsewhere.
Or use SQL to push the data with a trigger on the table(s) in question.
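For the monitoring route, SqlDependency is the built-in mechanism. A rough sketch, noting that it requires Service Broker enabled on the database and that each notification fires only once (table and column names are assumptions):

```csharp
using System.Data.SqlClient;

// Sketch of SqlDependency-based change monitoring. Notification queries
// need explicit column lists and two-part table names.
public class ProductWatcher
{
    public void Watch(string connectionString)
    {
        SqlDependency.Start(connectionString);

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name, Price FROM dbo.Products", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) =>
            {
                // Fires once per subscription: re-query, push the data, re-subscribe.
            };
            conn.Open();
            cmd.ExecuteReader().Close(); // executing the command registers the subscription
        }
    }
}
```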
Is this what you were looking for?
You could try an ASP.NET Dynamic Data web application.
You should have a service that regularly updates the data in the target DB. It would probably run on your source data machine (where the Access DB is).
The service can use SSIS or ADO.NET to write the data. You can do this over the web, assuming you have TCP/IP access to the server.
Check when the updates are done and how long they take. If you can do the updates during the night, you are fine. If not, check whether the site is still accessible during the import; that is sometimes not the case.
Use wget to push the new data file to the MVC app; once the data is received by the action, the MVC app invokes the processing/importing of the data (maybe in a worker process if you don't want long requests).
