Dynamic WPF approach to REST - C#

I've been working with a product which has an underlying Oracle database. Usually we just reference the access provider DLL, and then we can pretty much allow the user to configure things like a select statement or parameterised SQL for execution against the product (they can only access their own resources via package procedures).
Recently the product has moved to a new architecture whereby each possible request has a REST OData endpoint available. I'd still like to build the same level of configuration into my applications, but I'm struggling to find a way forward.
Previously we could execute an anonymous block which the user could configure to suit their needs. Now it's likely that they might need to call 20 endpoints to achieve the same goal - how can I scale this?
Suggestions?

Call web service from SQL CLR?

I have a SQL Server 2012 stored procedure that returns a table. I have to modify that SP to add an additional value to the returned table. Unfortunately, that added value comes from a call to a web-service. From my research, I gather the main ways to do this are using the OLE Automation procedures (sp_OA...) in SQL, or a SQLCLR stored procedure. Given the security context in which the sp_OA... procedures run, the single return value is a VARCHAR(10) registration key, and calls to the service are few (ten to twenty per hour), I'm guessing the SQLCLR method is the way to go. Also, the web-service is hosted on our intranet, and not accessible to the outside world.
Is there a better way to accomplish what I need? "Better" meaning more performant, better security, and easier to code and maintain.
Please do not use the sp_OA* OLE Automation procedures. They do not appear to be officially deprecated, but SQLCLR replaces both the OLE Automation procedures and Extended Stored Procedures.
Yes, this can be done easily enough in SQLCLR. You can find examples on using WCF (as shown in @CodeCaster's answer) or using HttpWebRequest / HttpWebResponse (I have more info in this answer: How to invoke webservice from SQL Server stored procedure). Also, please be aware that sometimes you will need to add the Serialization Assembly as well: Using Webservices and Xml Serialization in CLR Integration
Coding and Maintenance
Web Services provide a nice API, but if you change the structure you will have to recompile and redeploy at least some part of this. Assuming the information being exchanged is simple enough, I tend to think that treating this as a standard web request adds a lot of flexibility. You can create a generic web request function (scalar or TVF) that takes in the parameters and URI and constructs the properly formatted XML request and sends it to the URI. It then gets the response and merely returns the XML. So you shift a little bit of the responsibility since you now need to parse the XML response rather than getting a nice object. But, XML is easy to parse in SQL Server, and you can re-use this function in any number of places. And, if the remote service is ever updated, updating a Stored Procedure to change the query string that is passed to the Web Service and/or change the parsing of the XML response is a simple ALTER PROCEDURE and should be easy to test. No need to recompile / redeploy the SQLCLR Assembly.
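To make the idea concrete, here is a minimal sketch of such a generic SQLCLR function using HttpWebRequest. The function name, the POST/XML shape, and the lack of error handling are assumptions for illustration only; a real version would need timeouts, credentials, and error handling appropriate to your service.

    using System.Data.SqlTypes;
    using System.IO;
    using System.Net;
    using System.Xml;
    using Microsoft.SqlServer.Server;

    public class WebRequestFunctions
    {
        // Hypothetical generic helper: POSTs an XML payload to the given URI and
        // returns the raw XML response so it can be shredded with T-SQL XML methods.
        [SqlFunction(DataAccess = DataAccessKind.None)]
        public static SqlXml PostXmlRequest(SqlString uri, SqlXml requestBody)
        {
            var request = (HttpWebRequest)WebRequest.Create(uri.Value);
            request.Method = "POST";
            request.ContentType = "text/xml; charset=utf-8";

            using (var writer = new StreamWriter(request.GetRequestStream()))
            {
                writer.Write(requestBody.Value);
            }

            string responseXml;
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                responseXml = reader.ReadToEnd();
            }

            // Wrap the response text back up as XML for the caller to parse.
            return new SqlXml(XmlReader.Create(new StringReader(responseXml)));
        }
    }

From T-SQL the return value can then be shredded with the built-in XML methods (.value(), .nodes()), so changing the request or the parsing really is just an ALTER PROCEDURE as described above.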
Security
Regardless of how "pure" of a web service call you want, the main thing, security-wise, is to NOT be lazy and turn TRUSTWORTHY ON (as also shown in the linked page from @CodeCaster's answer, and unfortunately in most other examples on the interwebs). The proper way to make this secure is to do the following:
Sign your Assembly
In the [master] database, create an Asymmetric Key from the DLL of your Assembly.
Also, in [master], create a Login from that Asymmetric Key
Grant your new Login the EXTERNAL ACCESS ASSEMBLY permission
Create your Assembly with a PERMISSION_SET of EXTERNAL_ACCESS, not UNSAFE
For more details on:
using SQLCLR in general, please visit: SQLCLR Info
using Module Signing, please visit: Module Signing Info
not using TRUSTWORTHY ON, please read: PLEASE, Please, please Stop Using Impersonation, TRUSTWORTHY, and Cross-DB Ownership Chaining
You can definitely call a WCF service using SQL CLR.
If you don't want that, you could write a Windows Service in C# that watches or polls the table for changes. Depending on how you implement this service, the reaction to a new record would be near immediate. Read also How to notify a windows service(c#) of a DB Table Change(sql 2005)?.
Then you can perform the service call from C#, perform the required work and store the result in the column.
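As a rough illustration of that approach, here is a minimal polling sketch. The table, column, and service URL names are invented for the example, and a real Windows Service would add error handling, logging, and a proper host (or use query notifications instead of polling, as in the linked question).

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Net.Http;
    using System.Threading.Tasks;

    class RegistrationKeyWorker
    {
        // Placeholder names: the connection string, queue table, and service URL
        // are assumptions for this sketch, not from the original question.
        const string ConnectionString = "Server=.;Database=AppDb;Integrated Security=true";
        const string ServiceUrl = "http://intranet/registration/key?id=";

        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                while (true)
                {
                    using (var conn = new SqlConnection(ConnectionString))
                    {
                        conn.Open();

                        // Find rows that still need the value from the web service.
                        var pendingIds = new List<int>();
                        var select = new SqlCommand(
                            "SELECT Id FROM dbo.RegistrationQueue WHERE RegistrationKey IS NULL", conn);
                        using (var reader = select.ExecuteReader())
                        {
                            while (reader.Read())
                                pendingIds.Add(reader.GetInt32(0));
                        }

                        foreach (var id in pendingIds)
                        {
                            // Call the intranet web service and store its result back in the row.
                            string key = await http.GetStringAsync(ServiceUrl + id);
                            var update = new SqlCommand(
                                "UPDATE dbo.RegistrationQueue SET RegistrationKey = @key WHERE Id = @id", conn);
                            update.Parameters.AddWithValue("@key", key);
                            update.Parameters.AddWithValue("@id", id);
                            update.ExecuteNonQuery();
                        }
                    }

                    await Task.Delay(TimeSpan.FromSeconds(30)); // simple polling interval
                }
            }
        }
    }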
When you require more information, for example extra variables obtained during the exchange, you could introduce a new table for storing that, and the actual result you're interested in. Then join that table from the table in your question.

How to create an IQueryable Web API that can pull data from several data sources?

I'm trying to figure out how to write an IQueryable data source that can pull and combine data from multiple sources (in this case Azure Table, Azure Blobs, and ElasticSearch). I'm really having a hard time figuring out where to start with this though.
The idea is that a web service (in this case an ASP.NET Web API) can present a queryable OData interface, but when it gets queried it pulls data from multiple sources depending on what is requested. So large queries might hit the indexing service (ElasticSearch), which wouldn't necessarily have the full object available, but calls to get an individual object would go directly to the Azure Tables. From the service user's perspective, though, it's always just the same data source.
While I would like to just use the index as our search service and the tables as our backup, I have a design requirement that it has to pull data from multiple sources, which greatly complicates this whole thing.
I'm wondering if anyone has any guidance on this or can point me towards the right technologies. Some of the big issues I'm seeing are:
the back-end objects aren't necessarily the same as the front-end objects being queried. Multiple back-end objects may get combined into a single front-end one, or it may have computed values, so a LINQ query would have to be translated or mapped
changing data sources based on query parameters
Here is a quick overview of the technology I'm working with:
ASP.Net Web API 2 web service running as an Azure Cloud service
ElasticSearch running on SUSE VMs (on Azure)
Azure Tables
Azure Blobs
First, you need to separate the data access from the Web API project. The Web API project is merely an interface, so remove it from the equation. The solution to the problem should be the same regardless of whether it is web API or an ASP.NET web page, an MVC solution, a WPF desktop application, etc.
You can then focus on the data problem. What you need is some form of "router" to determine the data source based on the parameters that drive the decision. In this case, you are talking about one item = Azure Tables and more than one item = the index, with a map/reduce step when more than one item is involved (I would set the rules up as a strategy or similar so you can swap them out if you find that "1 versus 2+" is not a good condition to change routing on).
Then you solve the data access problem for each methodology.
The system as a whole.
User asks for data (user can be a real person or another system through the web api)
Query is partially parsed to determine routing path
Router sends data request to proper class that handles data access for the route
Data is returned
Data is routed back to the user via whatever User interface is used (in this case Web API - see paragraph 1 for other options)
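A minimal sketch of that routing idea, with invented names (ProductDto, IProductSource, and so on) and only the strategy-selection part shown; the real mapping from back-end objects (Azure Tables rows, ElasticSearch documents) to the front-end model would live inside each source implementation:

    using System.Collections.Generic;
    using System.Linq;

    // Front-end model exposed over OData; back-end objects get mapped onto it.
    public class ProductDto
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    // Parsed query parameters used only for the routing decision.
    public class ProductQuery
    {
        public string Id { get; set; }          // set for single-item lookups
        public string SearchText { get; set; }  // set for broad searches
    }

    // One implementation per data source, e.g. an AzureTableProductSource whose
    // CanHandle returns true when query.Id is set, and an ElasticSearchProductSource
    // that handles free-text searches (and may return partially populated objects).
    public interface IProductSource
    {
        bool CanHandle(ProductQuery query);
        IEnumerable<ProductDto> Execute(ProductQuery query);
    }

    // The "router": picks the first strategy that says it can handle the query,
    // so the rules can be swapped out without touching the callers.
    public class ProductRouter
    {
        private readonly IList<IProductSource> _sources;

        public ProductRouter(IList<IProductSource> sources)
        {
            _sources = sources;
        }

        public IEnumerable<ProductDto> Query(ProductQuery query)
        {
            return _sources.First(s => s.CanHandle(query)).Execute(query);
        }
    }

Each source implementation also owns the mapping from its back-end shape to the front-end model, which keeps the LINQ/OData translation concern local to the source that understands it.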
One caution. Don't try to mix all types of persistence, as a generic "I can pull data or a blob or a {name your favorite other persistent storage here}" often ends up becoming a garbage can.
This post has been out a while. The 2nd / last paragraph is close, yet still restricted... Even a couple of years ago, this architecture was already commonplace.
Whether the core interface is written in WPF, ASP.NET, Java, or whatever else, the critical path is the result set returned for a query for information. This is high-level, but I'm sharing more than I should because of other details of a project I've been part of for several years.
Develop your core interface. We did a complete shell that replaced Windows/Linux entirely.
Develop a solution architecture wherein Providers are both the source component and the publishing component.
Now, regardless of your query "source", it's just another Provider. The interfacing to that Provider is abstract and consistent, regardless of the Provider::SourceAPI/ProviderSourceAPI::Interface.
When the user wants to query for anything... literally anything... criminal background checks... just hit Google... query these specific public libraries in SW-somewhere USA / Anywhere USA for activity on checkouts or check-ins - it's all relevant. Step back and consider the objective. No solution is too small, and guaranteed, none too large for this: abstract the objectives of the solution and code them.
All queries - regardless of what is being searched for - are simply queries.
All responses - regardless of the response/result-set - are results - the ResultantProviderModel / ResultantProviderController (no, I'm not referencing MVC specifically).
I cannot code you a literal example here... but I hope I challenge you to consider the approach and solution as something much more abstract and open than what I've read here. The physical implementation should be much more simplified and VERY abstracted from any specific technology stack. The searched source? It MUST be abstract, and use a Provider architecture to implement it. So if I have a tool my desktop or office workers use and they query for something... What has John Doe written on physics???
In a corporation leveraging SharePoint and FAST Search? This is easy, out of the box stuff...
For a custom user-interfacing component - well, then you have the back-end plumbing to resolve. So abstract each piece/layer from an architectural approach. Pseudo-code it out, however you choose to do that. Most important is that you do not get locked into a mindset tied to a specific development paradigm, language, IDE, or whatever. If you can design the solution abstractly and walk through it in pseudo code, and do this for each abstraction layer, then start coding it. The source is relative... The publishing aspect is relative and consistent.
I do not know if you'll grasp this - but perhaps someone will - and it'll prove helpful.
HTH's...

C#/SQL Cloud application - clients on different versions

I'm designing a small application for a group of users at different physical locations. The application will connect to a central database in the cloud (well, on a central server - think cloud, but not really cloud). The database is held centrally to facilitate backups in a central location. I'm a seasoned developer, and the connection methods, code and other factors really aren't the issue.
However, I need to allow the application to be upgraded to a newer version whenever the user sees fit - not on any kind of schedule. In a new update, the database schema could possibly change. So I'm going to run into the problem of User A downloading the new version and upgrading the database. Users B, C and D will then get errors when they try to hit the database, as tables/views may not be there.
I've thought about maintaining different databases on the same server. When User A upgrades, we'll "push" their database values to DB_V2 from DB_V1 and they'll use that one. Users B, C & D will still be able to use DB_V1 until they decide to upgrade. Eventually, DB_V1 can be removed when all users have upgraded away from that database.
Can I get some thoughts on the best way to handle this in a cloud-esque application? How are DB updates normally done/handled when clients might be on different versions?
Unfortunately, it is hard and there is no silver bullet. The name of the game is versioning, and you must support overlapping versions. Your access API must be versioned. E.g. consider that the client communicates with the 'cloud' over a REST interface. The root URL for the API could be something like http://example.com/api/v1.1/. When you deploy a new version, you also move the API to http://example.com/api/v1.2/ and expose the new features in this new API, but continue to support the old v1.1 too. You give clients a grace period to upgrade to v1.2, and retire v1.1 sometime in the future, after a sufficient number of clients have upgraded. The REST API Design Handbook is a good resource on the topic.
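For the REST case, a minimal sketch of what overlapping versions can look like with ASP.NET Web API 2 attribute routing (controller names and DTO shapes are invented; this assumes config.MapHttpAttributeRoutes() runs at startup):

    using System.Web.Http;

    // Two controllers exposing the same resource under different versioned route
    // prefixes, so v1.1 clients keep working while v1.2 rolls out.

    [RoutePrefix("api/v1.1/items")]
    public class ItemsV11Controller : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            // v1.1 representation: the original attributes only.
            return Ok(new[] { new { Id = 1, Name = "Widget" } });
        }
    }

    [RoutePrefix("api/v1.2/items")]
    public class ItemsV12Controller : ApiController
    {
        [Route("")]
        public IHttpActionResult Get()
        {
            // v1.2 adds an attribute; rows created through v1.1 get a default/NULL here.
            return Ok(new[] { new { Id = 1, Name = "Widget", Category = (string)null } });
        }
    }

The same idea works with route tables or an API-versioning library; the key point is that both controllers stay deployed during the grace period.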
Now the real problem is your back end, the code that sits behind the URL service. You have to carefully design each upgrade step so as to maintain backward compatibility with the previous version. It is very likely that both overlapping versions would use the same storage (same DB). If a user adds an item using the v1.2 API, he usually expects to find it using the v1.1 API later, albeit possibly missing some attributes specific to v1.2. Your back end has to decide how to handle the v1.2 attributes for items added/edited using the v1.1 API (e.g. default values, NULLs etc.). As I said, it is hard and there is no silver bullet. Sometimes you may have to bite the bullet and provide no backward compatibility (items added with v1.2 are not visible to v1.1 API clients, i.e. use separate storage, different DBs).
How about the case where your client accesses the DB directly? I.e. there is no explicit API; the client connects straight to the database. Your chances of success have diminished significantly... You can have your interaction with the DB go through an API, e.g. use only stored procedures for everything. By using schemas as namespaces you can provide versioning, e.g. exec [v1.1].[getProducts] vs. exec [v1.2].[getProducts]. But it is cumbersome, hard, error prone, and you can kiss goodbye most of the dev tools, wizards, ORMs and other whistles and bells.

Best approach to incrementally update application data

I have been working on an application for a couple of years that I update using a back-end database. The whole key is that everything is cached on the client, so that it never requires a network connection to operate, but when it does have a connection it will always pick up the latest updates. Every application update is shipped with the latest version of the database, and I want it to download only the minimum amount of data when the database has been updated.
I currently use a table with a timestamp to check for updates. It looks something like this.
ID - Name - Description- Severity - LastUpdated
0 - test.exe - KnownVirus - Critical - 2009-09-11 13:38
1 - test2.exe - Firewall - None - 2009-09-12 14:38
This approach was fine for what I previously needed, but I am looking to expand more functions of the application to use this type of dynamic approach. All the data is currently stored as XML, but I do not want to store complete XML files in the database; I only want to transmit changed data.
So how would you go about allowing a fairly simple approach to storing dynamic content (text/xml/json/xaml) in a database, and have the client download only new updates? I was thinking of having logic that can handle XML inserted directly:
ID - Data - Revision
15 - XXX - 15
XXX would be something like <Content><File>Test.dll</File><Description>New DLL to load.</Description></Content> and would be inserted into the cache, but this would obviously be complicated, as I would need to load the entries in sequence.
Another approach that has been mentioned was to base it on something similar to Source Control, storing the version in the root of the file and calculating the delta to figure out the minimal amount of data that need to be sent to the client.
Anyone got any suggestions on how to approach this with no risk of data corruption? I would also like to expand this with features that allow me to revert possibly bad revisions and replace them with new working ones.
It really depends on the tools you are using and the architecture you already have. Is there already a server with some logic and a data access layer?
Dynamic approaches might get complicated, slow and limit the number of solutions. Why do you need a dynamic structure? Would it be feasible to just add data by using a name-value pair approach in a relational database? Static and uniform data structures are much easier to handle.
Before going into detail, you should consider the different scenarios.
Items can be added
Items can be changed
Items can be removed (I assume)
Adding is not a big problem. The client needs to remember the last revision number it got from the server, and you write a query which gets everything since then.
Changing is basically the same. You should take care with the identification of the items: you need an unchangeable surrogate key, which seems to be the ID you already have. (GUIDs may be useful here.)
Removing is tricky. You need to either flag items as deleted instead of actually removing them, or have a list of removed IDs with the revision number when they had been removed.
Storing the data in the client: Consider using a relational database like SQLite in the client. (It doesn't need installation, it is just storing in a file. Firefox for instance stores quite a lot in SQLite databases.) When using the same in the server, you can probably reuse some code. It is also transaction based, which helps to keep it consistent (rollback in case of error during synchronization).
XML - if you really need it - can be stored just as a string in the database.
When using an abstraction layer or ORM that supports SQLite (eg. NHibernate), you may also reuse some code even when there is another database used by the server. Note that the learning curve for such an ORM might be rather steep. If you don't know anything like this, it could be too much.
You don't need to force reuse of code in the client and server.
Synchronization itself shouldn't be very complicated. You have a revision number in the client and a last revision in the server. You get all new / changed and deleted items since then in the client and apply it to the local store. Update the local revision number. Commit. Done.
I would never update only a part of a revision, because then you can't really know what changed since the last synchronization. Because you do differential updates, it is essential to have a well defined state of the client.
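A minimal sketch of that synchronization flow follows. The item shape, soft-delete flag, and in-memory store are placeholders; a real client would persist to SQLite and wrap the whole method in a transaction as described above.

    using System.Collections.Generic;
    using System.Linq;

    public class SyncItem
    {
        public int Id { get; set; }          // unchangeable surrogate key
        public string Data { get; set; }     // e.g. the XML payload as a string
        public long Revision { get; set; }
        public bool IsDeleted { get; set; }  // flagged instead of physically removed
    }

    public interface ISyncServer
    {
        // Everything added, changed, or flagged deleted after the given revision.
        IList<SyncItem> GetChangesSince(long revision);
    }

    public class ClientStore
    {
        public long LastRevision { get; private set; }
        private readonly Dictionary<int, SyncItem> _items = new Dictionary<int, SyncItem>();

        public void Synchronize(ISyncServer server)
        {
            IList<SyncItem> changes = server.GetChangesSince(LastRevision);
            foreach (var item in changes)
            {
                if (item.IsDeleted)
                    _items.Remove(item.Id);      // apply deletions
                else
                    _items[item.Id] = item;      // insert or update by key
            }

            if (changes.Count > 0)
                LastRevision = changes.Max(i => i.Revision);
        }
    }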
I would go with a solution using Sync Framework.
Quote from Microsoft:
Microsoft Sync Framework is a comprehensive synchronization platform enabling collaboration and offline for applications, services and devices. Developers can build synchronization ecosystems that integrate any application, any data from any store using any protocol over any network. Sync Framework features technologies and tools that enable roaming, sharing, and taking data offline.
A key aspect of Sync Framework is the ability to create custom providers. Providers enable any data sources to participate in the Sync Framework synchronization process, allowing peer-to-peer synchronization to occur.
I have just built an application pretty much exactly as you described. I built it on top of the Microsoft Sync Framework that DjSol mentioned.
I use a C# front end application with a SqlCe database, and a SQL 2005 Server at the other end.
The following articles were extremely useful for me:
Tutorial: Synchronizing SQL Server and SQL Server Compact
Walkthrough: Creating a Sync service
Step by step N-tier configuration of Sync services for ADO.NET 2.0
How to Sync schema changed database using sync framework?
You don't say what your back-end database is, but if it's SQL Server you can use SqlCE (SQL Server Compact Edition) as the client DB and then use RDA merge replication to update the client DB as desired. This will handle all your requirements for sure; there is no need to reinvent the wheel for such a common requirement.

What's the best way to reuse database access across multiple projects to ensure that when updated it does not break any projects?

I have probably written the same LINQ to SQL statement 4-5 times across multiple projects. I don't even want to have to paste it. We use DBML files combined with Repository classes. I would like to share the same Library across multiple projects, but I also want to easily update it and ensure it doesn't break any of the projects. What is a good way to do this? It is OK if I have to change my approach, I do not need to be married to LINQ to SQL and DBML.
We have both console apps and MVC web apps accessing the database with their own flavor of the DBML, and there have been times when a major DB update has broken them.
Also, currently each project accesses the DB itself, and the DB is sometimes on another server, etc. Would it be possible to eliminate the DB layer from each project altogether? It might help with the problem above, and be better for security and data integrity, if I could manage all the database access through a centralized application that my other applications could use directly rather than calling the database themselves.
Any ideas?
The way I handle this is using WCF Data Services. I have my data models and services in one project and host this on IIS. My other projects (whatever they may be) simply add a service reference to the URI and then access the data they need over the wire. My database stuff all happens on the service; my individual projects don't touch the database at all - they don't even know a database exists.
It's working out pretty well but there are a few "gotchas" with WCF. You can even create "WebGet" methods to expose commonly used methods via the service.
Let me know if you want to see some example code :-)
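For instance, a minimal sketch of such a service might look like the following. MyDataContext and Customer stand in for your existing DBML/EF model, and the GetActiveCustomers operation is an invented example of a commonly used query exposed once instead of being re-written in every project.

    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;
    using System.ServiceModel.Web;

    public class AppDataService : DataService<MyDataContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("GetActiveCustomers", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
        }

        // A shared query exposed as a service operation, callable over the wire
        // as .../AppDataService.svc/GetActiveCustomers
        [WebGet]
        public IQueryable<Customer> GetActiveCustomers()
        {
            return CurrentDataSource.Customers.Where(c => c.IsActive);
        }
    }

Client projects then just add a service reference to the .svc URI and query the exposed sets, so a schema change only has to be absorbed in this one service project.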
