Call web service from SQL CLR? - c#

I have a SQL Server 2012 stored procedure that returns a table. I have to modify that SP to add an additional value to the returned table. Unfortunately, that added value comes from a call to a web service. From my research, I gather the main ways to do this are the OLE Automation procedures (sp_OA...) in SQL, or a SQLCLR stored procedure. Given the security context in which the sp_OA... procedures run, and the fact that the single return value is a VARCHAR(10) registration key and calls to the service are few (ten to twenty per hour), I'm guessing the SQLCLR method is the way to go. Also, the web service is hosted on our intranet and is not accessible to the outside world.
Is there a better way to accomplish what I need? "Better" meaning more performant, more secure, and easier to code and maintain.

Please do not use the sp_OA* OLE Automation procedures. They do not appear to be officially deprecated, but SQLCLR replaces both the OLE Automation procedures and Extended Stored Procedures.
Yes, this can be done easily enough in SQLCLR. You can find examples of using WCF (as shown in @CodeCaster's answer) or using HttpWebRequest / HttpWebResponse (I have more info in this answer: How to invoke webservice from SQL Server stored procedure). Also, be aware that you will sometimes need to add the Serialization Assembly as well: Using Webservices and Xml Serialization in CLR Integration
Coding and Maintenance
Web Services provide a nice API, but if you change the structure you will have to recompile and redeploy at least some part of this. Assuming the information being exchanged is simple enough, I tend to think that treating this as a standard web request adds a lot of flexibility. You can create a generic web request function (scalar or TVF) that takes in the parameters and URI and constructs the properly formatted XML request and sends it to the URI. It then gets the response and merely returns the XML. So you shift a little bit of the responsibility since you now need to parse the XML response rather than getting a nice object. But, XML is easy to parse in SQL Server, and you can re-use this function in any number of places. And, if the remote service is ever updated, updating a Stored Procedure to change the query string that is passed to the Web Service and/or change the parsing of the XML response is a simple ALTER PROCEDURE and should be easy to test. No need to recompile / redeploy the SQLCLR Assembly.
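As a rough illustration of that idea, a generic SQLCLR scalar function might look something like this (a minimal sketch using HttpWebRequest; the function and parameter names are placeholders, and the assembly needs the EXTERNAL_ACCESS permission set described below):

using System.Data.SqlTypes;
using System.IO;
using System.Net;
using Microsoft.SqlServer.Server;

public class WebRequestFunctions
{
    // Generic helper: POSTs the supplied XML to the given URI and returns the
    // raw XML response as a string for T-SQL to parse with its XML methods.
    [SqlFunction(DataAccess = DataAccessKind.None)]
    public static SqlString PostXmlRequest(SqlString uri, SqlString requestXml)
    {
        if (uri.IsNull || requestXml.IsNull)
            return SqlString.Null;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri.Value);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";

        using (StreamWriter writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write(requestXml.Value);
        }

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return new SqlString(reader.ReadToEnd());
        }
    }
}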
Security
Regardless of how "pure" a web service call you want, the main thing, security-wise, is to NOT be lazy and turn TRUSTWORTHY ON (as is also shown in the linked page from @CodeCaster's answer, and unfortunately in most other examples on the interwebs). The proper way to make this secure is to do the following:
Sign your Assembly
In the [master] database, create an Asymmetric Key from the DLL of your Assembly.
Also, in [master], create a Login from that Asymmetric Key
Grant your new Login the EXTERNAL ACCESS ASSEMBLY permission
Create your Assembly with a PERMISSION_SET of EXTERNAL_ACCESS, not UNSAFE
For more details on:
using SQLCLR in general, please visit: SQLCLR Info
using Module Signing, please visit: Module Signing Info
not using TRUSTWORTHY ON, please read: PLEASE, Please, please Stop Using Impersonation, TRUSTWORTHY, and Cross-DB Ownership Chaining

You can definitely call a WCF service using SQL CLR.
If you don't want that, you could write a Windows Service in C# that watches or polls the table for changes. Depending on how you implement this service, the reaction to a new record would be near immediate. Read also How to notify a windows service(c#) of a DB Table Change(sql 2005)?.
Then you can perform the service call from C#, perform the required work and store the result in the column.
If you require more information, for example extra variables obtained during the exchange, you could introduce a new table to store that along with the actual result you're interested in, and then join to it from the table in your question.
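For illustration, a very rough polling sketch of such a service (table, column, and service-call names are placeholders; a SqlDependency-based notification could replace the timer if you go that route):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

public class RegistrationKeyWorker
{
    private readonly string _connectionString;
    private readonly Timer _timer;

    public RegistrationKeyWorker(string connectionString)
    {
        _connectionString = connectionString;
        // Poll once a minute; adjust to taste or replace with SqlDependency.
        _timer = new Timer(_ => Poll(), null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private void Poll()
    {
        using (var conn = new SqlConnection(_connectionString))
        {
            conn.Open();

            // Find rows that still need a value from the web service.
            var pending = new List<int>();
            using (var select = new SqlCommand(
                "SELECT Id FROM dbo.Registrations WHERE RegistrationKey IS NULL", conn))
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                    pending.Add(reader.GetInt32(0));
            }

            foreach (int id in pending)
            {
                string key = CallWebService(id);   // WCF proxy or HttpWebRequest call

                using (var update = new SqlCommand(
                    "UPDATE dbo.Registrations SET RegistrationKey = @key WHERE Id = @id", conn))
                {
                    update.Parameters.AddWithValue("@key", key);
                    update.Parameters.AddWithValue("@id", id);
                    update.ExecuteNonQuery();
                }
            }
        }
    }

    private string CallWebService(int id)
    {
        // Call the intranet service here and return the VARCHAR(10) registration key.
        throw new NotImplementedException();
    }
}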

Related

Dynamic WPF approach to REST

I've been working with a product which has an underlying Oracle database. Usually we just reference the access provider DLL, and then we can pretty much allow the user to configure things like the select statement or parameterised SQL for execution against the product (they can only access their own resources via package procedures).
Recently the product has developed a new architecture whereby each possible request has a REST OData endpoint available. I'd like to still build the same level of configuration into my applications, but I'm struggling to find a way forward.
Previously we could execute an anonymous block which the user could configure to suit their needs. Now it's likely that they might need to call 20 endpoints to achieve the same goal - how can I scale this?
Suggestions?

What is the best way to store some immutable and highly accessed data in C#?

Some background - I am working on a project which requires a kind of handshake authentication. The external service will send a request with a Token, and I will answer with a Validator. Then it will send a second request containing the same Token and the data I should store in my database. The Token is also used to get a couple of extra fields that are required to insert the data into the database. Due to several project constraints and requirements, this "api" is implemented in Serverless (Azure Functions).
Since there are only 100-something token-validator pairs that are not updated often (I will update them manually every month or so), I have decided not to query the database every time I get an incoming request. Normally I would simply use caching in C#, but since I am working with Functions, the code will be executed in multiple changing processes, which means no shared cache. I also think that using a cache service, such as Redis or Azure Cache, would be overkill.
My current solution - Currently, I am storing the data in a Hashtable that maps a Token to a ValidatorModel object that contains the validator and the extra fields I require. It works pretty well, but since it is a big C# object, it is a pain to update, the IDE lags when I open it, etc. I also don't know if it is a good idea to have it hardcoded in C# like that.
What I have been thinking about - I was thinking about storing a binary protobuf file that contained my Hashmap object. I am unsure if this would work or perform well.
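To illustrate what I mean, this is roughly the shape of the "load once per process" idea (shown here deserializing a JSON file with Newtonsoft.Json instead of protobuf just to keep the sketch short; the file name and model fields are placeholders):

using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public class ValidatorModel
{
    public string Validator { get; set; }
    public string ExtraField1 { get; set; }
    public string ExtraField2 { get; set; }
}

public static class TokenStore
{
    // Loaded once per process; warm function invocations reuse the same process,
    // so they never touch the file again.
    private static readonly Lazy<Dictionary<string, ValidatorModel>> Map =
        new Lazy<Dictionary<string, ValidatorModel>>(Load);

    public static bool TryGetValidator(string token, out ValidatorModel model) =>
        Map.Value.TryGetValue(token, out model);

    private static Dictionary<string, ValidatorModel> Load()
    {
        // tokens.json is deployed alongside the function app.
        var path = Path.Combine(AppContext.BaseDirectory, "tokens.json");
        return JsonConvert.DeserializeObject<Dictionary<string, ValidatorModel>>(
            File.ReadAllText(path));
    }
}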
My question - What is the best way to store such data and access it in a performant way?

Sending Correlation ID from Code to SQL Server

Is there a way to send a correlation ID from C# code to SQL Server at the command level?
For instance, using x-correlation-id is an accepted way to track a request through all parts of the system. We are looking for a way to pass this string value to stored procedure calls in SQL Server.
I spent some time reading through documents and posts but was not able to find anything useful.
Can someone please let me know if there is a way to do this? The goal is to be able to track a specific call through all services (which we can do now) and DB calls (which we cannot, and are looking for a solution to).
I know this answer is a year late, but in case somebody has the same question:
Since EF Core 2.2, MS provides a method called "TagWith()" that lets you pass your own annotation along with the EF query to SQL Server. This way, you can easily track the SQL query with the same correlation id generated in your C# code.
https://learn.microsoft.com/en-us/ef/core/querying/tags
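For example (the entity and context names here are made up; only TagWith itself comes from EF Core):

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderQueries
{
    // The tag is emitted as a SQL comment ahead of the generated query,
    // so it shows up in Profiler / Extended Events output.
    public static List<Order> GetOrders(ShopContext db, int customerId, string correlationId)
    {
        return db.Orders
            .Where(o => o.CustomerId == customerId)
            .TagWith("x-correlation-id: " + correlationId)
            .ToList();
    }
}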
Unfortunately, this new feature is not available in EF 6. But we are not the only ones in this situation. If you just need a simple solution, you could check the thread here and the MS documents.
If you need a more stable solution, you could check this NuGet plugin for EF 6 as well.
To pass your correlation id to SQL Server you have two options:
explicitly pass it as a parameter to your queries & stored procedures.
This is annoying as it requires work to change all your db calls to have a parameter like @correlationId, and it often doesn't make sense to have that parameter for simple data-retrieval queries. Perhaps you decide to only pass it for data-modification operations.
But on the positive side it's really obvious where the correlation info comes from (i.e. nobody reading the code will be confused) and doesn't require any additional db calls.
If all your data-modification is done using stored procs I think this is a good way to go.
use SQL Server's SESSION_CONTEXT(), which is a way you can set session state on a connection that can be retrieved from within stored procs etc.
You can find a way to inject it into your db layer (e.g. this) so the session context is always set on a connection before executing your real db calls; a sketch of this follows below. Then within your procs/queries you get the correlation id from SESSION_CONTEXT and write it to wherever you want to store it (e.g. some log table, or as a column on the tables being modified).
This can be good as you don't need to change each of your queries or procs to have the @correlationId parameter.
But it's often not so transparent how the session context is magically set. Also, you need to be sure it's always set correctly, which can be difficult with ORMs, connection pooling, and other architectural complexities.
If you're not already using stored procs for all data modification, and you can get this working with your db access layer, and you don't mind the cost of the extra db calls, this is a good option.
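Here is a minimal sketch of that second option, assuming you can wrap connection opening somewhere central (sp_set_session_context is the real system procedure and needs SQL Server 2016 or later; everything else is illustrative):

using System;
using System.Data;
using System.Data.SqlClient;

public static class CorrelatedDb
{
    // Opens a connection, stamps the correlation id into SESSION_CONTEXT,
    // then hands the connection to the real work. Procs can read it with
    // SESSION_CONTEXT(N'CorrelationId') and log it wherever they like.
    public static void Execute(string connectionString, string correlationId,
                               Action<SqlConnection> work)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            using (var cmd = new SqlCommand("sp_set_session_context", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@key", "CorrelationId");
                cmd.Parameters.AddWithValue("@value", correlationId);
                cmd.ExecuteNonQuery();
            }

            work(conn);
        }
    }
}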
I wish this was easier.
Another option is to not pass it to SQL Server at all, but instead log all your SQL calls from the tier that makes the call and include the correlation id in those logs. That's how Application Insights and .NET seem to do it by default: logging SQL calls as a dependency along with the SQL statement and the correlation id.

Is it bad practice to store SQL stored procedures and parameters in a database table

We have a use case where an app that sends out emails finds a specific string ('smart tag') in an email and replaces it with the results of a stored procedure.
So for example the email could have Dear <ST:Name> in the body, and the code would then identify this string and run the stored procedure to find the client name, passing in the client id as a parameter.
The list of these tags and the stored procedures that need to be run are currently hard coded, so every time a new 'smart tag' needs to be added, a code change and deployment is required.
Our BAs are skilled in SQL and want to be able to add new tags manually.
Is it bad practice to store the procedure and parameters in a database table? Would this be a suitable design for such a table? Would it be necessary to store parameter type?
SmartTag (SmartTagId, SmartTag, StoredProcedure)
SmartTagParameters (SmartTagParameterId, SmartTagId, ParameterName)
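For context, the consuming code would then look up the configured procedure and run it, roughly like this (illustrative only; the real code would need error handling and whatever parameter typing we settle on):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class SmartTagRunner
{
    // Executes the stored procedure configured for a tag, supplying the client id
    // for each parameter name listed in SmartTagParameters.
    public static string Resolve(SqlConnection conn, string procedureName,
                                 IEnumerable<string> parameterNames, int clientId)
    {
        using (var cmd = new SqlCommand(procedureName, conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            foreach (var name in parameterNames)
                cmd.Parameters.AddWithValue("@" + name, clientId);

            return (string)cmd.ExecuteScalar();
        }
    }
}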
Table driven configuration, data driven programming, is good.
The primary thing to watch out for is SQL Injection risk (or in your case it would be called 'tag injection'...): one could use the email as an attack vector to gain elevated privileges by inserting a crafted procedure that would be run under higher privileges. Note that this is more than just the usual caution around SQL Injection, since you are already accepting arbitrary code to be executed. This is more of a sandboxing problem.
Typical problems are from the type system: parameters have various types but the declaration tables have a string type for them. SQL_VARIANT can help.
Another potential problem is the language used to declare and discover tags. Soon you'll be asked to recognize <tag:foo>, but only before <tag:bar>. A fully fledged context-sensitive parser usually follows shortly after the first iteration... It would help to leverage something already familiar (eg. think how jQuery uses the CSS selector syntax). HTMLAgilityPack could perhaps help you (and btw, this is a task perfectly suited to SQLCLR; don't try to build an elaborate stateful parser in T-SQL...).
It's not bad practice; what you are doing is totally fine. As long as only your admin/BA can add and change parameters and change configuration, you do not have to worry about injection. If users can add and change parameters, you really need to check their input and whitelist certain chars.
It's not only SQL injection you have to check for, but cross-site scripting, DOM injection, and cross-site request forgery as well. The merged text is displayed on a user's computer, so you have to protect them when viewing your merge result.
Interesting question; I will follow up, as it has something to do with mine. It is not bad practice at all. In fact, we are using the same approach. I'm trying to achieve similar goals in my XSL editor. I'm using a combination of XML tags, stored procedures and VB.Net logic to do the replacements.
I'm using a combination of a table with all the used XML tags (they are used in other places in the application) and stored procedures that do all the dirty work. One set of stored procedures transforms the tagged text into user-readable text. Another set of procedures creates an XML tree from the XML tags table so the users can choose from it when editing their text.
SQL Injection is not an issue for us as we use these procedures to create emails, not to parse them from external sources.
Regarding a comment on the question: we also manage the tags directly from SSMS, with no admin window, at least for now. But we plan to add a simple admin window to manage the tags so it will be easier to add/delete/modify them once the application is deployed.

Best approach to incrementally update application data

I have been working on an application for a couple of years that is updated using a back-end database. The whole key is that everything is cached on the client, so that it never requires a network connection to operate, but when it does have a connection it will always pick up the latest updates. Every application update is shipped with the latest version of the database, and I want it to download only the minimum amount of data when the database has been updated.
I currently use a table with a timestamp to check for updates. It looks something like this.
ID - Name - Description - Severity - LastUpdated
0 - test.exe - KnownVirus - Critical - 2009-09-11 13:38
1 - test2.exe - Firewall - None - 2009-09-12 14:38
This approach was fine for what I previously needed, but I am looking to expand more functions of the application to use this type of dynamic approach. All the data is currently stored as XML, but I do not want to store complete XML files in the database and only want to transmit changed data.
So how would you go about allowing a fairly simple approach to storing dynamic content (text/xml/json/xaml) in a database, and have the client only download new updates? I was thinking of having logic that can handle XML inserted directly:
ID - Data - Revision
15 - XXX - 15
XXX would be something like <Content><File>Test.dll</File><Description>New DLL to load.</Description></Content> and would be inserted into the cache, but this would obviously be complicated, as I would need to load them in sequence.
Another approach that has been mentioned was to base it on something similar to source control, storing the version in the root of the file and calculating the delta to figure out the minimal amount of data that needs to be sent to the client.
Has anyone got any suggestions on how to approach this with no risk of data corruption? I would also like to add features that allow me to revert possibly bad revisions and replace them with new working ones.
It really depends on the tools you are using and the architecture you already have. Is there already a server with some logic and a data access layer?
Dynamic approaches can get complicated and slow, and they limit the number of possible solutions. Why do you need a dynamic structure? Would it be feasible to just add data using a name-value pair approach in a relational database? Static and uniform data structures are much easier to handle.
Before going into detail, you should consider the different scenarios.
Items can be added
Items can be changed
Items can be removed (I assume)
Adding is not a big problem. The client needs to remember the last revision number it got from the server, and you write a query which gets everything since then.
Changing is basically the same. You do need to take care with identifying the items: you need an unchangeable surrogate key, which seems to be the ID you already have. (GUIDs may be useful here.)
Removing is tricky. You need to either flag items as deleted instead of actually removing them, or have a list of removed IDs with the revision number when they had been removed.
Storing the data in the client: Consider using a relational database like SQLite in the client. (It doesn't need installation; it just stores everything in a file. Firefox, for instance, stores quite a lot in SQLite databases.) When using the same in the server, you can probably reuse some code. It is also transaction based, which helps keep it consistent (rollback in case of an error during synchronization).
XML - if you really need it - can be stored just as a string in the database.
When using an abstraction layer or ORM that supports SQLite (eg. NHibernate), you may also reuse some code even when there is another database used by the server. Note that the learning curve for such an ORM might be rather steep. If you don't know anything like this, it could be too much.
You don't need to force reuse of code in the client and server.
Synchronization itself shouldn't be very complicated. You have a revision number in the client and a last revision in the server. You get all new, changed, and deleted items since then in the client and apply them to the local store. Update the local revision number. Commit. Done.
I would never update only a part of a revision, because then you can't really know what changed since the last synchronization. Because you do differential updates, it is essential to have a well-defined state in the client.
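To make the last two paragraphs concrete, here is a rough sketch of the pull-and-apply step, assuming a local SQLite cache via Microsoft.Data.Sqlite and made-up table names (Items, SyncState):

using System.Collections.Generic;
using Microsoft.Data.Sqlite;

public class ItemChange
{
    public int Id { get; set; }
    public string Data { get; set; }
    public long Revision { get; set; }
    public bool IsDeleted { get; set; }
}

public static class SyncClient
{
    // Applies everything the server returned for revisions newer than ours,
    // then bumps the stored revision - all inside one transaction, so a failed
    // sync rolls back and leaves the client in its previous well-defined state.
    // 'local' is assumed to be an already-open connection.
    public static void Apply(SqliteConnection local,
                             IReadOnlyList<ItemChange> serverChanges,
                             long newRevision)
    {
        using (var tx = local.BeginTransaction())
        {
            foreach (var change in serverChanges)
            {
                var cmd = local.CreateCommand();
                cmd.Transaction = tx;

                if (change.IsDeleted)
                {
                    cmd.CommandText = "DELETE FROM Items WHERE Id = $id";
                    cmd.Parameters.AddWithValue("$id", change.Id);
                }
                else
                {
                    cmd.CommandText =
                        "INSERT INTO Items (Id, Data, Revision) VALUES ($id, $data, $rev) " +
                        "ON CONFLICT(Id) DO UPDATE SET Data = $data, Revision = $rev";
                    cmd.Parameters.AddWithValue("$id", change.Id);
                    cmd.Parameters.AddWithValue("$data", change.Data);
                    cmd.Parameters.AddWithValue("$rev", change.Revision);
                }

                cmd.ExecuteNonQuery();
            }

            var bump = local.CreateCommand();
            bump.Transaction = tx;
            bump.CommandText = "UPDATE SyncState SET LastRevision = $rev";
            bump.Parameters.AddWithValue("$rev", newRevision);
            bump.ExecuteNonQuery();

            tx.Commit();
        }
    }
}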
I would go with a solution using Sync Framework.
Quote from Microsoft:
Microsoft Sync Framework is a comprehensive synchronization platform enabling collaboration and offline access for applications, services and devices. Developers can build synchronization ecosystems that integrate any application, any data from any store using any protocol over any network. Sync Framework features technologies and tools that enable roaming, sharing, and taking data offline.
A key aspect of Sync Framework is the ability to create custom providers. Providers enable any data sources to participate in the Sync Framework synchronization process, allowing peer-to-peer synchronization to occur.
I have just built an application pretty much exactly as you described. I built it on top of the Microsoft Sync Framework that DjSol mentioned.
I use a C# front-end application with a SqlCe database, and a SQL Server 2005 instance at the other end.
The following articles were extremely useful for me:
Tutorial: Synchronizing SQL Server and SQL Server Compact
Walkthrough: Creating a Sync service
Step by step N-tier configuration of Sync services for ADO.NET 2.0
How to Sync schema changed database using sync framework?
You don't say what your back-end database is, but if it's SQL Server you can use SqlCE (SQL Server Compact Edition) as the client DB and then use RDA or merge replication to update the client DB as desired. This will handle all your requirements for sure; there is no need to reinvent the wheel for such a common requirement.
