Extending WCF Compact / Custom Headers - c#

I'm developing an app which will be deployed across various platforms, including Windows Phone. Because of this, I only have access to the WCF Compact / Portable classes.
I need to be able to catch every outgoing request and incoming response in order to append headers to the request and read the headers from the response.
When extending standard WCF I am able to achieve this using a custom behaviour; however, that isn't supported in WCF compact. I am able to use the following code to append headers to a specific request:
CalculatorServiceClient client = new CalculatorServiceClient();
using (new OperationContextScope(client.InnerChannel))
{
    // We will use a custom class called UserInfo to be passed in as a MessageHeader
    UserInfo userInfo = new UserInfo();
    userInfo.FirstName = "John";
    userInfo.LastName = "Doe";
    userInfo.Age = 30;

    // Add a SOAP header to the outgoing request
    MessageHeader aMessageHeader = MessageHeader.CreateHeader("UserInfo", "http://tempuri.org", userInfo);
    OperationContext.Current.OutgoingMessageHeaders.Add(aMessageHeader);
}
However, I'm not able to catch the response headers in this example. I'm also worried that this isn't thread safe (where multiple requests could be happening at the same time). Finally, I'd like to implement this functionality in a way that is transparent to the developer, so that they don't need to do anything special with their requests. I think I should be able to achieve it with something along the lines of an IClientMessageFormatter, but I'm at a loss as to how to implement this in WCF compact.
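For what it's worth, the direction I was thinking of is something like the helper below, which gives each call its own OperationContextScope (so concurrent requests don't share state) and reads the incoming headers before the scope is disposed. This is only an untested sketch against the portable surface area; the CallWithUserInfo helper and the "ServerInfo" response header are made up for illustration.

// Sketch: one scope per call; append the UserInfo header and read a
// hypothetical "ServerInfo" header from the response before disposal.
private static TResult CallWithUserInfo<TResult>(
    CalculatorServiceClient client,
    UserInfo userInfo,
    Func<TResult> invoke,
    out string serverInfo)
{
    using (new OperationContextScope(client.InnerChannel))
    {
        var header = MessageHeader.CreateHeader("UserInfo", "http://tempuri.org", userInfo);
        OperationContext.Current.OutgoingMessageHeaders.Add(header);

        TResult result = invoke();

        // Incoming headers are only available while the scope is still alive.
        var incoming = OperationContext.Current.IncomingMessageHeaders;
        int index = incoming.FindHeader("ServerInfo", "http://tempuri.org");
        serverInfo = index >= 0 ? incoming.GetHeader<string>(index) : null;

        return result;
    }
}

That still isn't transparent to the calling developer, which is why I'd prefer something like an IClientMessageFormatter if an equivalent exists in the portable stack.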
Any help would be greatly appreciated.
Thanks
David

Related

How to add custom http header to google big query client in c#

I am trying to add a custom header to the Google BigQuery client in the way shown below, but for some reason it is not working. Can someone suggest how I can add a custom header to the Google BigQuery client?
Below is my sample code:
var gClient = BigQueryClient.Create(projectId, credential);
gClient.Service.HttpClient.DefaultRequestHeaders.TryAddWithoutValidation("test", "this is default header");
var results = gClient.ExecuteQuery(query, null);
With the above code, I can see that the custom header 'test' is added to the HttpClient, but when gClient executes the query, I don't see this custom header.
I am using Fiddler to monitor the traffic from my machine. In Fiddler, I can see that two calls are made:
i. OAuth authentication
ii. BigQuery execution
In neither of these messages do I see the default HTTP header 'test'.
I also tried gClient.Service.HttpClientInitializer.Initialize() to initialize the HttpClient, but that didn't work either:
var gClient = BigQueryClient.Create(projectId, credential);
ConfigurableHttpClient httpClient = new ConfigurableHttpClient(new ConfigurableMessageHandler(new CustomMessageHandler()), true);
httpClient.DefaultRequestHeaders.Add("xxxxx", "yyyyyyy");
gClient.Service.HttpClientInitializer.Initialize(httpClient);
In this case it is the same problem: the default header is not part of the HTTP request.
Can someone help me solve this issue?
FYI: we are intercepting all outbound calls using a proxy, and based on this custom HTTP header we need to decide whether to allow the outbound call or not. So we would like to inject the header at the service side and verify it in the proxy.
As explained above, I have tried adding DefaultRequestHeaders to the HttpClient, but it is not working.
I have also tried httpClient.MessageHandler.AddExecuteInterceptor(), but that still didn't work.
My question: can we inject a default HttpClient for all outbound calls in C#, especially for Google BigQuery?
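For reference, the execute-interceptor wiring I tried looks roughly like the sketch below (the interceptor class name is mine, and the header is the same 'test' header as above); I'm including it in case someone can spot what's missing. My understanding is that the separate OAuth call is made by the credential's own handler, so it presumably would not carry this header in either case.

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Google.Apis.Http;

// Adds the custom header to every request sent by the underlying BigqueryService.
public class CustomHeaderInterceptor : IHttpExecuteInterceptor
{
    public Task InterceptAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.TryAddWithoutValidation("test", "this is default header");
        return Task.CompletedTask;
    }
}

// Usage (projectId, credential and query as above):
// var gClient = BigQueryClient.Create(projectId, credential);
// gClient.Service.HttpClient.MessageHandler.AddExecuteInterceptor(new CustomHeaderInterceptor());
// var results = gClient.ExecuteQuery(query, null);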

c# mvc reroute request to different server

I have a web application which is a mesh of a few different servers, and one server is the front-end server which handles all external incoming requests.
Some of these requests will have to be passed along to different servers, and ideally the only thing I want to change is the host and URI fields of these requests. Is there a way to map an entire incoming request to a new outgoing request and just change a few fields?
I tried something like this:
// some controller
public HttpResponseMessage get()
{
    return this.Request.Rewrite("192.168.10.13/api/action");
}
// extension method Rewrite
public static HttpResponseMessage Rewrite(this HttpRequestMessage requestIn, string Uri)
{
    HttpClient httpClient = new HttpClient(new HttpClientHandler());
    HttpRequestMessage requestOut = new HttpRequestMessage(requestIn.Method, Uri);
    requestOut.Content = requestIn.Content;

    var headerCollection = requestIn.Headers.ToDictionary(x => x.Key, y => y.Value);
    foreach (var i in headerCollection)
    {
        requestOut.Headers.Add(i.Key, i.Value);
    }

    return httpClient.SendAsync(requestOut).Result;
}
The issue I am having is that this approach has a whole slew of problems. If the request is a GET, Content shouldn't be set. The headers are also incorrect, since this copies things like Host, which shouldn't be touched afterwards, etc.
Is there an easier way to do something like this?
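For what it's worth, the kind of adjustment I was experimenting with is below: only copy the body for methods that can carry one, skip the Host header, and reuse a single HttpClient. It's an untested sketch, not a drop-in replacement.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class RequestForwarding
{
    // Reuse a single HttpClient rather than creating one per request.
    private static readonly HttpClient ProxyClient = new HttpClient();

    public static async Task<HttpResponseMessage> ForwardAsync(
        this HttpRequestMessage requestIn, string uri)
    {
        var requestOut = new HttpRequestMessage(requestIn.Method, uri);

        // Only copy the body for methods that can carry one.
        if (requestIn.Method != HttpMethod.Get && requestIn.Method != HttpMethod.Head)
            requestOut.Content = requestIn.Content;

        // Copy request headers, but let HttpClient derive Host from the target URI.
        foreach (var header in requestIn.Headers)
        {
            if (string.Equals(header.Key, "Host", StringComparison.OrdinalIgnoreCase))
                continue;
            requestOut.Headers.TryAddWithoutValidation(header.Key, header.Value);
        }

        return await ProxyClient.SendAsync(requestOut);
    }
}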
I had to do this in C# code for a Silverlight solution once. It was not pretty.
What you want is called reverse proxying and application request routing.
First, reverse proxy solutions... they're relatively simple.
Here are Scott Forsyth's and Carlos Aguilar Mares' guides for creating a reverse proxy using web.config under IIS.
Here's a module written by Paul Johnston if you don't like the standard solution. All of these focus on IIS.
Non-IIS reverse proxies are more common for load balancing. Typically they're Apache-based or proprietary hardware, and they vary from free to extremely expensive.
To maintain consistency from the client's perspective you may need more than just a reverse proxy configuration, so before you go down the pure reverse proxy route there are some considerations.
The servers likely need to share Machine Keys to synchronize view state and other stuff, and share the Session Store too.
If that's not consistent enough, you may want to implement session stickiness through Application Request Routing (look for Server Affinity), such that a given session cookie (or IP address, or maybe have it generate a token cookie) maps the user to the same server on every request.
I also wrote a simple but powerful reverse proxy for asp.net / web api. It does exactly what you need.
You can find it here:
https://github.com/SharpTools/SharpReverseProxy
Just add it to your project via NuGet and you're good to go. You can even modify the request or the response on the fly, or deny forwarding because of an authentication failure.
Take a look at the source code, it's really easy to implement :)

Using Bing API: easiest way to connect with an API and get data from it

I've searched for some time, looking for an easy way to connect to another site's web API. There are some solutions, but they are done in a very complicated way.
What I want to do:
Connect to the server using a URL address
Provide a login and password to get some data
Get the data as JSON/XML
Save this data in an "easy-to-read" way. I mean: save it to a C# variable which is easy to modify.
Currently, the API that I want to work with is Bing Search, but I'm looking for a universal way. I found an example, but it doesn't work for me, and in my app I can't use the class "DataServiceQuery" because it doesn't exist.
How do you usually do it? Do you have your favourite solutions? Are there universal ways, or does it depend on the type of API you work with?
I'm currently working on a .NET MVC app (in case it makes any difference).
From the server side:
You can use HttpClient as shown below.
// Create an HttpClient instance
HttpClient client = new HttpClient();

// Send a request asynchronously and continue when complete
client.GetAsync(_address).ContinueWith(
    (requestTask) =>
    {
        // Get the HTTP response from the completed task
        HttpResponseMessage response = requestTask.Result;

        // Check that the response was successful, or throw an exception
        response.EnsureSuccessStatusCode();

        // Read the response asynchronously as a JsonValue
        response.Content.ReadAsAsync<JsonArray>().ContinueWith(
            (readTask) =>
            {
                var result = readTask.Result;
                // Do something with the result
            });
    });
You can see an example at the following link:
https://code.msdn.microsoft.com/Introduction-to-HttpClient-4a2d9cee
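If your project can use async/await, the same call can be written more simply. This is just a sketch with a placeholder address; parse the returned JSON with whatever library you prefer.

using System.Net.Http;
using System.Threading.Tasks;

public static class HttpClientExample
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> GetJsonAsync(string address)
    {
        // Send the request and throw if the status code is not successful.
        HttpResponseMessage response = await Client.GetAsync(address);
        response.EnsureSuccessStatusCode();

        // Read the body as a string and deserialize it with your preferred JSON library.
        return await response.Content.ReadAsStringAsync();
    }
}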
For JavaScript:
You could use jQuery and Web API together.
There are a few steps to it:
Call the Web API with a jQuery Ajax call.
Get the response as JSON.
Write JavaScript code to manipulate that response.
This is the easiest way.
See the following link for reference:
http://www.codeproject.com/Articles/424461/Implementing-Consuming-ASP-NET-WEB-API-from-JQuery
It entirely depends on the type of API you want to use. From a .NET point of view, there could be .NET 2 web services, WCF services and Web API services.
Web APIs today follow the REST style and the Richardson Maturity Model (RMM). Some APIs need API keys provided as URL parameters, while others require you to put them in the request's headers. Some more robust APIs use authentication schemes such as OAuth 2, and some companies have devised their own standards and conventions.
So, the short answer is that there is no universal way. The long answer comes from documentation of each API and differs from one to another.
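As a small illustration of the two common conventions mentioned above (the endpoint, parameter name and token are made up):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ApiKeyExamples
{
    private static readonly HttpClient Client = new HttpClient();

    public static Task<HttpResponseMessage> CallWithQueryStringKeyAsync()
    {
        // API key passed as a URL parameter (hypothetical endpoint and parameter name).
        return Client.GetAsync("https://api.example.com/search?q=test&apikey=YOUR_KEY");
    }

    public static Task<HttpResponseMessage> CallWithHeaderKeyAsync()
    {
        // API key passed in a header, e.g. an OAuth 2-style bearer token.
        Client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "YOUR_TOKEN");
        return Client.GetAsync("https://api.example.com/search?q=test");
    }
}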

Secure WCF REST Webservice and headers

I'm writing a secure WCF REST webservice using C#.
My code is something like this:
public class MyServiceAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        base.CheckAccessCore(operationContext);

        var ctx = WebOperationContext.Current;
        var apikey = ctx.IncomingRequest.Headers[HttpRequestHeader.Authorization];
        var hash = ctx.IncomingRequest.Headers["Hash"];
        var datetime = ctx.IncomingRequest.Headers["DateTime"];
        ...
I use headers (Authorization, Hash, DateTime) to carry the API key, the current datetime and the hashed request URL, while the request body contains only the URL and web service parameters.
Example:
http://127.0.0.1:8081/helloto/daniele
Is this the right way, or do I have to pass and retrieve those parameters from the URL like this:
http://127.0.0.1:8081/helloto/daniele?apikey=123&datetime=20120101&hash=ddjhgf764653ydhgdhgfjiutu56
Are there differences between those two methods?
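For completeness, this is roughly how my test client sends those values as headers (the values are the same placeholders as in the URL above):

using System;
using System.Net;

public static class ApiClientExample
{
    public static HttpStatusCode CallHelloTo()
    {
        // Carries the same three headers the service reads in CheckAccessCore.
        var request = (HttpWebRequest)WebRequest.Create("http://127.0.0.1:8081/helloto/daniele");
        request.Headers[HttpRequestHeader.Authorization] = "123";                 // api key
        request.Headers["Hash"] = "ddjhgf764653ydhgdhgfjiutu56";                  // hash of the request URL
        request.Headers["DateTime"] = "20120101";                                 // request datetime

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode;
        }
    }
}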
I think both methods would work for simple cases. However, if you want to make maximum use of native HTTP behaviours, you should go with the headers approach, not the URL query parameters one.
This will allow you to (for example) use HTTP response codes to indicate to the client that a resource has been permanently moved (response code 301) so the client can automatically update links. If the URL included the authentication information, it would not be clear to a client that two different URLs actually refer to the same resource. In other redirect scenarios, the headers will be automatically included, so you don't have to worry about appending parameters to redirect URLs.
Also, it should allow better caching behaviour on clients (if that is relevant in your scenario).
As another example, using headers would allow you to authenticate a request based just on the headers without requiring the client to send the message body. The idea is that you authenticate with the headers, then send the client an HTTP 100 Continue response. The client should not send the message body until it gets the 100. This could be an important optimisation if you are doing POSTs or PUTs with large message bodies.
There are other examples, but whether any given one is relevant depends on your scenarios and on the clients you expect to serve.
In summary, I would say it is better to make use of elements of the protocol as they were explicitly intended - this gives you the best chance of behaving as a client expects and should make your service more accessible, efficient and usable in the longer term.
Based on your implementation, your required parameters would have to be passed in the HTTP Headers of the request, which would most certainly not be on the query string.

Implementing a client-side cache using WCF, REST and standard HTTP headers

I have a Perl-based REST service, and I'm using C# and WCF to write a client that talks to it. I have a few expensive calls and would like to build a caching system. I need the ability to check whether newer versions of the cached data exist on the server. I had the idea to use the standard "If-Modified-Since" request header and the "304 Not Modified" response status code; however, I'm having trouble catching the exception that is thrown on the response.
My client class derives from ClientBase<>. Here is the method that I use to call a service method:
private T RunMethod<T>(ReqHeaderType reqHeaders, ResHeaderType resHeaders, Func<T> meth)
{
    // Get request and response headers
    var reqProp = GetReqHeaders(reqHeaders);
    var resProp = GetResHeaders(resHeaders);

    using (var scope = new OperationContextScope(this.InnerChannel))
    {
        // Set headers
        OperationContext.Current
            .OutgoingMessageProperties[HttpRequestMessageProperty.Name] = reqProp;
        OperationContext.Current
            .OutgoingMessageProperties[HttpResponseMessageProperty.Name] = resProp;

        // Return the result of the call
        return meth();
    }
}
The exception occurs when the callback that runs the service method is executed. Is there a way to catch the exception and check whether it is a "Not Modified" response?
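For what it's worth, the kind of handling I was hoping for is sketched below. I'm not certain this is exactly how the exception surfaces, but with the full framework an unexpected HTTP status from a web endpoint usually arrives as a ProtocolException wrapping a WebException (callFreshData is a placeholder for the actual service call):

// Sketch: distinguish a "304 Not Modified" from other faults when the call throws.
private T GetFreshOrCached<T>(Func<T> callFreshData, T cachedCopy)
{
    try
    {
        return callFreshData();
    }
    catch (ProtocolException pex)
    {
        var wex = pex.InnerException as WebException;
        var httpResponse = wex == null ? null : wex.Response as HttpWebResponse;

        if (httpResponse != null && httpResponse.StatusCode == HttpStatusCode.NotModified)
        {
            // 304: the server says nothing changed, so keep using the cached copy.
            return cachedCopy;
        }

        throw;
    }
}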
In my opinion, you really only want to use WCF channels on the client if you are using non-web WCF bindings on the server.
In your case you are not even using .NET on the server, so I think WCF is going to cause you a whole lot of pain.
I suggest you simply use the HttpWebRequest and HttpWebResponse classes in System.Net. If you do that, you can also take advantage of the built-in caching provided by the WinINet cache. If you set the caching policy on the client request you will get all the caching behaviour you need for free.
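A minimal sketch of that approach (the URL is a placeholder, and it assumes the Perl service sends the usual Last-Modified/ETag validators):

using System.IO;
using System.Net;
using System.Net.Cache;

public static class CachedHttpExample
{
    public static string Get(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);

        // Revalidate makes WinINet check the server (If-Modified-Since / If-None-Match)
        // and transparently reuse the locally cached copy when the server answers 304.
        request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Revalidate);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // response.IsFromCache tells you whether the body came from the local cache.
            return reader.ReadToEnd();
        }
    }
}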
