I have an ASP.NET Core application which calls a service from another library. The
service works with an external API, which requires a sessionId. We have to call a Login API method to get the sessionId. We don't know how long this sessionId lives or when it can change: it might be valid for 1 request, for 10 requests, for 100 requests; it might be valid for 1 minute, 10 minutes, or a day... Nobody knows.
The service has many methods to call similar APIs:
public class BillRequest
{
    private readonly HttpClient client;

    public BillRequest()
    {
        client = new HttpClient
        {
            BaseAddress = new Uri("https://myapi.com/api/v2/")
        };
    }

    public async Task<List<Dto1>> CustomerBankAccountListAsync(int start, int count)
    {
        List<KeyValuePair<string, string>> nvc = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("sessionId", CURRENT_SESSION_ID)
        };
        var customerStream = await client.PostAsync("List/CustomerBankAccount.json", new FormUrlEncodedContent(nvc));
        var customerString = await customerStream.Content.ReadAsStringAsync();
        //....
    }

    public async Task<List<Dto2>> Method2(int start, int count)
    {
        List<KeyValuePair<string, string>> nvc = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("sessionId", CURRENT_SESSION_ID)
        };
        var customerStream = await client.PostAsync("List/Method2.json", new FormUrlEncodedContent(nvc));
        var customerString = await customerStream.Content.ReadAsStringAsync();
        //....
    }

    // logic to get SessionId here
    public async Task LoginAsync()
    {
    }
}
How should I implement storing this sessionId inside the service?
There are several options:
Call the Login method before every API call. Easy to implement, but a bad approach, because it produces many unnecessary requests and each sessionId is used only once.
Save the sessionId at the web application level, catch the exception when any method gets an 'invalid sessionId' back, and then call the Login method to obtain a new sessionId. In this case we have to pass the sessionId to the constructor of the BillRequest class. It works, but I don't like moving the service's responsibility elsewhere, because how to work with the API is the service's internal concern.
Save the sessionId inside the service itself, call the Login method again inside the service when the old sessionId is considered invalid, overwrite it with the new one, and so on. But how do I keep it "static" in memory? I don't want to save it anywhere external (file system, cloud, etc.), and I can't just keep it in an instance field either, because the object can be recreated... (A rough sketch of what I mean is below.)
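For illustration only, here is roughly what I picture for option 3. It is just a sketch: I'm assuming LoginAsync would be changed to return the new sessionId, and the static field plus SemaphoreSlim is only my guess at how to keep a single value per process safely:

public class BillRequest
{
    // static so the value survives even if BillRequest instances are recreated
    private static string currentSessionId;
    private static readonly SemaphoreSlim loginLock = new SemaphoreSlim(1, 1);

    private async Task<string> GetSessionIdAsync()
    {
        if (currentSessionId != null)
            return currentSessionId;

        await loginLock.WaitAsync();
        try
        {
            // re-check after acquiring the lock so only one caller actually logs in
            if (currentSessionId == null)
                currentSessionId = await LoginAsync(); // assumes LoginAsync returns the sessionId
            return currentSessionId;
        }
        finally
        {
            loginLock.Release();
        }
    }
}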
I'd suggest a certain mental shift here towards functional programming.
Think of the sessionId as a stream of independent values rather than a single object. Then your problem can be redefined in the following (semantically equivalent) way: given a typed stream (of strings, in your case), how do you observe its flow and react to incoming changes that your source code does not control?
Well, there is an answer, proven by the Enterprise™: Reactive Extensions.
Technically such a shift implies that you're dealing with an IObservable<string> inside your controller, which can either be injected via the standard .NET Core DI approach or simply passed in through the constructor. That's quite flexible, since Rx gives you a fully testable, unbelievably powerful toolset for tasks of this kind; Rx is also compatible with the native Task type and hence with async/await. A nice side effect is that it is really easy to inject the required behavior from the outside world and decorate an existing observable with a more appropriate one. So you're safe: once the 3rd party's service logic changes, you can adapt your codebase almost instantly and painlessly.
What is going to be inside that IObservable<string>? I can't say, since you did not give enough information. It might be an interval that asks the remote server whether the current sessionId is still valid and, if not, runs the re-login procedure and notifies its subscribers of the new value; it might be a timer implementing a compile-time-known expiration rule; it might be logic as sophisticated as you need: Rx is flexible enough not to limit what can be achieved with it, as long as you deal with (possibly infinite) streams.
As a consequence, you don't need any global value. Just subscribe to the stream of session ids, take the latest one (the one that is currently valid), do the job, and dispose of your subscription. It is not expensive and won't hurt performance, nor will it mess up concurrency. Wrap the Rx part in a Task and await it if you'd like to stick to the common .NET style.
P.S. 99% of what you need to deliver an implementation is already there; you just need to combine it.
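To make that concrete, here is a minimal sketch with System.Reactive. It assumes LoginAsync is changed to return the new sessionId, and it uses a naive periodic re-login purely as a stand-in for whatever expiration rule you actually need:

private readonly IObservable<string> sessionIds;

public BillRequest()
{
    client = new HttpClient { BaseAddress = new Uri("https://myapi.com/api/v2/") };

    // hypothetical refresh rule: re-login every 5 minutes; swap in whatever expiry logic fits your API.
    // Replay(1) + RefCount() keeps the latest sessionId cached for every subscriber.
    sessionIds = Observable.Timer(TimeSpan.Zero, TimeSpan.FromMinutes(5))
        .SelectMany(_ => Observable.FromAsync(() => LoginAsync()))
        .Replay(1)
        .RefCount();
}

public async Task<List<Dto1>> CustomerBankAccountListAsync(int start, int count)
{
    // take the currently valid sessionId from the stream and use it for this one call
    var sessionId = await sessionIds.FirstAsync();
    var content = new FormUrlEncodedContent(new[]
    {
        new KeyValuePair<string, string>("sessionId", sessionId)
    });
    var response = await client.PostAsync("List/CustomerBankAccount.json", content);
    var json = await response.Content.ReadAsStringAsync();
    return JsonConvert.DeserializeObject<List<Dto1>>(json); // or however you already parse the response
}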
I need to create a REST API that connects to a third-party SOAP API. The third-party API's events are sent by callback to a URL I provide.
Typically, my API starts a session with the third party by providing an ID and a callback URL. The third party can then send new events to my API through this URL when, for example, a new participant connects. Sometimes I need to request specific info, like the list of participants for a given session (ID), and wait for the event containing that info.
Note that there may be multiple open sessions at the same time.
An example of what I need:
private string url = "http://myapi/callback";

[HttpGet]
[Route("createSession")]
public async Task<string> CreateSession()
{
    var id = Guid.NewGuid().ToString();
    var result = await ExternAPI.CreateSession(id, this.url);
    return result; // contains the id
}

[HttpGet]
[Route("endSession")]
public async Task<string> EndSession([FromUri] string id)
{
    var result = await ExternAPI.EndSession(id);
    return result;
}

[HttpGet]
[Route("participants")]
public async Task<string> Participants([FromUri] string id)
{
    ExternAPI.participants(id); // the results of this call will arrive via the callback action below
    var results = /* wait for the result for this id */;
    return results;
}

[HttpPost]
[Route("callback")]
public void Callback([FromBody] object body)
{
    // notify the waiting function and pass body
}
I came up with a solution using ReactiveX but I'm not really sure about its reliability in production. What I have in mind is to create a subject that never terminates and handles all the events, but that is not a usual lifetime for a subject - what happens on error? And I don't think I did it the "Rx way" (state concerns).
Here it is (you will need System.Reactive to run this code):
class Data
{
    public int id;
    public string value;
}

class Program
{
    private static Subject<Data> sub;

    static void Main(string[] args)
    {
        sub = new Subject<Data>();
        Task.Run(async () =>
        {
            int id = 1;
            ExternAPI(CallBackHook, id);
            Data result = await sub.Where(data => data.id == id).FirstAsync();
            Console.WriteLine("{0}", result.value);
        });
        Console.ReadLine();
    }

    static void CallBackHook(Data data)
    {
        sub.OnNext(data);
    }

    static String ExternAPI(Action<Data> callback, int id)
    {
        // Third-party API, accessed via SOAP. callback is normally a URL (string)
        Task.Run(() =>
        {
            Thread.Sleep(1000);
            callback(new Data { id = id, value = "test" });
        });
        return "success";
    }
}
Another way would be a dictionary of subjects, one for each session, so I could manage their lifetimes.
it is not a usual lifetime for a subject
what happens on error?
And I don't think I did it the "RX-way"
Yes, these are all perfectly valid concerns with this kind of approach. Personally, I don't much mind the last one, because even though Subjects are frowned-upon, many times they're just plain easier to use than the proper Rx way. With the learning curve of Rx what it is, I tend to optimize for developer maintainability, so I do "cheat" and use Subjects unless the alternative is equally understandable.
Regarding lifetime and errors, the solutions there depend on how you want your application to behave.
For lifetime, it looks like currently you have a WebAPI resource (the SOAP connection) requiring an explicit disconnect call from your client; this raises some red flags. At the very least, you'd want some kind of timeout there where that resource is disposed even if endSession is never called. Otherwise, it'll be all too easy to end up with dangling resources.
Also for errors, you'll need to decide the appropriate approach. You could "cache" the error and report it to each call that tries to use that resource, and "clear" the error when endSession is called. Or, if it's more appropriate, you could let an error take down your ASP.NET process. (ASP.NET will restart a new one for you).
To delay an API until you get some other event, use TaskCompletionSource<T>. When starting the SOAP call (e.g., ExternAPI.participants), you should create a new TCS<T>. The API call should then await the TaskCompletionSource<T>.Task. When the SOAP service responds with an event, it should take that TaskCompletionSource<T> and complete it (a rough sketch follows the notes below). Points of note:
If you have multiple SOAP calls that are expecting responses over the same event, you'll need a collection of TaskCompletionSource<T> instances, along with some kind of message-identifier to match up which events are for which calls.
Be sure to watch your thread safety. Incoming SOAP events are most likely arriving on the thread pool, with (possibly multiple) API requests on other thread pool threads. TaskCompletionSource<T> itself is threadsafe, but you'd need to make your collection threadsafe as well.
You may want to write a Task-based wrapper for your SOAP service first (handling all the TaskCompletionSource<T> stuff), and then consume that from your WebAPI.
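Putting those notes together, a rough sketch of such a wrapper might look like the following (SoapGateway and ParticipantList are placeholder names, and ExternAPI is the third-party client from the question):

public class ParticipantList { /* whatever the callback body deserializes to */ }

public class SoapGateway
{
    // one pending TaskCompletionSource per session id, so callbacks can be matched to callers
    private readonly ConcurrentDictionary<string, TaskCompletionSource<ParticipantList>> pending =
        new ConcurrentDictionary<string, TaskCompletionSource<ParticipantList>>();

    public Task<ParticipantList> GetParticipantsAsync(string sessionId)
    {
        var tcs = new TaskCompletionSource<ParticipantList>(TaskCreationOptions.RunContinuationsAsynchronously);
        pending[sessionId] = tcs;
        ExternAPI.participants(sessionId);   // the result arrives later via the callback URL
        return tcs.Task;
    }

    // called from the /callback action when the matching event comes in
    public void OnCallback(string sessionId, ParticipantList result)
    {
        if (pending.TryRemove(sessionId, out var tcs))
            tcs.TrySetResult(result);
    }
}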
As a very broad alternative, instead of bridging SOAP with WebAPI, I would consider bridging SOAP with SignalR. You may find that this is a more natural translation. Among other things, SignalR will give you client-connect and client-disconnect events (complete with built-in timeouts for clients). So that may solve your lifetime issues more naturally. You can use the same Task-based wrapper for your SOAP service as well, or just expose the SOAP events directly as SignalR messages.
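A bare-bones illustration of that SignalR direction (ASP.NET SignalR 2 style; the hub, class, and method names here are made up):

public class SessionEventsHub : Hub
{
    public override Task OnDisconnected(bool stopCalled)
    {
        // the client going away is a natural point to tear down the matching SOAP session
        return base.OnDisconnected(stopCalled);
    }
}

public static class SoapEventPublisher
{
    // when the SOAP callback fires, push the event straight out to connected clients
    public static void Publish(object eventBody)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<SessionEventsHub>();
        hub.Clients.All.soapEvent(eventBody);
    }
}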
I have a member of my controller
private Lazy<MyCache> loadedComponentCache = new Lazy<MyCache>(() =>
{
    MyCache instance = MyCache.Instance;
    instance.LoadStuffAsync().Wait();
    return instance;
}, LazyThreadSafetyMode.PublicationOnly);
that I'm using to lazily call a long-running method, LoadStuffAsync(), which only needs to be called if a certain API endpoint is hit after the user goes to the page.
[HttpGet]
public ActionResult GetStuff()
{
var results = from component in loadedComponentCache.Value.All()
// ...
}
Any idea why it's re-loading every time the API endpoint is hit? My understanding is that an instance of my controller is created only when the user goes to the page and thus this will only be hit once per API call per user visiting the page.
You could make loadedComponentCache static, but that's not ideal. If you are using an IoC container you could register it as a singleton. These long-lived objects are generally to be avoided, though, if possible.
If you truly need this long-lived cache then you should probably consider using something like Redis, which is designed and optimised for this sort of scenario and can be distributed across multiple nodes. https://redis.io/topics/introduction
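For example, with ASP.NET Core's built-in container the singleton registration might look like this (just a sketch; your container's syntax may differ, the StuffController name is made up, and I'm reusing the MyCache.Instance singleton from the question):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        // one shared MyCache instance for the whole application
        services.AddSingleton(MyCache.Instance);
    }
}

// the controller then takes the shared instance instead of building its own Lazy<MyCache>
public class StuffController : Controller
{
    private readonly MyCache cache;

    public StuffController(MyCache cache)
    {
        this.cache = cache;
    }
}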
I'm pretty new to working with HTTP stuff, so I'm rather confused as to what would be the best approach to request data from an HTTP address every few seconds or so.
The API I'm using has, at least to my knowledge, no webhook support. So I imagine the way to update my data would be a rather crude one.
I want this to happen in the background so the GUI does not freeze and become unresponsive. So I know I (probably) need to fiddle with threads.
The best results I've had have been with an async/await Timer. I'm not entirely sure how to work with it, and the only way I got it to work was to throw an exception after it has elapsed. If I don't, it says that not all code paths return a value, and I can't even use return, which really, really confuses me.
How should I be doing this?
If it's of any use, I'm working on creating my own RCON tool for a game which has all kinds of server data available via an HTTP API, but documentation for this API is very lackluster.
If you are on .NET Core, you can see my previous answer on: Start multiple background threads inside Self Hosted ASP.NET Core Microservice
For .NET Framework you have to do a little more yourself, but it's still very doable!
In your global.asax you have (or should I say: should have) your dependency injection. Something like:
protected void Application_Start()
{
    Bootstrap();
    //and do something more
}

private static void Bootstrap()
{
    var container = new Container();
    container.Register(() => new HttpClient());
    container.RegisterSingleton<IApiCaller, ApiCaller>();
    container.RegisterInitializer<IApiCaller>(apiCaller => apiCaller.StartCallingAsync());

    // Suppress warnings for HttpClient
    var registration = container.GetRegistration(typeof(HttpClient)).Registration;
    registration.SuppressDiagnosticWarning(DiagnosticType.DisposableTransientComponent, "Dispose is being called by code.");
    registration.SuppressDiagnosticWarning(DiagnosticType.LifestyleMismatch, "Every HttpClient is unique for each dependency.");

    container.Verify();
    GlobalConfiguration.Configuration.DependencyResolver = new SimpleInjectorWebApiDependencyResolver(container);
}
In this case, I let SimpleInjector start my background thread to do a lot of work.
In the ApiCaller you can do your HTTP calls.
something like:
public async Task StartCallingAsync(CancellationToken cancellationToken = default(CancellationToken))
{
    while (!cancellationToken.IsCancellationRequested)
    {
        var response = await _httpClient.GetAsync(url);
        if (response.IsSuccessStatusCode)
        {
            //do work
        }
        await Task.Delay(10000, cancellationToken);
    }
}
For GetAsync there are extension methods that can deserialize the response directly into your own object.
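For example (ReadAsAsync<T> comes from the Microsoft.AspNet.WebApi.Client package; ServerStatusDto here is just a made-up target type):

var response = await _httpClient.GetAsync(url);
if (response.IsSuccessStatusCode)
{
    // deserialize the JSON body straight into your own type
    var status = await response.Content.ReadAsAsync<ServerStatusDto>();
    //do work with status
}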
Can you work with this?
I have a desktop application that uses WCF services. I got a great usability improvement when I implemented async WCF calls.
My question is: what is the best practice to initialize the service client?
In the previous implementation there was a single static object with credentials and a public method GetClient() that created a new ServiceClient before every call. Throughout the application a construction like this was commonly used:
using (var svc = ServiceClientFactory.GetClient())
{
    var data = svc.CallMethod(...);
    some_application_context.specific_attribute = data;
}
So, before any call a new client was created, which was destroyed immediately after the operation finished, and the received data was used.
My question is: is it best practice to call the client constructor before every call?
I've tried creating a single static client object that is initialized once at startup and destroyed once when the application closes, but I haven't gotten any noticeable performance gain.
It seems to work fine, but I wonder whether there are any non-obvious obstacles to using a single client? And what is recommended?
It's kind of a broad question; it depends on a lot of factors and also on style, I guess.
When using reliable sessions, or sessions in general, you have to store the reference, of course.
When calling the service many times it might be better to store the reference, or it might not. Better to profile it then and there.
I always store a reference and create a property which checks whether the client is null or in the Faulted state:
Service.ServiceClient ShippingService
{
    get
    {
        if (mService == null || mService.State == CommunicationState.Faulted)
        {
            mService = new Service.ServiceClient("netTcpService");
            mService.Open();
        }
        return mService;
    }
}
You should look at dependency injection for getting your service references. Effectively it would have similar, if not the same, performance as what you're doing now, but it would make long-term management easier and allow easier unit testing.
Most of the WCF overhead is connection negotiation, so singleton vs. new on each call won't really end up making a huge difference.
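As a rough sketch of the DI idea (the interface and class names are illustrative, and the registration line depends on whichever container you use):

// hide client creation behind a small factory so calling code never news up ServiceClient directly
public interface IServiceClientFactory
{
    Service.ServiceClient Create();
}

public class ServiceClientFactory : IServiceClientFactory
{
    public Service.ServiceClient Create()
    {
        return new Service.ServiceClient("netTcpService");
    }
}

// registration, e.g. container.RegisterSingleton<IServiceClientFactory, ServiceClientFactory>();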
So I've decided to up the performance a bit in my WCF application and attempt to cache Channels and the ChannelFactory. There are two questions I have about all of this that I need to clear up before I get started.
1) Should the ChannelFactory be implemented as a singleton?
2) I'm kind of unsure about how to cache/reuse individual channels. Do you have any examples of how to do this you can share?
It's probably important to note that my WCF service is being deployed as a standalone application, with only one endpoint.
EDIT:
Thank you for the responses. I still have a few questions though...
1) I guess I'm confused as to where the caching should occur. I'm delivering a client API that uses this code to another department in our company. Does this caching occur on the client?
2) The client API will be used as part of a Silverlight application; does this change anything? In particular, what caching mechanisms are available in such a scenario?
3) I'm still not clear about the design of the GetChannelFactory method. If I have only one service, should only one ChannelFactory ever be created and cached?
I still haven't implemented any caching feature (because I'm utterly confused about how it should be done!), but here's what I have for the client proxy so far:
namespace MyCompany.MyProject.Proxies
{
    public class MyServiceProxy
    {
        static readonly ChannelFactory<IMyService> channelFactory =
            new ChannelFactory<IMyService>("IMyService");

        public Response DoSomething(Request request)
        {
            var channel = channelFactory.CreateChannel();
            try
            {
                Response response = channel.DoSomethingWithService(request);
                ((ICommunicationObject)channel).Close();
                return response;
            }
            catch (Exception)
            {
                ((ICommunicationObject)channel).Abort();
                throw;
            }
        }
    }
}
Use ChannelFactory to create an instance of the factory, then cache that instance. You can then create communication channels as needed/desired from the cached instance.
Do you have a need for multiple channel factories (i.e., are there multiple services)? In my experience, that's where you'll see the biggest benefit in performance. Creating a channel is a fairly inexpensive task; it's setting everything up at the start that takes time.
I would not cache individual channels - I'd create them, use them for an operation, and then close them. If you cache them, they may time out and the channel will fault; then you'll have to abort it and create a new one anyway.
Not sure why you'd want to use a singleton to implement ChannelFactory, especially if you're going to create it and cache it, and there's only one endpoint.
I'll post some example code later when I have a bit more time.
UPDATE: Code Examples
Here is an example of how I implemented this for a project at work. I used ChannelFactory<T>, as the application I was developing is an n-tier app with several services, and more will be added. The goal was to have a simple way to create a client once per lifetime of the application, and then create communication channels as needed. The basics of the idea are not mine (I got it from an article on the web), though I modified the implementation for my needs.
I have a static helper class in my application, and within that class I have a dictionary and a method to create communication channels from the channel factory.
The dictionary is as follows (object is the value type, as it will contain different channel factories, one for each service). I put "Cache" in the example as sort of a placeholder - replace the syntax with whatever caching mechanism you're using.
public static Dictionary<string, object> OpenChannels
{
    get
    {
        if (Cache["OpenChannels"] == null)
        {
            Cache["OpenChannels"] = new Dictionary<string, object>();
        }
        return (Dictionary<string, object>)Cache["OpenChannels"];
    }
    set
    {
        Cache["OpenChannels"] = value;
    }
}
Next is a method to create a communication channel from the factory instance. The method checks to see if the factory exists first - if it does not, it creates it, puts it in the dictionary and then generates the channel. Otherwise it simply generates a channel from the cached instance of the factory.
public static T GetFactoryChannel<T>(string address)
{
    string key = typeof(T).Name;

    if (!OpenChannels.ContainsKey(key))
    {
        ChannelFactory<T> factory = new ChannelFactory<T>();
        factory.Endpoint.Address = new EndpointAddress(new System.Uri(address));
        factory.Endpoint.Binding = new BasicHttpBinding();
        OpenChannels.Add(key, factory);
    }

    T channel = ((ChannelFactory<T>)OpenChannels[key]).CreateChannel();
    ((IClientChannel)channel).Open();

    return channel;
}
I've stripped this example down some from what I use at work. There's a lot you can do in this method - you can handle multiple bindings, assign credentials for authentication, etc. It's pretty much your one-stop shop for generating a client.
Finally, when I use it in the application, I generally create a channel, do my business, and close it (or abort it if need be). For example:
IMyServiceContract client = null;

try
{
    client = Helper.GetFactoryChannel<IMyServiceContract>("http://myserviceaddress");

    client.DoSomething();

    // This is another helper method that will safely close the channel,
    // handling any exceptions that may occur while trying to close.
    // Shouldn't be any, but it doesn't hurt.
    Helper.CloseChannel(client);
}
catch (Exception ex)
{
    // Something went wrong; need to abort the channel
    // I also do logging of some sort here
    Helper.AbortChannel(client);
}
Hopefully the above examples will give you something to go on. I've been using something similar to this for about a year now in a production environment and it's worked very well. 99% of any problems we've encountered have usually been related to something outside the application (either external clients or data sources not under our direct control).
Let me know if anything isn't clear or you have further questions.
You could always just make your ChannelFactory static for each WCF Contract...
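For instance, something along these lines (sketch only; the MyServiceProxies class name is made up, reusing the IMyService contract and "IMyService" endpoint configuration from the question):

public static class MyServiceProxies
{
    // one factory per contract, created once and reused for the life of the process
    public static readonly ChannelFactory<IMyService> MyServiceFactory =
        new ChannelFactory<IMyService>("IMyService");
}

// usage elsewhere: var channel = MyServiceProxies.MyServiceFactory.CreateChannel();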
You should be aware that from .NET 3.5 the proxy objects are pooled for performance reasons by the channel factory. Calling the ICommunicationObject.Close() method actually returns the object to the pool in the hope it can be reused.
I would look at the profiler if you want to do some optimisation; if you can prevent just one IO call being made in your code, it could far outweigh any optimisation you will make with the channel factory. Don't pick an area to optimise - use the profiler to find where you can target an optimisation. If you have an SQL database, for instance, you will probably find some low-hanging fruit in your queries that will give you orders-of-magnitude performance increases if these haven't already been optimised.
Creating the channel costs a lot of performance. WCF actually already has a caching mechanism for the ChannelFactory if you use ClientBase in the client instead of the pure ChannelFactory, but that cache is invalidated if you perform certain additional operations (please search for the details if you want).
For ErOx's issue I have another solution which I think is better. See below:
namespace ChannelFactoryCacheDemo
{
    public static class ChannelFactoryInitiator
    {
        private static Hashtable channelFactories = new Hashtable();

        public static ChannelFactory<T> Initiate<T>(string endpointName)
        {
            ChannelFactory<T> channelFactory;
            if (channelFactories.ContainsKey(endpointName)) // already cached, get it from the table
            {
                channelFactory = (ChannelFactory<T>)channelFactories[endpointName];
            }
            else // not cached, create and cache it
            {
                channelFactory = new ChannelFactory<T>(endpointName);
                lock (channelFactories.SyncRoot)
                {
                    channelFactories[endpointName] = channelFactory;
                }
            }
            return channelFactory;
        }
    }

    class AppWhereUseTheChannel
    {
        static void Main(string[] args)
        {
            ChannelFactory<IMyContract> channelFactory = ChannelFactoryInitiator.Initiate<IMyContract>("MyEndpoint");
        }
    }

    interface IMyContract { }
}
You can customize the logic and the parameters of the Initiate method yourself if you have other requirements, and this initiator class is not limited to a single endpoint; it works for all of the endpoints in your application. Hopefully it works well for you. BTW, this solution is not from me; I got it from a book.