We have a system built using WCF and we are in the process of converting it to use Mass Transit and RabbitMQ.
Because it is/was a WCF service, it's quite RESTful in the way it operates: there are no states or sessions.
With this in mind, moving to MassTransit thus requires a huge amount of boilerplate code. For example, we have possibly 100 WCF calls. For each one I would have to implement a separate Request/Response pair that, aside from the name, would be no different from the others. Apparently I cannot even use inheritance to abstract away the CorrelatedBy<Guid> syntax.
Is there any way I can reduce the amount of boilerplate code required to do this?
My current MassTransit code looks like this:
sbc.Subscribe(subConfig =>
{
    subConfig.Handler<CanAllocateLicensedDeviceRequest>((ctx, req) =>
    {
        bool result = this.licenceActions.CanAllocateLicensedDevice();
        ctx.Respond<CanAllocateLicensedDeviceResponse>(new CanAllocateLicensedDeviceResponse() { Result = result });
    });
});
I understand the need to have the request typed, but could I not have a generic "bool" return type that I respond with? Would the Guid not ensure it got to the right place?
You could use a single response type to avoid creating an "ok" response type for each request/response pairing.
You also do not need to use CorrelatedBy as the RequestId is automatically set for you on the SendRequest call from the client-end of the request/response conversation.
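A minimal sketch of that single shared response type. `OperationResult` is an invented name, and the commented handler mirrors the shape of the question's code rather than a verified MassTransit API:

```csharp
// One response contract reused by every boolean-style query, instead of a
// bespoke *Response class per WCF call.
public class OperationResult
{
    public bool Result { get; set; }
}

// Each subscription then varies only in the request type and in the delegate
// that produces the answer, e.g. (following the question's handler shape):
//
// subConfig.Handler<CanAllocateLicensedDeviceRequest>((ctx, req) =>
//     ctx.Respond(new OperationResult { Result = licenceActions.CanAllocateLicensedDevice() }));
```

With the RequestId doing the correlation, the response type no longer needs to encode which call it answers.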
After searching for a while I did not find any answers to this question so here I go.
My general question is: Should interface methods return values produced by callbacks that are passed to them as parameters?
To demonstrate what I mean, I will give an example (the language in use is C# and the problem-domain is web but I think the question applies to many languages and domains).
Let's say you have a class that handles HTTP requests.
It can be configured with parameters, an HTTP method, and a URL, and it returns the response.
The way I see it this method could either:
A) Provide a set of standard return types (maybe an error/success model and have a generic overload)
B) Accept a callback that gets the HttpResponse as a parameter and lets the caller return whatever data they need from it.
I realize that method A follows best-practice principles, but I think method B is applicable as well in some circumstances (for example, when you have a Services module that uses this HTTP handler class internally).
So my domain specific question is: Is it bad practice to have an interface method return a value produced by a callback in this specific situation?
To demonstrate my example with code:
public interface IWebRequestWrapper
{
    T DoRequest<T>(params... , Func<HttpResponse, T> callback);
}

public class WebRequestWrapper : IWebRequestWrapper
{
    public T DoRequest<T>(params... , Func<HttpResponse, T> callback)
    {
        using (var response = httpClient.GetResponse(params))
        {
            return callback(response);
        }
    }
}
public class Client
{
    public SomeModel MakeServiceRequestX()
    {
        return iWebRequestWrapper.DoRequest(params, i => /* construct your model from i and return it */);
    }
}
The question basically boils down to:
Should any consumer of this interface need such a degree of freedom as to be able to produce any value from HttpResponse?
Will it lead to an easy-to-maintain design, or will it be a proverbial callback hell?
Should any consumer of that interface need such a degree of freedom as to be able to produce any value from HttpResponse?
If we assume that we need to create many WebApiClients like
public class FooApiClient : IFooApiClient
{
    private readonly IWebRequestWrapper _webRequestWrapper;

    public FooApiClient(IWebRequestWrapper webRequestWrapper) =>
        _webRequestWrapper = webRequestWrapper ?? throw new ArgumentNullException(nameof(webRequestWrapper));

    public FooBaseData GetBaseData(string id) =>
        _webRequestWrapper.DoRequest(id, response => new FooBaseData...);

    public FooAdvancedData GetAdvancedData(string id) =>
        _webRequestWrapper.DoRequest(id, response => new FooAdvancedData...);
}
Though unless we really need some custom HttpResponse processing, I'd rather expect that API to have Swagger to allow those API clients to be generated automatically.
and our HttpResponse processing logic is at least slightly non-trivial (I mean not just get Json, parse if HttpStatusCode.Success/throw if anything else)
or if we have some generic(defined at run-time) endpoints
var result = _webRequestWrapper.DoRequest(
    TextBox1.Text,
    response => Deserializer.Deserialize(GetType(ComboBox1.Type), ...));
then it is a pretty reasonable design
It is composable (you can easily reuse it anywhere)
It makes consumer classes testable (specifically, the non-trivial HttpResponse processing logic that produces your actual domain objects)
Well, not with the real HttpResponse, but I assume that was not meant literally
In the worst case we will just have to use some one-to-one wrapper MyHttpResponse that can be easily created/mocked.
It is concise and easy to use
Basically IEnumerable.Select for WebAPI calls.
Then what are the disadvantages?
Well, it is yet another abstraction layer. You gain some composability and testability, but you also have to maintain one more class and interface. Overall not a bad deal, but still, unless you really need/will use that composability and testability, it's a price paid for little-to-nothing.
It protects against someone forgetting to dispose the WebResponse, but it doesn't protect the same person from forgetting that it will be disposed, so the response should not/cannot be cached and values cannot be lazily initialized. Though you may say that it is obvious, and that the problem stands basically the same for an inlined using (...HttpClient.GetResponse(...)), any abstraction makes things more difficult to spot.
Again, if we really want to avoid such risks, we can use some wrapper class that is created as a full snapshot of the received HttpResponse. Though it will lead to a small (though usually irrelevant) performance loss.
Will it lead to an easy-to-maintain design, or will it be a proverbial callback hell?
What the first comment probably meant by its vehemence is that callbacks lead to callback hell. And that's true: those should be avoided at all costs.
But in this case it is a synchronous callback, not the dreaded asynchronous one from JavaScript.
Unless you nest more callbacks inside it, it is not much different from the regular Enumerable.Select, which can be misused just as well.
Overall
As the second question is a non-issue, we are left with the first one.
So if
There is very little logic in callbacks (no need to test it)
And/Or there are very few distinct callbacks
And/Or callbacks are often reused (code duplication)
Then it probably makes little sense to have such interface. Strongly typed API clients are the way to go (especially if they can be just generated with NSwagStudio or the likes).
But if none of those points apply (or apply very weakly), then such abstraction is a perfectly viable solution.
As always there are no absolute answers - everything depends on the exact situation.
About the testable part: I really thought it gets harder to test this way, instead of returning a specific DTO.
I meant cases when
our HttpResponse processing logic is at least slightly non-trivial (I mean not just get Json, parse if HttpStatusCode.Success/throw if anything else)
For example,
DoRequest("PeculiarApi", response =>
{
    if ((response.StatusCode == HttpStatusCode.OK) ||
        (response.StatusCode == HttpStatusCode.Conflict))
    {
        if (response.Headers.TryGet(Headers.ContentType, out var contentType) &&
            (contentType == ContentType.ApplicationJson))
        {
            return JsonConvert.DeserializeObject<TModel>(response.Content);
        }
        throw new Exception("No content type");
    }
    if (response.StatusCode == (HttpStatusCode)599) // Yes, that's what they return
    {
        return XmlConvert.Deserialize<T599Model>(response.Content).Errors[2];
    }
    ...
});
is a piece of non-trivial logic.
This logic can be easily tested through mocked IWebRequestWrapper.
Can it be tested with regular HttpClient? Yes, but that will require some boilerplate that is not needed with IWebRequestWrapper.
By itself it is not worth much (not a deal-breaker), but if we already have to use IWebRequestWrapper it is at least a minor simplification.
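As a sketch of that "tested through mocked IWebRequestWrapper" claim, here is a self-contained, hand-rolled fake (no mocking library) that feeds a canned response into whatever callback the client passes. All type names besides IWebRequestWrapper are invented for illustration, and the wrapper's parameter list is simplified to a single string:

```csharp
using System;

public class FakeHttpResponse            // stand-in for the MyHttpResponse snapshot wrapper
{
    public int StatusCode { get; set; }
    public string Content { get; set; }
}

public interface IWebRequestWrapper
{
    T DoRequest<T>(string id, Func<FakeHttpResponse, T> callback);
}

public class FakeWebRequestWrapper : IWebRequestWrapper
{
    private readonly FakeHttpResponse canned;
    public FakeWebRequestWrapper(FakeHttpResponse canned) => this.canned = canned;

    // Runs the caller's callback against the canned response, no HTTP involved.
    public T DoRequest<T>(string id, Func<FakeHttpResponse, T> callback) => callback(canned);
}

public static class Demo
{
    public static void Main()
    {
        var wrapper = new FakeWebRequestWrapper(
            new FakeHttpResponse { StatusCode = 599, Content = "oops" });

        // The non-trivial branch (599 => error text) is exercised directly.
        string result = wrapper.DoRequest("PeculiarApi", r =>
            r.StatusCode == 599 ? "error: " + r.Content : r.Content);

        Console.WriteLine(result);  // error: oops
    }
}
```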
I have a class which is basically a pipeline. It processes messages and then deletes them in batches. In order to do this, the ProcessMessage() method doesn't directly delete messages; it adds them to a private IObservable<IMessage>. I then have another public method which watches that observable and deletes the messages en masse.
That results in code similar to:
public void CreateDeletionObservable(int interval = 30, int messageCount = 10)
{
    this.processedMessages.Buffer(TimeSpan.FromSeconds(interval), messageCount).Subscribe(buffer =>
    {
        client.Value.DeleteMessages(buffer.ToList());
    });
}
The problem is that my unit test doesn't have a value for processedMessages. I can't provide a moq'd value as it's private. I don't need to test what values are in processedMessages; I just need for them to exist in order to test that method's behavior. Specifically I need to test that my observable will continue running if an exception is thrown (that logic isn't in the code yet). As I see it I have a few options:
1) Refactor my class to use a single monster observable chain with a single entry point and a few exits (success, error, retry, etc.). This would avoid the use of private properties to pass collections around between public methods. However, that chain would be extremely difficult to parse much less unit test. I don't believe that making my code less readable and testable is a viable option.
2) Modify my CreateDeletionObservable method to accept a test list of Messages:
public void CreateDeletionObservable(int interval = 30, int messageCount = 10, IObservable<IMessage> processedMessages = null)
That would allow me to supply stubbed data for the method to use, but it's a horrible code smell. A variation on this is to inject that Observable at the constructor level, but that's no better. Possibly worse.
3) Make processedMessages public.
4) Don't test this functionality.
I don't like any of these options, but I'm leaning towards 2; injecting a list for testing purposes. Is there an option I'm missing here?
Your senses serve you well. I think in this case you can revert to guidance I find useful: "Test your boundaries" (Udi Dahan, but I can't find the reference).
It seems that you can input messages (via an observable sequence) and that, as a side effect, you will eventually delete these messages from the client. So it seems that your test should read something like:
"Given an EventProcessor, When 10 Messages are Processed, Then the Events are deleted from the client"
"Given an EventProcessor, When 5 Messages are Processed in 30s, Then the Events are deleted from the client"
So instead of testing this small part of the pipe that somehow knows about this.processedMessages (where did that instance come from?), test the chain. But this doesn't mean you need to create a massive unusable chain. Just create enough of the chain to make it testable.
Providing more of the code base would also help, e.g. where do this.processedMessages and client.Value come from? This is probably key, and at a guess, applying a more functional approach might help.
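One way to "create enough of the chain to make it testable" is to pass the source sequence and the scheduler in explicitly, then drive virtual time from a test with Rx's TestScheduler. This is a sketch under assumptions: `DeletionPipeline`, `IMessageClient`, `FakeClient`, and `FakeMessage` are invented names, and it requires the System.Reactive and Microsoft.Reactive.Testing packages:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reactive.Concurrency;
using System.Reactive.Linq;

public interface IMessage { }
public interface IMessageClient { void DeleteMessages(List<IMessage> messages); }

public class DeletionPipeline
{
    // The chain's inputs are parameters, so a test never touches private state.
    public IDisposable Start(IObservable<IMessage> processed, IMessageClient client,
                             IScheduler scheduler, int interval = 30, int messageCount = 10)
    {
        return processed
            .Buffer(TimeSpan.FromSeconds(interval), messageCount, scheduler)
            .Where(batch => batch.Count > 0)                 // skip empty buffers
            .Subscribe(batch => client.DeleteMessages(batch.ToList()));
    }
}

// In a test class inheriting Microsoft.Reactive.Testing.ReactiveTest:
//
// var scheduler = new TestScheduler();
// var source = scheduler.CreateHotObservable(
//     OnNext(TimeSpan.FromSeconds(1).Ticks, (IMessage)new FakeMessage()));
// var client = new FakeClient();
// new DeletionPipeline().Start(source, client, scheduler);
// scheduler.AdvanceBy(TimeSpan.FromSeconds(31).Ticks);
// // client has now received the buffered message, in virtual time.
```

This reads as "Given a pipeline, When a message is processed and 30s pass, Then the client deletes it", matching the test names suggested above.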
I have a command line application that has to be able to perform one of a number of discrete tasks, given a verb as a command line argument. Each task is handled by a class, each of which implements an interface containing a method Execute(). I'm trying to do this without using if or switch statements. So far, what I have is this:
var taskTypeName = $"MyApp.Tasks.{invokedVerb}Task";
var taskType = Type.GetType(taskTypeName, false);
var task = Activator.CreateInstance(taskType) as IMaintenanceTask;
task.Execute();
task is of type IMaintenanceTask, which is fundamentally what I'm trying to achieve. I'd prefer to avoid using dynamic - my understanding is that if it's only used once, like here, I won't see any of the benefits of caching, making it just reflection in fewer keystrokes.
Is this approach (or something along the same lines) likely to noticeably affect performance? I know it definitely increases the chance of runtime exceptions/bugs, but that's partly mitigated by the fact that this application is only going to be run via scripts; it will only deal with predictable input, and this will be the only place in the code that behaves dynamically. Is what I'm trying to achieve sensible? Or would it be better to just do this the boring, normal way: switch on the input, construct each type of task via a normal compile-time constructor, and call .Execute() on that?
As it is just a one-time call, you can go with your solution. Just add a few checks to guard against exceptions:
var taskTypeName = $"MyApp.Tasks.{invokedVerb}Task";
var taskType = Type.GetType(taskTypeName, false);

if (taskType != null && typeof(IMaintenanceTask).IsAssignableFrom(taskType))
{
    var task = Activator.CreateInstance(taskType) as IMaintenanceTask;
    task.Execute();
}
Don't worry about the performance of a dispatch mechanism unless it is in a tight loop. Switching a single direct method call to a single call through dynamic, through reflection, through the emit API, or through a compiled LINQ expression will not make a detectable difference in the execution time of your application. The time it takes the operating system to start your application is several orders of magnitude greater than the time it takes your application to decide which method to call, so your solution is as good as a switch, except that it is a lot shorter (which is a good thing).
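If the reflection lookup ever feels too loose, a middle ground between it and a switch is an explicit verb-to-factory dictionary: still one line per task, but a renamed or deleted task class fails at compile time rather than at runtime. `BackupTask` and `CleanupTask` are invented stand-ins for the real task classes:

```csharp
using System;
using System.Collections.Generic;

public interface IMaintenanceTask { void Execute(); }

// Hypothetical concrete tasks, standing in for the real MyApp.Tasks.*Task classes.
public class BackupTask : IMaintenanceTask
{
    public void Execute() => Console.WriteLine("backup ran");
}

public class CleanupTask : IMaintenanceTask
{
    public void Execute() => Console.WriteLine("cleanup ran");
}

public static class Program
{
    public static void Main(string[] args)
    {
        var invokedVerb = args.Length > 0 ? args[0] : "backup";

        // One entry per verb; lookups are case-insensitive like most CLI verbs.
        var tasks = new Dictionary<string, Func<IMaintenanceTask>>(StringComparer.OrdinalIgnoreCase)
        {
            ["backup"] = () => new BackupTask(),
            ["cleanup"] = () => new CleanupTask(),
        };

        if (tasks.TryGetValue(invokedVerb, out var factory))
            factory().Execute();
        else
            Console.Error.WriteLine($"Unknown verb: {invokedVerb}");
    }
}
```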
I am developing a C# WinForms application that contains a web browser control. The application contains a "scripting bridge" class that allows Javascript code to call into the web browser control (I need to know when certain functions are called in the JS code in order to perform some action in the WinForms application). Most of these operations are asynchronous because when I launch a request from the WinForms application, it will typically perform an ajax request within the JS code (not the C# code). Since this is an asynchronous operation, I was trying to come up with a better/easier way to manage the subscriptions/timeouts/error handling, etc. for these asynchronous events. I came across Reactive Extensions and decided to try it out.
I'm trying to determine if I am doing this correctly or not. I'm trying to wrap my head around Reactive Extensions. It's difficult to find simpler examples on the net for a lot of the Observable extension methods. Here is what I am doing right now:
public void SetupObservable()
{
    IConnectableObservable<string> javascriptResponseObservable = Observable.Create<string>(
        (IObserver<string> observer) =>
        {
            observer.OnNext("Testing");
            observer.OnCompleted();
            return Disposable.Create(() => Console.WriteLine("Observer has unsubscribed"));
        })
        .Timeout(DateTimeOffset.UtcNow.AddSeconds(5))
        .Finally(() => Console.WriteLine("Observable sequence completed"))
        .Publish();

    IObserver<string> testObserver = Observer.Create<string>(
        value => Console.WriteLine(value),
        e => Console.WriteLine("Exception occurred: " + e.Message),
        () => Console.WriteLine("Completed"));

    IDisposable unsubscriber = javascriptResponseObservable.Subscribe(testObserver);
}
// The following will be executed later (once the ajax request is completed)...
// Fire the event and notify all observers. If it took too long to get to this point then the sequence will time out with an exception.
public void OnSomeJavascriptFunctionCall()
{
    // Somehow get the javascriptResponseObservable object...
    javascriptResponseObservable.Connect();
}
I feel like I am doing this the wrong way or that there is a better way to accomplish this. For example, how do you retrieve the IObservable that was created earlier so that you can call more methods on it? Would I have to persist it in the class or somewhere else? It seems like a lot of the examples don't do this, so it seems like I am doing something fundamentally wrong. Also, if several observers are subscribing to the IObservable from different classes, etc., again, how do you keep track of the IObservable? It seems like it needs to be persisted somewhere after it is created. Is there an Observable.GetExistingObservable() method of some sort that I am missing?
I feel like I am doing this the wrong way or that there is a better way to accomplish this.
Wrong is always a point of view, but I would argue, yes there is a better way to solve what you are doing.
I assume that your JavaScript bridge has some sort of way of raising events, and that this is how it is able to call you back? If so, then you will want to leverage that callback and bridge it to Rx using Observable.Create, Observable.FromEvent*, or another Rx factory method.
That would be your first step; then you would need to pass your "commands" to your JS layer. This is where you would need to remember to subscribe to your callback sequence before you issue the command, to mitigate any race conditions.
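A sketch of that bridge, assuming the scripting-bridge class exposes a plain .NET event when the JS side calls back. `ScriptingBridge`, its event, and `InvokeCommand` are invented for illustration (here the "JS reply" is simulated synchronously); it requires the System.Reactive package:

```csharp
using System;
using System.Reactive.Linq;

public class ScriptingBridge
{
    public event Action<string> ResponseReceived;

    // Hypothetical stand-in for calling into the JS layer; it "replies" immediately.
    public void InvokeCommand(string name) => ResponseReceived?.Invoke("reply to " + name);
}

public static class Demo
{
    public static void Main()
    {
        var bridge = new ScriptingBridge();

        // Bridge the .NET event into an Rx sequence.
        IObservable<string> responses = Observable.FromEvent<string>(
            h => bridge.ResponseReceived += h,
            h => bridge.ResponseReceived -= h);

        // Subscribe BEFORE issuing the command (avoiding the race mentioned above),
        // and let Rx handle the timeout instead of hand-rolled timers.
        using (responses
            .Timeout(TimeSpan.FromSeconds(5))
            .Subscribe(
                json => Console.WriteLine("Got: " + json),
                ex => Console.WriteLine("Timed out or failed: " + ex.Message)))
        {
            bridge.InvokeCommand("loadUser");  // prints "Got: reply to loadUser"
        }
    }
}
```

No one needs to "retrieve the IObservable" later: each command creates a fresh subscription scoped to that request/response exchange.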
It is difficult to help any more, as you only show Rx code that seems to serve no purpose except trying to understand Rx, and no code that shows what you are trying to achieve in the C#-to-Js bridge. Please provide a "Minimum Complete Verifiable Example" - https://stackoverflow.com/help/mcve
I want to store delegates (Action) with one generic parameter in a Dictionary, and I would like to avoid any code smell regarding down/up-casting, if that is possible at all.
Basically I am implementing some kind of Request/Response Callback Manager, where the user requests something from a REST API and provides me a callback with the correct response object. In a nutshell, the method the user calls looks like this:
void GetUser(int id, Action<GetUserResponse> callback);
I then send out the request and save the callback into my "Callback Manager". When the response from the server comes in (mostly JSON), I parse it into a GetUserResponse object and fire the callback. However, there are many different requests which have different response objects, and the Callback Manager has to prioritize them (and also do some other stuff).
Instead of having a Dictionary for every single request which stores the callbacks of that request, I would like to have a single Dictionary which stores all of the callbacks (together with a unique id).
Basically something like this: (which obviously does not work like that)
Dictionary<Guid, Action<T>> AllCallbacks;
Is that possible without having to cast anything on the "user side"?
Have a look at Dictionary of Action<T> Delegates.
Should give you some guidance of options available to you. I don't think there is a simple and elegant solution out of the box.
You could use Dictionary<Guid, Action<dynamic>> AllCallbacks; but then you would need to type-check and cast accordingly.
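A common way to keep the cast off the user side is to perform it exactly once, inside the manager: store the callbacks as Action<object> and wrap the typed delegate at registration time. This is a sketch; `CallbackManager` and its method names are invented:

```csharp
using System;
using System.Collections.Generic;

public class CallbackManager
{
    private readonly Dictionary<Guid, Action<object>> callbacks =
        new Dictionary<Guid, Action<object>>();

    // The only cast in the system lives inside this wrapper lambda.
    public Guid Register<T>(Action<T> callback)
    {
        var id = Guid.NewGuid();
        callbacks[id] = obj => callback((T)obj);
        return id;
    }

    // Called once the response is parsed; removes and fires the stored callback.
    public void Fire(Guid id, object response)
    {
        if (callbacks.TryGetValue(id, out var callback))
        {
            callbacks.Remove(id);
            callback(response);
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var manager = new CallbackManager();

        // User side stays cast-free:
        var id = manager.Register<string>(user => Console.WriteLine("Got: " + user));
        manager.Fire(id, "Alice");  // prints "Got: Alice"
    }
}
```

The trade-off is that the type check moves to runtime: Fire will throw an InvalidCastException if the parsed object doesn't match the registered T, so the manager must pair each id with the right response type.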