I don't know how to check whether I'm right or wrong, so your help would be great.
A. From my understanding, IsOneWay=true means the client doesn't wait for the method to finish, so the service executes the method whenever it wants. But, in some cases, will the service use multi-threading to execute the method?
B. When I use ConcurrencyMode.Multiple, what is the difference between using IsOneWay=true and IsOneWay=false?
ConcurrencyMode and the messaging pattern are not that directly related.
IsOneWay affects how the client and the server interact.
ConcurrencyMode is a server-side concern; the client is not aware of this setting.
From: http://msdn.microsoft.com/en-us/library/ms751496.aspx
HTTP is, by definition, a request/response protocol; when a request is made, a response is returned. This is true even for a one-way service operation that is exposed over HTTP. When the operation is called, the service returns an HTTP status code of 202 before the service operation has executed. This status code means that the request has been accepted for processing, but the processing has not yet been completed. The client that called the operation blocks until it receives the 202 response from the service. This can cause some unexpected behavior when multiple one-way messages are sent using a binding that is configured to use sessions. The wsHttpBinding binding used in this sample is configured to use sessions by default to establish a security context. By default, messages in a session are guaranteed to arrive in the order in which they are sent. Because of this, when the second message in a session is sent, it is not processed until the first message has been processed. The result of this is that the client does not receive the 202 response for a message until the processing of the previous message has been completed. The client therefore appears to block on each subsequent operation call. To avoid this behavior, this sample configures the runtime to dispatch messages concurrently to distinct instances for processing. The sample sets InstanceContextMode to PerCall so that each message can be processed by a different instance. ConcurrencyMode is set to Multiple to allow more than one thread to dispatch messages at a time.
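To make the two settings concrete, here is a minimal sketch (the contract and type names are made up for illustration) showing where each setting lives: IsOneWay on the operation contract, and InstanceContextMode/ConcurrencyMode on the service implementation, matching the configuration described in the quoted sample.

using System.ServiceModel;

[ServiceContract]
public interface IJobService
{
    // Fire-and-forget: the client only waits for the transport-level acknowledgement (e.g. the HTTP 202)
    [OperationContract(IsOneWay = true)]
    void SubmitJob(string jobId);

    // Request/reply: the client waits for the return value
    [OperationContract]
    string GetJobStatus(string jobId);
}

// Server-side only: the client is never aware of these settings
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class JobService : IJobService
{
    public void SubmitJob(string jobId) { /* ... */ }
    public string GetJobStatus(string jobId) { return "Pending"; }
}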
In MassTransit, if you want to await the execution of a consumer so that you can get the response, there is IRequestClient<TCommand>, which has a GetResponse<TResponse>(command) method. Is that the only way you can await the execution of a consumer in MassTransit?
What I want to know after publishing is whether the consumer executed successfully or errored out; if it errored out, I want to be able to notify interested parties that the command failed.
It is the easiest way, yes. If you have a method that needs to publish/send a message and wait (via await, in this case) for a consumer to consume the message, using the request client creates a unique RequestId and specifies the response address so that the consumer can notify the requestor via a response.
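As a rough sketch of the request client usage (the SubmitOrder/OrderAccepted contracts and the DI registration are hypothetical, purely for illustration):

using System;
using System.Threading.Tasks;
using MassTransit;

public record SubmitOrder(Guid OrderId);   // hypothetical command
public record OrderAccepted(Guid OrderId); // hypothetical response

public class OrderSubmitter
{
    // typically registered via AddRequestClient<SubmitOrder>() in the bus configuration
    readonly IRequestClient<SubmitOrder> _client;

    public OrderSubmitter(IRequestClient<SubmitOrder> client) => _client = client;

    public async Task<Guid> Submit(Guid orderId)
    {
        // awaits until the consumer calls context.RespondAsync(new OrderAccepted(...)),
        // or the request times out / faults
        Response<OrderAccepted> response = await _client.GetResponse<OrderAccepted>(new SubmitOrder(orderId));
        return response.Message.OrderId;
    }
}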
If you're really more interested in knowing if there was an exception consuming the message, you can create a separate consumer that consumes Fault<TCommand>. If the consumer throws an exception, MassTransit will publish a fault message of this type which can then be consumed to deal with the exception.
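A sketch of such a fault consumer, reusing the hypothetical SubmitOrder command from the previous example, might look like this:

using System;
using System.Linq;
using System.Threading.Tasks;
using MassTransit;

public record SubmitOrderFailed(Guid OrderId, string Reason); // hypothetical notification

public class SubmitOrderFaultConsumer : IConsumer<Fault<SubmitOrder>>
{
    public Task Consume(ConsumeContext<Fault<SubmitOrder>> context)
    {
        // context.Message.Message is the original command,
        // context.Message.Exceptions contains the exception details
        var original = context.Message.Message;
        var reason = context.Message.Exceptions.FirstOrDefault()?.Message;

        // notify interested parties that the command errored out
        return context.Publish(new SubmitOrderFailed(original.OrderId, reason));
    }
}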
Note that if the request client is used, faults are only sent back to the response address and are not published.
Beyond those basic capabilities, sagas may also be used to orchestrate the original message, faults, etc. if so required.
I'm working with a web API that will return code 404 when querying data that doesn't exist, or other errors if the data is malformed or there's some other problem, which then results in an HttpRequestException.
Now I'm thinking about a detail. I'm using Polly on that HttpClient connection to ensure it retries in case of communication problems.
In this case, will it work as expected, or will Polly keep retrying in the case of server-thrown errors like "not found" or "bad request"?
I'm configuring it like this:
services.AddHttpClient<OntraportHttpClient>()
.AddTransientHttpErrorPolicy(p =>
p.WaitAndRetryAsync(3, _ => TimeSpan.FromMilliseconds(600)));
You have a bit of a misunderstanding: 400 Bad Request or 404 Not Found will not result in an HttpRequestException,
unless you call EnsureSuccessStatusCode explicitly.
AddTransientHttpErrorPolicy will check the following:
408 Timeout
5xx Server error
HttpRequestException
So, as you can see, neither 400, 404, nor 429 Too Many Requests (a typical response code in case of back-pressure) will cause your Polly policy to be triggered, unless you explicitly call the EnsureSuccessStatusCode method.
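If you do want the policy to cover some of those status codes as well, one option (a sketch, not the only way) is to build it from HandleTransientHttpError and add the extra codes explicitly:

using System;
using System.Net;
using Polly;
using Polly.Extensions.Http;

// inside ConfigureServices
services.AddHttpClient<OntraportHttpClient>()
    .AddPolicyHandler(
        HttpPolicyExtensions
            .HandleTransientHttpError()                              // HttpRequestException, 5xx, 408
            .OrResult(r => r.StatusCode == HttpStatusCode.NotFound)  // 404 - only if retrying it really makes sense
            .OrResult(r => (int)r.StatusCode == 429)                 // 429 Too Many Requests
            .WaitAndRetryAsync(3, _ => TimeSpan.FromMilliseconds(600)));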
UPDATE: Adding DELETE use case
Use Case
Let's suppose we have a REST service which exposes removal functionality for a given resource (addressed by a particular URL and the DELETE HTTP verb).
This removal can end up in one of 3 different states from the consumer's point of view:
Succeeded
Already Done
Failed
You can find several arguments on the internet about which is the correct status for Succeeded. It can be either 200 (OK) with a body, 204 (No Content) without a body, or 202 (Accepted) if the operation is asynchronous. Sometimes 404 (Not Found) is also used.
The Already Done state can occur when you try to delete an already deleted item. Without soft deletion it is hard to tell whether the given resource ever existed or was never part of your system. If you have soft deletion, then the service could return 404 for an already deleted resource and 400 (Bad Request) for an unknown resource.
Whenever something fails during request processing it can be treated as a temporary or a permanent failure. If there is a network issue then it can be considered a temporary/transient issue (this can manifest as an HttpRequestException). If there is a database outage and the service is able to detect it, then it can fail fast and return a 5XX response, or it can try to fail over. If there are too many pending requests then the service may decide to throttle them and use back-pressure to shed the load; it might return 429 (Too Many Requests) along with an appropriate Retry-After header.
Permanent errors, like a service that has been shut down for good or active refusal of network connection attempts under TLS 1.3, need human intervention to fix.
Idempotency
Whenever we are talking about the retry pattern we need to consider the following:
The potentially introduced observable impact is acceptable
The operation can be redone without any irreversible side effect
The introduced complexity is negligible compared to the promised reliability
The second criterion is usually referred to as idempotency. It says that if you call the method/endpoint multiple times with the same input, then it should return the same output without any additional side effects.
If your service's removal functionality can be considered idempotent then there is no such state as Already Done. If you call it 100 times then it should always return with "yepp, that's gone". With this in mind it might make sense to return either 204 or 404 in case of idempotent deletion.
Resilient strategy
Whenever we are talking about a strategy, I mean a chain of resilience policies. If an earlier policy could not "fix" the problem, then a later one tries to do so (so there is a policy escalation).
Server-side: You can use a bulkhead policy to have control over the maximum number of concurrent calls, and if the threshold has been exceeded you can start to throttle requests.
Client-side: You can have a timeout for each individual request, and you can apply a retry policy in case of temporary/transient failure. You can also define a global timeout for all your retry attempts. Or you can apply a circuit breaker to monitor successive failures and back off for a given period of time if the service is deemed overwhelmed or malfunctioning.
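As a sketch only (the numbers and the exact policy mix are illustrative assumptions, not a recommendation), such a client-side chain could be composed with Polly like this:

using System;
using System.Net.Http;
using Polly;
using Polly.Extensions.Http;
using Polly.Timeout;

// innermost: a timeout for each individual attempt
var perTryTimeout = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(2));

// retry transient failures (including per-try timeouts), waiting between attempts
var retry = HttpPolicyExtensions
    .HandleTransientHttpError()
    .Or<TimeoutRejectedException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromMilliseconds(400 * attempt));

// back off for a while after a run of successive failures
var circuitBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

// outermost: a global timeout covering all retry attempts together
var globalTimeout = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10));

// escalation chain: global timeout -> retry -> circuit breaker -> per-try timeout
var strategy = Policy.WrapAsync(globalTimeout, retry, circuitBreaker, perTryTimeout);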
My 2 cents: applying a single resilience policy on the client side might not be enough to have a robust and resilient system. It might require several policies (on both sides) to establish a communication protocol for problematic periods.
I need to be able to call my SS services from the controllers of an MVC application. Ideally I'd like to call them in-process to avoid the overhead of building an HTTP request, etc.
From scouring the documentation I feel there are 2 suggested methods, but neither works fully.
1) ServiceGateway - Use the service gateway. This calls validation filters, but does not call other custom filters I've added. There is no applyFilters option.
2) HostContext.ServiceController.Execute - There is a dedicated option on this method called applyFilters, and when I set it to true it works and applies filters and validation (though it only executes GlobalFilters, not TypedRequestFilters). However, if the [CacheResponse] attribute is set on the service, it writes and flushes a response to my client, overriding the flow of the MVC controller, and I don't know how to stop this. It does not do this if I set applyFilters to false or if I remove [CacheResponse]. Changing the priority of the cache has no effect.
I'm calling the Execute method as follows from within an Action method on my controller:
HostContext.ServiceController.Execute(serviceRequest, HostContext.GetCurrentRequest(), true);
Before this method even returns control, a response is flushed to the web page in Chrome, and then nothing/null is returned from the method.
I feel that point 1) is missing a feature and point 2) is a bug in the implementation, though I am not confident enough in my knowledge of SS to remedy either! Please help!
Thanks.
Filters are executed as part of the HTTP Request Pipeline and can terminate the current Request with a custom HTTP Response. You can check IRequest.IsClosed after executing the Request to check whether it has been terminated by a Request Filter. Their behavior is incompatible with internal Gateway requests, so there's no option to execute them in the Gateway.
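A rough sketch of that check from inside an MVC action (assuming the Execute call from the question; here IsClosed is read off the request's Response object):

// inside an MVC action; serviceRequest is the request DTO being executed
var req = HostContext.GetCurrentRequest();
var response = HostContext.ServiceController.Execute(serviceRequest, req, applyFilters: true);

// if a Request Filter (or [CacheResponse]) has already written and terminated the response,
// stop here instead of continuing the normal MVC flow
if (req.Response.IsClosed)
    return new EmptyResult();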
I've marked these ServiceController methods as an In Process Request in this commit which should resolve the issue with the [CacheResponse] attribute which ignores In Process Requests.
This change is available from v4.5.13 that's now available on MyGet.
In reliable request-reply I understand that the reply is acknowledged and reliable. If for some reason the reply message continually fails on all 8 attempts (the default number of retries being 8), then the channel will be faulted.
In the server side service method, I need to take action if the reply fails, but I cannot see how I can achieve this as the service method is unaware of the WCF context.
/// <summary>
/// This is my service method, and does the reply in reliable request reply
/// </summary>
/// <returns></returns>
public IModelJob GetNextJob()
{
    // dequeue the next item if there is any
    var modelJob = _priorityQueue.Dequeue();

    // if all attempts to reply fail (or at least fail to be acknowledged),
    // then when and how do I get a chance to requeue this job?
    return modelJob;
}
It seems much easier to handle failure when you are the client and calling a service method on the proxy itself, as you can implement your own proxy from ClientBase.
I've read: http://msdn.microsoft.com/en-us/library/aa480191.aspx, and searched about but can find nothing specific.
Think of it in terms of the business operation you are ultimately supporting. If, for example, the service expects a sequence of messages from the client at regular 30-minute intervals, then you may have a requirement (in the business sense) that if a message is not seen for 120 minutes, the service should notify the administrator. This would be implemented in the business logic that drives your service.
It is not a shortcoming of WCF that you cannot have it throw an exception when it hasn't received a message - how would it know it was supposed to expect one in the first place?
Bear in mind that Reliable Messaging works at a layer below the application, just as TCP retransmissions take place without your HTTP application being aware at all. The fact that at the TCP level there needed to be a retransmission, or several, is not a concern of the recipient, and certainly does not throw exceptions up the protocol stack. It's up to the sender of the data to ultimately detect that the data could not be sent, and do something about it. Or, in the example I gave, it's up to the business logic behind the service to implement the requirement at a business level.
You may be interested in a blog post I wrote about some of the shortcomings of WS-ReliableMessaging.
This is on .Net 4, full framework.
I'm trying to make a simple WinForms app that will make some simple WCF REST calls. It's using ChannelFactory and the service contract interface. Of the ~20 methods in the interface, 2 of them involve Stream (an upload and a download method), so the service side (and currently also the client side) is using TransferMode=Streamed.
My goal is to include the full HTTP request and response (much like you would see in ethereal/wireshark, or fiddler, or whatever), with headers, in a textbox of the winforms app (just to show what went over the wire)
In trying to use the built-in diagnostics (via SvcConfigEditor) and my own (implementing IClientMessageInspector, then IEndpointBehavior to add the inspector, then channelFactory.Endpoint.Behaviors.Add to add the behavior), I'm having 2 issues:
When doing request.ToString() or reply.ToString() in BeforeSendRequest and AfterReceiveReply, it only gets the 'body' and not the headers. Digging around in the objects in the debugger, it looks like the reply has them in reply.Properties["httpResponse"], but the request's request.Properties["httpRequest"] has an empty Headers property, even though Fiddler shows headers for Content-Type, Host, Accept-Encoding, and Connection. It seems like there's a better way to get the 'raw' message that I'm missing (and if there's not, someone probably knows an existing chunk of code to 'reconstruct' the raw message from the Message object).
Since the transfer mode is Streamed, the 'body' part just shows up as the string '... stream ...', both in SvcTraceViewer (and the 'raw' svclog - even with logEntireMessage=true) and when doing a ToString(). If the mode is Buffered instead, it shows the actual body fine. I tried making a copy with reply.CreateBufferedCopy(int.MaxValue); but that then caused the actual WCF call to fail with an InvalidOperationException: This message cannot support the operation because it has been copied.
One fallback would be to move the client to Buffered and just change to StreamedRequest for the one upload call and StreamedResponse for the download call (but I'd have to do that programmatically AFAICT, as it's set at the binding level in the config and I don't see any way of doing it via attributes on the calls). That would take care of the 'body' part and leave me with just the "get the HTTP request headers" problem (issue #1, specifically request.Properties["httpRequest"].Headers being empty) to deal with, but I'm hoping there's some way of logging the 'raw' messages without doing so, leaving the TransferMode as Streamed.
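For reference, a bare-bones version of the inspector/behavior wiring described above, pulling headers out of the message properties rather than relying on ToString() (a sketch of the general approach only; it doesn't solve the streaming or empty-Headers issues):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class HttpLoggingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // request.ToString() only yields the body; any HTTP headers already attached live here
        object prop;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out prop))
        {
            var http = (HttpRequestMessageProperty)prop;
            Console.WriteLine(http.Method + " " + channel.RemoteAddress.Uri + "\r\n" + http.Headers);
        }
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        object prop;
        if (reply.Properties.TryGetValue(HttpResponseMessageProperty.Name, out prop))
        {
            var http = (HttpResponseMessageProperty)prop;
            Console.WriteLine((int)http.StatusCode + " " + http.StatusDescription + "\r\n" + http.Headers);
        }
    }
}

public class HttpLoggingBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // added via channelFactory.Endpoint.Behaviors.Add(new HttpLoggingBehavior())
        clientRuntime.MessageInspectors.Add(new HttpLoggingInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}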
Thanks!
I can't find any reference right now, but it's a known fact that you cannot capture the contents of a streamed message in WCF tracing. When streaming is enabled, only the headers of the message will be traced.
Here's the source: Configuring Message Logging on MSDN
See towards the end of the page:
Service Level
Messages logged at this layer are about to enter (on receiving) or leave (on sending) user code. If filters have been defined, only messages that match the filters are logged. Otherwise, all messages at the service level are logged. Infrastructure messages (transactions, peer channel, and security) are also logged at this level, except for Reliable Messaging messages. On streamed messages, only the headers are logged. In addition, secure messages are logged decrypted at this level.