There is very little documentation (that I found) on how the distributed RedisEvents work in ServiceStack.
The documentation says:
One limitation the default MemoryServerEvents implementation has is being limited for use within a single App Server where all client connections are maintained. This is no longer a limitation with the new Redis ServerEvents back-end which utilizes a distributed redis-server back-end to provide a scale-out option capable of serving fan-out/load-balanced App Servers. If you’re familiar with SignalR, this is akin to SignalR’s scaleout with Redis back-end.
It also shows how to add the plug-in, but beyond that there is nothing on how events are distributed, how you post a distributed event, or how you determine which node should react to it and post to the channel that will reach the correct end client.
Am I missing something or is there almost no documentation on this?
The documentation for RedisServerEvents is at: http://docs.servicestack.net/redis-server-events
There is no difference in the API between the In Memory and Redis Server Events backends, which both work transparently behind the IServerEvents API. The only difference is in registration, where you need to register RedisServerEvents with your configured IRedisClientsManager:
var redisHost = AppSettings.GetString("RedisHost");
if (redisHost != null)
{
    container.Register<IRedisClientsManager>(
        new RedisManagerPool(redisHost));

    container.Register<IServerEvents>(c =>
        new RedisServerEvents(c.Resolve<IRedisClientsManager>()));

    container.Resolve<IServerEvents>().Start();
}
This replaces the default Memory IServerEvents with the RedisServerEvents implementation, which relays API calls over Redis Pub/Sub to notify all App Servers configured with the same RedisServerEvents setup; each App Server then sends the Server Event to the clients connected to its local /event-stream.
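Because the backend swap is transparent, publishing a distributed event uses the same IServerEvents API in either case; a minimal sketch (the channel, selector, and DTO names here are illustrative, not from the ServiceStack docs):

```csharp
using ServiceStack;

// Hypothetical request DTO for this sketch
public class PostNotification
{
    public string Message { get; set; }
}

public class NotifyService : Service
{
    // Injected from the IOC; resolves to RedisServerEvents when registered
    public IServerEvents ServerEvents { get; set; }

    public void Any(PostNotification request)
    {
        // Notify every client subscribed to the "home" channel.
        // With RedisServerEvents this call is relayed over Redis Pub/Sub,
        // so clients connected to any load-balanced App Server receive it.
        ServerEvents.NotifyChannel("home", "cmd.announce", request.Message);
    }
}
```

The calling Service doesn't know or care which App Server holds the subscription, which is what makes the fan-out transparent.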
Related
We're all being taught to use Dependency Injection in ASP.NET Core applications, but all of the examples I've seen so far on retrieving services via DI cover situations where the method that has the service reference injected is strictly bound to a specific HTTP request (HttpContext), e.g. MVC controllers and routing delegates.
Service location is warned against as an anti-pattern, but I'm not sure how to obtain a proper service reference (e.g. a DbContext) via DI in code that is not bound to a specific HTTP request, e.g. code that has to respond to messages arriving over a websocket.
Although the websocket itself is initially set up by a specific HTTP request, messages will need responses over the potentially long lifetime of the websocket (as long as the user's web session lasts). The server should not reserve a DbContext/DB connection for this entire lifetime (that would quickly lead to exhaustion), but rather obtain a DB connection temporarily when a message arrives and requires a response, discarding the DbContext/connection immediately afterwards, even though the original HTTP request that set up the websocket at the very beginning of the user session is technically still there.
I haven't been able to find anything else but using:
httpContext.RequestServices.GetService(typeof(MyNeededDbContext))
This way I use the initial HttpContext (obtained via DI when the websocket was set up), and at multiple points after that, whenever a websocket message needs a response, I can request a transient service object (a DbContext in this example) that may be recycled or pooled after the message response is complete, while the original HttpContext is still very much alive.
Anyone aware of a better approach?
You can create a new service scope to manage the lifetime of services yourself:
IServiceProvider provider = ...;
using (var scope = provider.CreateScope())
{
    var context = scope.ServiceProvider.GetService<MyNeededDbContext>();
    ...
}
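Applied to the websocket scenario above, you can capture the singleton IServiceScopeFactory when the socket is accepted and create a short-lived scope per message; a sketch, where SocketMessageHandler is an illustrative name and MyNeededDbContext is the hypothetical scoped service from the question:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public class SocketMessageHandler
{
    private readonly IServiceScopeFactory _scopeFactory;

    // IServiceScopeFactory is a root-level singleton, so it's safe to hold
    // for the socket's whole lifetime (unlike a scoped DbContext)
    public SocketMessageHandler(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public async Task HandleMessageAsync(string message)
    {
        using (var scope = _scopeFactory.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<MyNeededDbContext>();
            // ... use db to build and send the response ...
        } // scope disposed: the DbContext and its DB connection are released here
    }
}
```

This avoids holding the original HttpContext at all; each message pays only for the brief scope it creates.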
I am building a ServiceStack service that runs on several dozen embedded devices. I'd like to ensure that all communication to the device occurs over an encrypted channel. I've researched various SSL/TLS options, but managing several dozen different certs, or publishing a single cert to dozens of devices, seems like a lot of overhead.
I've been looking at the Encrypted Messaging feature, but it appears that this only offers a transparent overlay, which would allow either a plain DTO or an encrypted DTO to be sent.
Is there any way to restrict my endpoints to ONLY accept EncryptedMessage DTOs, while preserving the ability to process them internally? Perhaps some sort of filter that can tell the DTO originally came from an EncryptedMessage?
I've considered the Service Gateway, but it seems like I'd have to have two separate AppHosts - one to receive the encrypted data and one (internal only) to process & respond. Seems like there should be a better way.
I've just marked Encrypted Messaging Requests as Secure in this commit which will allow you to use the Restricting Services Attribute to ensure only secure Requests are made with:
[Restrict(RequestAttributes.Secure)]
public class SecureOnlyServices { }
[Restrict(RequestAttributes.InSecure | RequestAttributes.InternalNetworkAccess,
RequestAttributes.Secure | RequestAttributes.External)]
public class InternalHttpAndExternalSecure { }
This change is available from v4.5.13 that's now available on MyGet.
Earlier versions of ServiceStack can check the IRequest.Items dictionary to determine if it's an Encrypted Messaging Request with:
var isEncryptedMessagingRequest = base.Request.Items.ContainsKey("_encryptCryptKey");
if (!isEncryptedMessagingRequest)
    throw HttpError.Forbidden("Only secure requests allowed");
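Rather than repeating this check in each Service, the same test can be enforced across every endpoint with a global Request Filter registered in the AppHost; a sketch, assuming a standard AppHost.Configure() setup:

```csharp
// In AppHost.Configure(): reject any request that did not arrive
// via the Encrypted Messaging feature.
GlobalRequestFilters.Add((req, res, requestDto) =>
{
    if (!req.Items.ContainsKey("_encryptCryptKey"))
        throw HttpError.Forbidden("Only secure requests allowed");
});
```

This gives the "reject everything unencrypted" behavior without a second AppHost or Service Gateway.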
I just got started with Application Insights and wanted to highlight dependencies between different operations. Currently I am using this code:
using (var x = telemetry.StartOperation<DependencyTelemetry>("my TEst"))
{
    x.Telemetry.Type = "SQL";
}
Setting the Telemetry.Type to "SQL" makes the dependency appear as SQL DB, which is fine and exactly what I want.
But I could not find any information about what other "Types" are supported here and what their exact type strings would be.
E.g. Blob Stores? Web APIs?
thanks in advance,
-gerhard
There's no limitation that I'm aware of.
Some dependencies are reported automatically by the SDK (such as SQL, Ajax), so these will get a pretty name in Application Map, but you can put there whatever makes sense in your application's BL.
The list of out-of-the-box dependency types Application Insights collect right now can be found here, although the documentation does not contain the dependency type string that you're interested in.
Non definitive list from my own experience:
SQL
HTTP
Azure queue
Azure table
Azure blob
Azure DocumentDb
Ajax
Redis
Azure Service Bus
MySQL
Azure IoT Hub
Azure Event Hubs
These are the dependency types that get custom icons in Application Map:
- SQL
- Custom HTTP types, based on the following criteria:
1. Azure blob: when host name ends with blob.core.windows.net
2. Azure table: when host name ends with table.core.windows.net
3. Azure queue: when host name ends with queue.core.windows.net
4. Web Service: when the URL ends with .asmx or contains .asmx/
5. WCF Service: when the URL ends with .svc or contains .svc/
- All other HTTP or AJAX
Going forward, the list will be extended with other dependency types that will get custom icons in Application Map.
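These type strings can also be reported manually via TrackDependency, instead of setting Telemetry.Type inside StartOperation; a sketch, where the dependency name and data values are illustrative:

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();
var start = DateTimeOffset.UtcNow;
var timer = Stopwatch.StartNew();
var success = true;
try
{
    // ... call the dependency (blob read, queue send, etc.) ...
}
catch
{
    success = false;
    throw;
}
finally
{
    timer.Stop();
    // "Azure blob" is one of the type strings from the list above
    telemetry.TrackDependency("Azure blob", "GetReport",
        "reports/latest.csv", start, timer.Elapsed, success);
}
```

Since the type string is free-form, anything not on the list still shows up in Application Map, just without a custom icon.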
I have a web application that is a mesh of a few different servers, where one server is the front-end server that handles all external incoming requests.
Some of these requests will have to be passed along to different servers, and ideally the only things I want to change are the host and URI fields of each request. Is there a way to map an entire incoming request to a new outgoing request and just change a few fields?
I tried something like this:
// some controller
public HttpResponseMessage Get()
{
    return this.Request.Rewrite("192.168.10.13/api/action");
}

// extension method Rewrite
public static HttpResponseMessage Rewrite(this HttpRequestMessage requestIn, string uri)
{
    HttpClient httpClient = new HttpClient(new HttpClientHandler());
    HttpRequestMessage requestOut = new HttpRequestMessage(requestIn.Method, uri);
    requestOut.Content = requestIn.Content;
    var headerCollection = requestIn.Headers.ToDictionary(x => x.Key, y => y.Value);
    foreach (var i in headerCollection)
    {
        requestOut.Headers.Add(i.Key, i.Value);
    }
    return httpClient.SendAsync(requestOut).Result;
}
The issue is that this approach has a whole slew of problems. If the request is a GET, Content shouldn't be set. The headers are incorrect since it also copies things like Host, which shouldn't be carried over to the new request, etc.
Is there an easier way to do something like this?
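For reference, the two specific issues called out above (a body on a GET, the copied Host header) can be patched in the extension itself; this is a sketch only, since a real reverse proxy also has to handle hop-by-hop headers, streaming, and async flow:

```csharp
using System;
using System.Net.Http;

public static class RequestExtensions
{
    public static HttpResponseMessage Rewrite(this HttpRequestMessage requestIn, string uri)
    {
        var httpClient = new HttpClient(new HttpClientHandler());
        var requestOut = new HttpRequestMessage(requestIn.Method, uri);

        // Only forward a body for methods that can carry one
        if (requestIn.Method != HttpMethod.Get && requestIn.Method != HttpMethod.Head)
            requestOut.Content = requestIn.Content;

        foreach (var header in requestIn.Headers)
        {
            // Host must be recomputed for the new target, not copied over
            if (string.Equals(header.Key, "Host", StringComparison.OrdinalIgnoreCase))
                continue;
            requestOut.Headers.TryAddWithoutValidation(header.Key, header.Value);
        }

        return httpClient.SendAsync(requestOut).Result;
    }
}
```

TryAddWithoutValidation is used because HttpRequestMessage.Headers.Add throws on headers it classifies as content headers.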
I had to do this in C# code for a Silverlight solution once. It was not pretty.
What you're wanting is called reverse proxying and application request routing.
First, reverse proxy solutions... they're relatively simple.
Here are Scott Forsyth's and Carlos Aguilar Mares' guides for creating a reverse proxy using web.config under IIS.
Here's a module some dude named Paul Johnston wrote if you don't like the normal solution. All of these focus on IIS.
Non-IIS reverse proxies are more common for load balancing. Typically they're Apache based or proprietary hardware. They vary from free to expensive as balls. Forgive the slang.
To maintain consistency from the client's perspective you may need more than just a reverse proxy configuration. So before you go down the pure reverse proxy route... there are some considerations.
The servers likely need to share Machine Keys to synchronize view state and other stuff, and share the Session Store too.
If that's not consistent enough, you may want to implement session stickiness through Application Request Routing (look for Server Affinity), such that a given session cookie (or IP address, or maybe have it generate a token cookie) maps the user to the same server on every request.
I also wrote a simple but powerful reverse proxy for asp.net / web api. It does exactly what you need.
You can find it here:
https://github.com/SharpTools/SharpReverseProxy
Just add it to your project via NuGet and you're good to go. You can even modify the request or response on the fly, or deny forwarding due to an authentication failure.
Take a look at the source code, it's really easy to implement :)
We have a web service using ServiceStack (v3.9.60) that currently gets an average (per New Relic monitoring) of 600 requests per minute (load balanced across two Windows 2008 web servers).
The actual time spent in the coded request Service (including the Request Filter) averages about 5ms (from what we see in recorded log4net logs). It offloads the request to an ActiveMQ endpoint and has ServiceStack automatically generate a 204 (Return204NoContentForEmptyResponse enabled with "public void Post(request)").
On top of that we have:
PreRequestFilters.Insert(0, (httpReq, httpRes) =>
{
    httpReq.UseBufferedStream = true;
});
since we use the raw body to validate a salted hash value (passed as a custom header) during a Request Filter, to verify the request comes from a correct source.
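For context, the validation that the buffered stream enables looks roughly like this (a sketch; "X-Signature" and ComputeSaltedHash are illustrative names, not our actual implementation):

```csharp
// Registered in AppHost.Configure() (ServiceStack v3 filter signature).
// GetRawBody() can only return the full body because the Pre-Request
// Filter above set httpReq.UseBufferedStream = true.
this.RequestFilters.Add((httpReq, httpRes, requestDto) =>
{
    var rawBody = httpReq.GetRawBody();
    var expected = httpReq.Headers["X-Signature"];
    if (ComputeSaltedHash(rawBody) != expected)
    {
        httpRes.StatusCode = (int)System.Net.HttpStatusCode.Forbidden;
        httpRes.Close();
    }
});
```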
Overall we see in New Relic that the whole web service call takes around 700ms on average, which is a lot compared to the 5ms it actually takes to perform the coded process. Looking deeper into the data New Relic reports, we saw that some requests periodically take quite some time (10-150 seconds per request). Drilling down in New Relic's reporting, we see that applying the Pre-Request Filter takes the time (see image below). We were wondering why this could be the case, whether it is related to the buffered stream on the HTTP Request object, and what could possibly be done to correct it.
EDIT
Have been playing around with this some and still haven't found an answer.
Things done:
Moved the Virtual Folder out from a sub-folder location of the actual site folder (there are about 11 other Web Services located under this site)
Assigned this Web Service to use its own Application Pool so it is not shared with the main site and other Web Services under the site
Added the requirement to Web.Config for usage of Server GC as Phil suggested
Disabled the pre-request filter that turned on the usage of buffered stream (and bypass the code that used the RawBody)
Added more instrumentation to New Relic for a better drill-down (see image below)
I'm starting to wonder if this is a Windows Server/IIS limitation due to load, but I would like to hear from someone more familiar with this.