We have an application composed of several services (C#, .NET Core) running locally on a Windows PC.
I now need some mechanism to notify all interested services when data changes in one service (essentially an observer pattern for microservices, or an MQTT-style pub/sub mechanism between C# / .NET Core microservices running locally on a Windows PC).
At first I wanted to use sockets, but the Microsoft documentation recommends SignalR instead.
So here is what I have so far:
public class Startup
{
    public Startup()
    {
        // empty
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // Add services.
        // Test bidirectional communication (pub/sub pattern over SignalR groups).
        services.AddSignalR();
        // Add the localization services to the services container.
        services.AddLocalization(options => options.ResourcesPath = "Properties");
        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseStaticFiles();

        // Use sessions.
        // The order of middleware is important:
        // an InvalidOperationException occurs when UseSession is invoked after UseMvc.
        app.UseSession();

        // Test bidirectional communication (pub/sub pattern over SignalR groups).
        // The SignalR Hubs API enables you to call methods on connected clients from the server.
        // In the server code, you define methods that are called by clients. In the client code, you define methods that are called from the server.
        app.UseSignalR(routes =>
        {
            routes.MapHub<SignalRHub>("/SignalRHub");
        });

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
That is the Startup class for the .NET Core service.
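The SignalRHub class mapped above is not shown here; a minimal group-based hub for this kind of pub/sub could look like the following sketch (the Subscribe/Publish method names, the string payload, and the "OnDataChanged" client callback are assumptions, not part of the original code):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class SignalRHub : Hub
{
    // A service subscribes to a topic by joining the corresponding SignalR group.
    public Task Subscribe(string topic) =>
        Groups.AddToGroupAsync(Context.ConnectionId, topic);

    public Task Unsubscribe(string topic) =>
        Groups.RemoveFromGroupAsync(Context.ConnectionId, topic);

    // A service publishes a change; all subscribers of the topic get the "OnDataChanged" callback.
    public Task Publish(string topic, string payload) =>
        Clients.Group(topic).SendAsync("OnDataChanged", topic, payload);
}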
But I now also need a client for the classic C# System.Web.Http.ApiController side, and I cannot find an example.
Seems some are confused by our "beautiful" architecture ;-)
I hope the following picture makes it clearer:
So, if Application 1 changes data in Microservice 2, then Application 2 has to be informed.
And again, this is all running locally on a Windows PC; no cloud is involved.
I'm probably missing something from your description.
SignalR is fine if there are clients to report relevant information to.
In your scenario, however, it would seem that the clients are the APIs themselves, and that makes little sense to me.
Maybe there's a piece missing in the middle that does the work you're describing.
In any case, you can find the relevant technical information about the SignalR .NET client in the official documentation:
https://learn.microsoft.com/en-us/aspnet/core/signalr/dotnet-client?view=aspnetcore-3.1&tabs=visual-studio
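Based on that documentation, a minimal sketch of the .NET client (package Microsoft.AspNetCore.SignalR.Client, which targets .NET Standard 2.0 and can therefore also be used from classic .NET Framework services) could look like this; the hub URL and the "Subscribe"/"OnDataChanged" names are assumptions and must match whatever the hub actually defines:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public static class SignalRClientExample
{
    public static async Task RunAsync()
    {
        // Assumed local hub address; must match the MapHub route of the .NET Core service.
        var connection = new HubConnectionBuilder()
            .WithUrl("http://localhost:5000/SignalRHub")
            .Build();

        // Handler for server-to-client calls (the name must match what the hub sends).
        connection.On<string, string>("OnDataChanged", (topic, payload) =>
            Console.WriteLine($"Data changed on '{topic}': {payload}"));

        await connection.StartAsync();

        // Client-to-server call (the name must match a hub method).
        await connection.InvokeAsync("Subscribe", "orders");
    }
}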
Related
I am trying to integrate Prometheus into my C# .NET Core console application. I am not developing an ASP.NET Core application. How do I send the metrics data to Prometheus the way we usually do for an ASP.NET Core application?
In an ASP.NET Core application, you open Startup.cs and update ConfigureServices and Configure to look something along the lines of:
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<MetricReporter>();
    services.AddControllers();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Other middleware components omitted for brevity.
    // Make sure these calls are made before the call to UseEndpoints.
    app.UseMetricServer();
    app.UseMiddleware<ResponseMetricMiddleware>();
    app.UseEndpoints(endpoints => { endpoints.MapControllers(); });
}
How can I do this for a .NET Core console application?
You can use the prometheus-net package, which provides some useful features for integrating .NET and Prometheus.
According to the documentation, you can start a stand-alone Kestrel server for console apps that do not have any accessible HTTP endpoints.
In order to do that, the Sdk attribute value in the project file must end with .Web (i.e. <Project Sdk="Microsoft.NET.Sdk.Web">).
After that you need to start the Kestrel metric server:
var metricServer = new KestrelMetricServer(port: 1234);
metricServer.Start();
Another way is to simply use the standalone HTTP handler as follows:
var metricServer = new MetricServer(port: 1234);
metricServer.Start();
The default configuration will publish metrics on the /metrics URL.
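Putting the pieces together, a minimal console-app sketch with prometheus-net could look like this (the metric name, port, and work loop are arbitrary examples):

using System;
using Prometheus;

class Program
{
    static void Main()
    {
        // Expose http://localhost:1234/metrics for Prometheus to scrape.
        var metricServer = new MetricServer(port: 1234);
        metricServer.Start();

        // A custom metric updated by the application code.
        var jobsProcessed = Metrics.CreateCounter(
            "console_jobs_processed_total", "Number of jobs processed by the console app.");

        while (true)
        {
            // ... do the actual work here ...
            jobsProcessed.Inc();
            System.Threading.Thread.Sleep(1000);
        }
    }
}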
MetricServer.Start() may throw an access denied exception on Windows if your user does not have the right to open a web server on the specified port. You can use the netsh command to grant yourself the required permissions:
netsh http add urlacl url=http://+:1234/metrics user=DOMAIN\user
Greetings, Stack Overflow community.
I have some questions about the software architecture I'm working on and would appreciate help with them.
The components of the app are the following:
Model project (.NET Core class library). Here I define the model classes and the database context.
Business project (.NET Core class library). It references the Model assembly and implements the business logic. It also contains a HostedService with code for talking to the microservices through EasyNetQ, using the Send/Receive and Request/Response patterns.
Web API project (.NET Core Web API app). It uses the Business assembly and provides the Web API features. This app is hosted on IIS 10.
Web frontend project (.NET Core Razor web app). It also uses the Business assembly and provides the web UI features. This app is hosted on IIS 10.
Some microservice apps that communicate with the Business assembly through EasyNetQ by receiving and sending messages. Every microservice runs as a single instance.
The Web API app and the web frontend app both run simultaneously, so we have two instances of the business logic assembly working at the same time, and both of them work with the same RabbitMQ queues.
So I'm afraid that one instance of the Business assembly may send a message to a microservice (IBus.Send) while the second instance of the Business assembly receives the reply from the microservice (IBus.Receive). As I understand it, this can cause a collision: the first Business instance waits for an answer and never gets it, while the second Business instance receives an answer it never asked for.
A bit of code.
Web api app startup:
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddBusiness(Configuration);
    ...
}
Web frontend app startup:
public void ConfigureServices(IServiceCollection services)
{
    services.AddBusiness(Configuration);
    ...
}
Business logic assembly startup:
public static IServiceCollection AddBusiness(this IServiceCollection services, IConfiguration configuration)
{
    ...
    services.AddSingleton(sp =>
    {
        var rabbitMqSettings = sp.GetRequiredService<IOptions<RabbitMqSettings>>();
        return RabbitHutch.CreateBus(rabbitMqSettings.Value.Connection);
    });
    services.AddHostedService<RabbitMessagesReceiverService>();
    return services;
}
Business logic assembly EasyNetQ code examples:
public class RabbitMessagesReceiverService : BackgroundService
{
    readonly IBus _bus;

    public RabbitMessagesReceiverService(IBus bus)
    {
        _bus = bus;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Receives messages from the microservice.
        _bus.Receive<OutgoingResult>(RabbitHelper.OUTGOING_RESPONSE, async response =>
        {
            ...
        });

        return Task.CompletedTask;
    }
}
or
// sends message to microservice
await _bus.SendAsync<OutgoingRequest>(RabbitHelper.OUTGOING_REQUEST, new OutgoingRequest
{
...
});
I'm setting up a proof of concept featuring two ASP.NET Core applications that are both instrumented with Jaeger to demonstrate how it can propagate a trace between services over the wire. Both applications are being deployed to Azure App Services.
I'm using the OpenTracing Contrib package to automatically inject the Jaeger trace context into my inter-service traffic in the form of HTTP Headers (the package is hardcoded to use that form of transmission). But it appears that those headers are going missing along the way, as the receiving application is unable to resume the tracing context.
Before deploying to Azure, I'm testing the applications locally with Docker Compose, and with that setup the context propagation works fine. It's only once the apps are in Azure that things break.
The applications communicate over HTTPS and I've disabled HSTS and HTTPS redirection in case that might be causing Azure to drop the headers, based on the answer in this previous thread.
I've also tried running both applications in Azure Container Instances, and that seems to be a non-starter - it doesn't fix the context propagation and seems to introduce more bugs around span relationships.
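A simple way to check whether the header really disappears is a temporary middleware on the receiving service that logs the incoming Jaeger context header. This is just a debugging sketch ("uber-trace-id" is the Jaeger client's default propagation header name) and would go early in Configure:

app.Use(async (context, next) =>
{
    // Log whether the Jaeger trace context header arrives at all.
    var traceHeader = context.Request.Headers["uber-trace-id"].ToString();
    Console.WriteLine($"Incoming uber-trace-id: '{traceHeader}'");
    await next();
});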
The two applications are nearly identical in their setup, and differ only in the API endpoints they serve.
My CreateWebHostBuilder method from Program.cs:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureServices(services =>
        {
            // Registers and starts Jaeger (see Shared.JaegerServiceCollectionExtensions).
            services.AddJaeger(CheckoutConfiguration.JaegerSettings.Host);
            // Enables OpenTracing instrumentation for ASP.NET Core, CoreFx, EF Core.
            services.AddOpenTracing();
        });
The contents of the AddJaeger extension method, which is largely borrowed from the Contrib sample:
public static IServiceCollection AddJaeger(this IServiceCollection services, string jaegerHost = "localhost")
{
    if (services == null)
        throw new ArgumentNullException(nameof(services));

    services.AddSingleton<ITracer>(serviceProvider =>
    {
        string serviceName = Assembly.GetEntryAssembly().GetName().Name;
        ILoggerFactory loggerFactory = serviceProvider.GetRequiredService<ILoggerFactory>();
        ISampler sampler = new ConstSampler(sample: true);

        var reporter = new RemoteReporter.Builder()
            .WithSender(new UdpSender(jaegerHost, 6831, 0))
            .Build();

        ITracer tracer = new Tracer.Builder(serviceName)
            .WithLoggerFactory(loggerFactory)
            .WithReporter(reporter)
            .WithSampler(sampler)
            .Build();

        GlobalTracer.Register(tracer);
        return tracer;
    });

    var jaegerUri = new Uri($"http://{jaegerHost}:14268/api/traces");

    // Prevent endless loops when OpenTracing is tracking HTTP requests to Jaeger.
    services.Configure<HttpHandlerDiagnosticOptions>(options =>
    {
        options.IgnorePatterns.Add(request => jaegerUri.IsBaseOf(request.RequestUri));
    });

    services.Configure<AspNetCoreDiagnosticOptions>(options =>
    {
        // We don't need to trace Prometheus scraping requests.
        options.Hosting.IgnorePatterns.Add(context => context.Request.Path.Equals("/metrics", StringComparison.OrdinalIgnoreCase));
    });

    return services;
}
My Startup.cs Configure method, to show I'm not doing anything weird with the headers (the metrics extensions are for prometheus-net):
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseHttpMetrics();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // Do release exception handling.
    }

    app.UseMetricServer();
    app.UseMvc();
}
I expect any calls from one application to the other to propagate the active Jaeger trace context. Instead, the two applications log their traces separately and no link can be discerned between them in the Jaeger UI.
Here's a screenshot of a trace that should have spanned both services, but instead only shows spans from the first service:
Maybe you should check whether the App Services you set up in a hurry are both in the same Azure resource group as the VM running the Jaeger all-in-one instance; otherwise, the second application might not be able to communicate with the Jaeger instance at all.
I have API controllers and MVC controllers in my .NET Core application.
How can I route the subdomain api.mysite.com to only the API controllers, and dashboard.mysite.com to the web application, all in the same project?
If you want to implement this in a single ASP.NET Core application, you can do something like this:
Make the API controllers available, say, at the path /Api. You can achieve this using routes, areas, or application branches.
Use a reverse proxy capable of URL rewriting (e.g. IIS on Windows, Nginx on Linux). Configure the reverse proxy so that requests arriving at api.mysite.com/path are forwarded to your application as /Api/path.
A remark:
If you want to generate URLs in your API controllers, you should remove the /Api prefix from the path to get correct URLs (and of course you have to configure your reverse proxy to append the necessary headers like X-Forwarded-Host, etc.). For this purpose you can use this simple middleware.
Update
As it was discussed in the comments, an application branch seems the best solution in this case because it enables separate pipelines for the MVC and API application parts.
Actually, it's very easy to define branches. All you need to do is to put a Map call at the beginning of your main pipeline in the Configure method of your Startup class:
public void Configure(IApplicationBuilder app)
{
app.Map("/Api", BuildApiBranch);
// middlewares for the mvc app, e.g.
app.UseStaticFiles();
// some other middlewares maybe...
app.UseMvc(...);
}
private static void BuildApiBranch(IApplicationBuilder app)
{
// middlewares for the web api...
app.UseMvc(...);
}
Now, when a request arrives and its path starts with /Api, the request gets "deflected" and goes through the branch pipeline (defined in BuildApiBranch method) instead of going through the main pipeline (defined in Configure method, following the Map call).
Some things to keep in mind:
When a request is "captured" by the branch, the prefix /Api is removed from the HttpContext.Request.Path property (and appended to HttpContext.Request.PathBase). So you need to define the API routes in the UseMvc call as if the request path had no prefix at all (see the sketch after this list).
Using this code you have two separate pipelines but they share the components registered in Startup.ConfigureServices. If this is undesired, it's possible to create separate DI containers for each of the pipelines. However, this is a somewhat advanced topic.
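To make that concrete, a possible sketch of the branch pipeline (conventional routing assumed; the route name, and the optional PathBase reset relating to the URL-generation remark above, are illustrative assumptions rather than the linked code):

private static void BuildApiBranch(IApplicationBuilder app)
{
    // Optional: Map has moved "/Api" into Request.PathBase. Clearing it makes
    // generated URLs relative to the public host (api.mysite.com) exposed by the reverse proxy.
    app.Use(async (context, next) =>
    {
        context.Request.PathBase = PathString.Empty;
        await next();
    });

    app.UseMvc(routes =>
    {
        // No "/Api" prefix in the template: the branch has already stripped it from the path.
        routes.MapRoute(
            name: "api_default",
            template: "{controller}/{action}/{id?}");
    });
}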
I have a simple WebApi2 app that handles various REST requests. It's essentially a front end for various CRUD operations on a SQL Server database. Up until now, though, I've never run it from outside of Visual Studio, and I usually don't do Windows-specific stuff, but here I am.
My goal is to build this web app's functionality into a Windows desktop application (or at least be able to control the web app from the Windows program), mostly so the user can start the web app, stop it, see who is connecting to it, etc., but I've got no idea how to go about connecting this particular set of dots. It's actually a pretty tough thing to google.
The web app part also needs to be told some things at startup (just strings, so if the answer(s) involve executing various command lines to tell the web app to start/stop/etc. and I can pass in what I need on a command line somehow, that's fine).
Ultimately, the goal is to hand the user an install program and he doesn't have to know there is a webserver involved unless he really wants to.
So how would I go about accomplishing this part? (If this question is too vague, tell me why and I'll modify it as necessary).
One of the good things about Web API is the ability to be hosted outside of a web server such as IIS. For example you could host it inside your Windows Forms application. Here's an article with detailed instructions on how to achieve this.
You would have a Startup class that will be used for bootstrapping:
public class Startup
{
    // This code configures Web API. The Startup class is specified as a type
    // parameter in the WebApp.Start method.
    public void Configuration(IAppBuilder appBuilder)
    {
        // Configure Web API for self-host.
        HttpConfiguration config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        appBuilder.UseWebApi(config);
    }
}
and then it's just a matter of starting the listener:
using (WebApp.Start<Startup>("http://localhost:8080"))
{
    Console.WriteLine("Web Server is running.");
    Console.WriteLine("Press any key to quit.");
    Console.ReadLine();
}
This will make your Web API available locally on port 8080, and your application can send HTTP requests to it.
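For the start/stop requirement, one approach (a sketch assuming the Microsoft.AspNet.WebApi.OwinSelfHost package; the wrapper class and base address are illustrative, and the base address is exactly the kind of string you would pass in at startup) is to keep the IDisposable returned by WebApp.Start and dispose it when the user stops the server:

using System;
using Microsoft.Owin.Hosting;

public class EmbeddedApiServer : IDisposable
{
    private IDisposable _server;

    // baseAddress comes from the desktop app, e.g. settings or a command-line argument.
    public void Start(string baseAddress)
    {
        if (_server == null)
            _server = WebApp.Start<Startup>(baseAddress);
    }

    public void Stop()
    {
        _server?.Dispose();
        _server = null;
    }

    public void Dispose() => Stop();
}

A button handler in the Windows Forms UI can then call Start("http://localhost:8080") and Stop(), and the user never has to know a web server is involved.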
So basically the keywords that you are looking for are: self hosting asp.net web api.