Site stops working after custom keep-alive functionality - C#

I recently deployed a site that runs in a shared hosting environment. The problem is that the site receives sporadic traffic: after 20 minutes of inactivity I assume the server shuts down the instance, so the first request after an idle period is often slow. So I decided to add functionality that loads the webpage every few minutes. I call the code in Global.asax with new CodeKeepAlive.KeepAliveManager().SetKeepAlive(); and this is the complete code:
public class KeepAliveManager
{
    Timer KeepAliveTimer;

    public void SetKeepAlive()
    {
        KeepAliveTimer = new Timer(DoKeepAliveRequest, null, new Random().Next(200000, 900000), Timeout.Infinite);
    }

    public void DoKeepAliveRequest(object state)
    {
        string TheUrl = "https://www.the_website_url.com";
        HttpWebRequest TheRequest = (HttpWebRequest)WebRequest.Create(TheUrl);
        HttpWebResponse TheResponse = (HttpWebResponse)TheRequest.GetResponse();
        KeepAliveTimer.Change(new Random().Next(200000, 900000), Timeout.Infinite);
    }
}
For some reason, since I added this functionality, the site locks up once in a while; after 30 seconds of load time, the server reports that the page can't be loaded. I also have error logging that triggers on Application_Error, but there are no logs.
Is there anything wrong with my code?

The issue is that in ASP.NET there is no reliable way to run background tasks without the risk of IIS killing your thread. Once a request is complete, IIS has no reason to keep any of the threads you spawn alive.
There are a few ways around this, such as using a simple service on the free tier of Amazon or Google Cloud to act as your heartbeat.
But assuming you just want to use your shared hosting, you can use something like Hangfire, which specializes in this, though it has its limitations. See the docs for getting started:
GlobalConfiguration.Configuration
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseSqlServerStorage("Database=Hangfire.Sample; Integrated Security=True;", new SqlServerStorageOptions
    {
        CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
        QueuePollInterval = TimeSpan.Zero,
        UseRecommendedIsolationLevel = true,
        UsePageLocksOnDequeue = true,
        DisableGlobalLocks = true
    })
    .UseBatches()
    .UsePerformanceCounters();

// Queue the job
BackgroundJob.Enqueue(() => Console.WriteLine("Hello, world!"));

// Run it
using (var server = new BackgroundJobServer())
{
    Console.ReadLine();
}
NOTE: Hangfire itself has its limitations: IIS will still shut down long-running threads, e.g. things that take longer than 90 or 180 seconds (I forget the exact limit). So make sure you queue your heartbeat task on each request that comes in. If you want to make sure you don't fire too many, you can add a header to the keep-alive request and check for it, roughly as in the sketch below.
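A minimal sketch of that idea, assuming Hangfire is already configured; the X-KeepAlive header name, the 5-minute delay, and the Heartbeat class are my own placeholders, not Hangfire APIs:

// In Global.asax.cs -- a minimal sketch, assuming Hangfire is configured.
// "X-KeepAlive" and the 5-minute delay are arbitrary choices, not Hangfire APIs.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Skip requests made by the heartbeat itself so it doesn't re-queue forever.
    if (Request.Headers["X-KeepAlive"] != null)
        return;

    // Schedule a delayed job that pings the site a few minutes from now.
    BackgroundJob.Schedule(() => Heartbeat.Ping(), TimeSpan.FromMinutes(5));
}

public static class Heartbeat
{
    public static void Ping()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://www.the_website_url.com");
        request.Headers.Add("X-KeepAlive", "1"); // marks it so Application_BeginRequest ignores it
        using (request.GetResponse()) { }
    }
}

Note this schedules a job on every non-heartbeat request, so in practice you would want to de-duplicate, for example with RecurringJob.AddOrUpdate, which replaces an existing job that has the same id.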
See this answer on the new .NET Core background tasks, which is applicable here because it relates to IIS.

Alternatively, add a small test program in PhantomJS that periodically (or at random intervals) takes a screenshot of the URL, and deploy it on the same server.
It will do that work for you.


Should I Have to Wait After Creating Team with Graph

I am using the MS Graph API from our web app to create an MS Teams team in clients' systems and set up a few folders. But I randomly get errors if I don't impose a hard-coded wait after creating the team. I call the following endpoints in the order shown:
//Create new Team and get basic info
POST teams
GET teams/{team-id}/primaryChannel
GET teams/{team-id}
GET teams/{team-id}/channels/{channel-id}/filesFolder
//Sometimes unknown users must be invited to join org as guest
POST invitations
//Everyone but the owner is added as a guest
POST teams/{team-id}/members
//This is done in a batch, because there is one folder per team guest + one for owner
POST groups/{team-id}/drive/items/{channel-folder-id}/children
//Team members' folders are permitted to them only. So all permissions are deleted and a single user added back
GET groups/{folder-id}/drive/items/{folder-id}/permissions
DELETE groups/{team-id}/drive/items/{folder-id}/permissions/{permission-id}
POST groups/{folder-id}/drive/items/{item-id}/invite
I will sporadically get Forbidden and/or Bad Request responses from:
POST teams/{team-id}/members
DELETE groups/{team-id}/drive/items/{item-id}/permissions/{permission-id}
Obviously the 403 statuses are bugs, because the app definitely has permission to perform the action.
Imposing a 60-second wait after creating the team seems to resolve this. However, I am currently testing on our own Teams environment and am concerned that clients with larger Teams setups will require a longer wait. I've seen other places where the documentation says you should wait up to 15 minutes before using a Team that was created from a Group (I am not sure if this applies to creating a normal Team, though).
Does anyone know what kind of latency I should be prepared for generally, and if there is any endpoint I can ping to see if the Team is ready for use?
Azure AD, Teams, and Exchange are all different systems that need some kind of synchronization, which sometimes takes time.
Whenever you create something in one of these systems, be prepared for it to take some time before you can access it.
One of the most awkward behaviours I came across is this: when you create a group through Exchange Remote PowerShell, you instantly get the group object back. This object has an Azure object ID. But if you immediately go to Graph and make a request for that group, you'll get a 404. A look in the Azure Portal also shows nothing. But if you wait some time (a minimum of 30 secs, but up to 20!! minutes), the group suddenly appears.
The same applies if you create a user in Azure through Graph. You'll get back an object with the Azure ID, but if you immediately try to add this user to a group or a directory role, you can also get an error; the delay here is normally somewhere below 2 secs, and I've never seen anything above 10 secs.
So for everything where I create something in Graph and immediately try to use it, I built a helper method that retries the call multiple times with a small timeout between attempts:
internal static class Multiple
{
    public static Task Try<TException>(int maxRetries, TimeSpan interval, Func<Task> task)
        where TException : Exception
    {
        return Try<TException>(maxRetries, interval, task, exception => true);
    }

    public static async Task Try<TException>(int maxRetries, TimeSpan interval, Func<Task> task, Func<TException, bool> isExpectedException)
        where TException : Exception
    {
        do
        {
            try
            {
                await task().ConfigureAwait(false);
                return;
            }
            catch (Exception ex) when (ex.GetType() == typeof(TException) && isExpectedException((TException)ex))
            {
                maxRetries--;
                if (maxRetries <= 0)
                    throw;

                await Task.Delay(interval);
            }
        } while (true);
    }
}
The usage of the class is as follows:
await Multiple.Try<ServiceException>(20, TimeSpan.FromSeconds(1), async () =>
{
    educationClass = await serviceClient.Education.Classes[groupId.ToString()].Request().GetAsync();
}, ex => ex.Error.Code == "Request_ResourceNotFound");
This helper calls the inner method up to 20 times, with a timeout of one second between attempts, and the thrown exception must have the given error code. If the number of retries is exceeded or a different error is thrown, the call rethrows the original exception, which must be handled at a higher level.
Simply be aware that behind the Graph interface a highly distributed system works and it sometimes needs some time to get everything in sync.
I tested this on my side and ran into the same issues as you. The 403 error should be a bug, as you mentioned, because I also had permission to do the operation. But for the case you mentioned of adding a guest user as owner, I tested it and got a Bad Request response; I think that one is by design.
Since your requests succeed after waiting 60 seconds, I think the solution is to add a loop in your code that retries the Graph API call. In the loop, if the request fails, wait 10 seconds and then request again (as Flydog57 mentioned in the comments). But you also need a mechanism to break out of the loop when the request keeps failing, to avoid an infinite loop. A minimal sketch follows.
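This is only a sketch of the bounded retry described above; AddMemberAsync is a placeholder for whatever Graph SDK call you are making, and the attempt cap and delay are arbitrary:

// A bounded retry loop: waits 10 seconds between failed attempts and
// gives up after maxAttempts so it can never loop forever.
// AddMemberAsync is a placeholder for your actual Graph call.
const int maxAttempts = 10;
for (int attempt = 1; ; attempt++)
{
    try
    {
        await AddMemberAsync(teamId, member); // e.g. POST teams/{team-id}/members
        break;                                // success: leave the loop
    }
    catch (ServiceException) when (attempt < maxAttempts)
    {
        // The team is probably not fully provisioned yet; wait, then retry.
        // On the last attempt the filter is false, so the exception propagates.
        await Task.Delay(TimeSpan.FromSeconds(10));
    }
}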

ASP.NET Core: Idle timeout between the calls with a delay

I have an ASP.NET Core Web API running under Kestrel (Ubuntu) and I've run into a strange situation:
When I run a series of the first 20 API calls, the first 3-5 calls are quite slow, and then the response time is fine.
Then I make a short pause (a minute or even less) and run the series of API calls again, and once more the first several calls are quite slow; only after the first 3-5 calls does the response time become acceptable.
Initially, I thought the issue is in the Kestrel configuration, so I made the following settings:
var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        options.Limits.MaxConcurrentConnections = 200;
        options.Limits.MaxConcurrentUpgradedConnections = 200;
        options.Limits.MaxRequestBodySize = 10000;
        options.Limits.MinRequestBodyDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.KeepAliveTimeout = TimeSpan.FromDays(2);
    })
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

host.Run();
It helped make my service faster, but the issue is still there.
The basic logic of the service is the following:
1) get the request object
2) parse it into an instance of a POCO class
3) call the DB with multiple SELECTs to get all the required data (for this I am using Dapper and the method that runs multiple SQL queries in one go; see the sketch after this list)
4) update the instance with the newly received data and insert the object into the DB
That's it.
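I assume the Dapper method in step 3 is QueryMultiple; here is a minimal sketch of that pattern (the table and type names are made up):

// A minimal sketch of Dapper's QueryMultiple, assuming that is the
// "multiple SELECTs in one go" method from step 3. Names are invented.
const string sql = @"
    SELECT * FROM Customers WHERE Id = @Id;
    SELECT * FROM Orders WHERE CustomerId = @Id;";

using (var connection = new SqlConnection(connectionString))
using (var multi = connection.QueryMultiple(sql, new { Id = id }))
{
    var customer = multi.Read<Customer>().Single();
    var orders = multi.Read<Order>().ToList();
}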
And I can't figure out what causes this delay (idle time).
I had a guess that maybe I should make some dummy calls to keep the service warm, so I added a singleton containing a timer job that queries the DB for some lookup data every minute. But it did not help.
Then I tried adding another timer job to query the data required in step 3 for just the first record in the DB, without the specific request parameters; it did not help either, and moreover, it started working more slowly.
I also added indexes on the table to make the SELECTs faster, and additionally added WITH(NOLOCK) to all the SELECTs, but it did not help.
Any ideas, guys?
When a query takes longer to execute than the configured timeout, it throws a timeout exception. You can work around this by setting CommandTimeout = 0; the command will then wait and respond only after execution completes.
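A quick sketch of what that looks like with Dapper, assuming the queries from step 3 (note that 0 disables the timeout entirely, so use it with care):

// Passing commandTimeout: 0 to Dapper disables the command timeout,
// so the query waits until execution finishes instead of throwing.
var rows = connection.Query<MyPoco>(
    "SELECT * FROM SomeTable WITH (NOLOCK) WHERE Id = @Id",
    new { Id = id },
    commandTimeout: 0);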
I was also thinking that maybe it's the connection string; here it is:
Data Source=mydbserver;Initial Catalog=db1;Persist Security Info=True;User ID=bigboy;Password=bigboy;multipleactiveresultsets=True; Max Pool Size=200; Pooling=true;Connection Timeout=30; Connection Lifetime=0;Min Pool Size=0;

Better way to post to an external api at a specific time everyday in Asp.Net Core C#

I have a requirement where an ASP.NET Core application (deployed to IIS) needs to send data to an external domain ("http://example.com/api/statistics") at a given time every day (only once a day; say 6 PM local time where the application is running). I am hesitant to place code anywhere (like in Startup.cs or Program.cs) that might create problems later. Something like the following. Your insights are highly appreciated. Thank you.
Task.Run(() =>
{
    while (true)
    {
        using (var client = new HttpClient())
        {
            var response = client.PostAsync("http://example.com/api/statistics",
                new StringContent(JsonConvert.SerializeObject("data"),
                    Encoding.UTF8, "application/json"));
        }
    }
});
There are a number of ways to approach this; however, the way I think works well is to make a controller with an action that does the post. That way anything can trigger the posting of statistics (you will want an authorization token of some sort so only things that are meant to trigger the action can do it).
So something like mysite/statistics/post?url=<destination url>&...any other options
Then you can use a scheduled task, or trigger it manually, or trigger it via a webhook, or any other mechanism. You can even still use a Task that waits for a particular time and then calls your hook. A rough sketch of such a controller follows.
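This is only a sketch; the route, the X-Api-Token header, and the token value are placeholder choices, not a prescribed API:

// A rough sketch, not a complete implementation. The header name and
// token source are placeholders; load the real token from configuration.
[ApiController]
[Route("statistics")]
public class StatisticsController : ControllerBase
{
    private static readonly HttpClient Client = new HttpClient();

    [HttpPost("post")]
    public async Task<IActionResult> Post(
        [FromQuery] string url,
        [FromHeader(Name = "X-Api-Token")] string token)
    {
        // Reject callers that don't present the shared secret.
        if (token != "expected-token-from-config")
            return Unauthorized();

        var content = new StringContent(
            JsonConvert.SerializeObject("data"), Encoding.UTF8, "application/json");
        var response = await Client.PostAsync(url, content);

        return StatusCode((int)response.StatusCode);
    }
}

A scheduled task (cron, Windows Task Scheduler, etc.) can then hit mysite/statistics/post once a day at 6 PM.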

Simple.OData BeforeRequest event not triggered

I'm using the Simple.OData adapter to try to connect to an OData service. The service needs authentication to connect.
I have registered the BeforeRequest event to set the necessary headers before doing any request.
However, my BeforeRequest handler is not triggered at all, which results in not being able to open the Context due to missing credentials, and my code hangs and waits forever.
See my code below; am I missing something?
public void GetData()
{
    var oDataFeed = new ODataFeed(ApiBaseUrl);
    oDataFeed.BeforeRequest += BeforeRequest;
    oDataFeed.AfterResponse += AfterResponse;

    Context = Database.Opener.Open(ApiBaseUrl);

    // do some more
}

private void BeforeRequest(HttpRequestMessage httpRequestMessage)
{
    // add headers.
}
It did seem to fire the event once; however, after a rebuild of the project it doesn't work anymore.
There is a known bug in Simple.OData.Client 3.x that affects request interception in certain scenarios. The bug is fixed in the forthcoming version 4 of Simple.OData.Client, currently available as a pre-release; it's very stable and comes with tons of new features, including support for JSON payloads and OData protocol V4.
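If you move to V4, the request interceptor can be attached through the client settings; this is a rough sketch from memory of the V4-style API, so check the project docs (the token variable is assumed):

// A rough sketch assuming Simple.OData.Client V4's settings-based API.
var settings = new ODataClientSettings(ApiBaseUrl)
{
    BeforeRequest = request =>
    {
        // Add authentication headers before every request.
        request.Headers.Add("Authorization", "Bearer " + token);
    }
};

var client = new ODataClient(settings);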

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
ok, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach (var stock in StockService.GetStocksWithBigNews())
{
    var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
    var connections = /* find connections associated with user ids */;

    foreach (var connection in connections)
    {
        connection.Send(...);
    }
}
In this question on filtering connections, they mention that I could keep current connections in memory, but (1) it's bad for scaling and (2) it's bad for multi-node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find the users connected to each node >> my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this too. I used 1.0 alpha and it was straightforward. I created a Hub and from there on it just worked :)
In my project I have N compute units (some servers processing work); when they start up they invoke ComputeUnitRegister:
await HubProxy.Invoke("ComputeUnitRegister", _ComputeGuid);
and every time they do something they call
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is:
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable ?
    RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint") :
    "http://taskqueue.cloudapp.net/");

IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure role, a console app, or whatever else in .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();

public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitRegister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();

        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have methods for these calls (let me know if you need to see that code too). Basically they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever the server compute unit is doing some work it will call Running, which triggers a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how groups and connections can be used. And lastly, it also scales out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
    serviceBusConnectionString,
    2,  // number of topics to split traffic across
    3,  // number of nodes
    GetRoleInstanceNumber(),
    topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string for the Service Bus on Azure; remember Provider=SharedSecret. When you add the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split the traffic across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out on. I used 3 because I have 2 Azure instances plus my localhost. You can get the role number like this (note that I hard-coded my localhost to 2):
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;

    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);

    return Int32.Parse(roleInstanceNo);
}
You can see it all live at : http://taskqueue.cloudapp.net/#/compute-units
When using SignalR, after a client has connected to the server they are given a connection ID (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even SQL Server backplane (with more to come), for example. So, long story short, the scale-out scenarios are taken care of for you via backplanes/service bus without you having to worry about it.
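A common way to reach a specific user without tracking connection IDs yourself is a group per user, which keeps working across nodes once a backplane is configured. A minimal sketch (the hub name, method names, and user lookup are my own, not from the answers above):

// A minimal sketch: one SignalR group per user, so stock updates can be
// pushed by userId without tracking connection IDs. Names are made up.
public class StockHub : Hub
{
    public override async Task OnConnected()
    {
        // Assumes the user is authenticated; the group name encodes the user id.
        await Groups.Add(Context.ConnectionId, "user-" + Context.User.Identity.Name);
        await base.OnConnected();
    }
}

// Elsewhere (outside any hub), e.g. in the stock-news job:
var hubContext = GlobalHost.ConnectionManager.GetHubContext<StockHub>();
foreach (var stock in StockService.GetStocksWithBigNews())
{
    foreach (var userId in UserService.GetUserIdsThatCareAboutStock(stock))
    {
        hubContext.Clients.Group("user-" + userId).stockUpdate(stock);
    }
}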
