Handle a function that takes a long time to execute (threading) - C#

I have a function in a web application that takes a long time to execute.
I have confirmed this with a profiler and with my own logging.
Other functions run in the same Page_Load.
What is the best way to display the results of those other functions right away, run this slow function on a separate thread, and display its result in a label when it finishes?
The function below counts events in the Application event log, which is what takes the time.
private void getEventErrors()
{
    EventLog eventLog = new EventLog("Application", ".");
    getEvents(eventLog.Entries);
}

private void getEvents(EventLogEntryCollection eventLogEntryCollection)
{
    int errorEvents = 0;
    foreach (EventLogEntry logEntry in eventLogEntryCollection)
    {
        if (logEntry.Source.Equals("XYZ"))
        {
            DateTime variable = Convert.ToDateTime(logEntry.TimeWritten);
            long eventTimeTicks = (variable.Ticks);
            // Convert ticks to Unix-epoch seconds (621355968000000000 ticks = 1970-01-01).
            long eventTimeUTC = (eventTimeTicks - 621355968000000000) / 10000000;
            long presentDayTicks = DateTime.Now.Ticks;
            // 864000000000 ticks = 24 hours, i.e. only count events from the last day.
            long daysBackSeconds = ((presentDayTicks - 864000000000) - 621355968000000000) / 10000000;
            if (eventTimeUTC > daysBackSeconds)
            {
                if (logEntry.EntryType.ToString() == "Error")
                {
                    errorEvents = errorEvents + 1;
                }
            }
        }
    }

    btn_Link_Event_Errors_Val.Text = errorEvents.ToString(GUIUtility.TWO_DECIMAL_PT_FORMAT);
    if (errorEvents == 0)
    {
        lbl_EventErrorColor.Attributes.Clear();
        lbl_EventErrorColor.Attributes.Add("class", "green");
    }
    else
    {
        lbl_EventErrorColor.Attributes.Clear();
        lbl_EventErrorColor.Attributes.Add("class", "red");
    }
}
I have three functions in the Page_Load event: two that get values from the DB and the one shown above.
Should these functions be service calls?
What I want is for the page to load fast; any function that takes a long time should run in the background and display its result when done, and if the user navigates to a new page in the meantime, the background work should be cancelled.

If you have a function that runs on a separate thread in ASP.NET, consider moving it into a service instead. There are many reasons for this; see this answer (one of many on SO) for why running long-running tasks inside ASP.NET is not always a good idea.
One option for the service is WCF (you can get started here). Your service could expose a method, say GetEvents(), that you call to pull your events, so the page is not tied up waiting for the work to complete (call it via AJAX, of course). This also lets you change the implementation of GetEvents() without touching the code on your website.
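To make that concrete, here is a minimal sketch of what such a WCF contract could look like; this is only an illustration, not the poster's code, and the contract name, the DTO and the daysBack parameter are assumptions:
using System;
using System.Diagnostics;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical contract for the service suggested above: the page calls it
// (e.g. via AJAX) instead of scanning the event log inside Page_Load.
[ServiceContract]
public interface IEventErrorService
{
    [OperationContract]
    EventErrorSummary GetEvents(int daysBack);
}

[DataContract]
public class EventErrorSummary
{
    [DataMember]
    public int ErrorCount { get; set; }
}

public class EventErrorService : IEventErrorService
{
    public EventErrorSummary GetEvents(int daysBack)
    {
        int errorCount = 0;
        DateTime cutoff = DateTime.Now.AddDays(-daysBack);
        EventLog eventLog = new EventLog("Application", ".");
        foreach (EventLogEntry entry in eventLog.Entries)
        {
            // Same filter as the original code: source, age and entry type.
            if (entry.Source == "XYZ"
                && entry.TimeWritten >= cutoff
                && entry.EntryType == EventLogEntryType.Error)
            {
                errorCount++;
            }
        }
        return new EventErrorSummary { ErrorCount = errorCount };
    }
}
The page can then fetch the count asynchronously and update the label when the call returns, instead of blocking Page_Load.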

Related

Check if the server side processing is done

I have a Unity app that needs to make a web request. There is an HTML page (not under my control, otherwise I'd have done this differently) which, when opened, does some server-side processing that may take 10~15 seconds to finish. I want to re-check the page every 10 seconds for the word "Done", which indicates the result is ready.
Here's my concept code:
IEnumerator MakeData()
{
    using (UnityWebRequest www1 = UnityWebRequest.Get(link))
    {
        yield return www1.Send();
        if (www1.isNetworkError || www1.isHttpError)
        {
            Debug.Log(www1.error);
        }
        else
        {
            string tt = "working";
            while (tt.Contains("working"))
            {
                yield return new WaitForSeconds(10);
                //how do I get new updated html?
                tt = html_after_10_seconds;
            }
            if (tt.Contains("Done")) { Process(tt); }
        }
        www1.Dispose();
    }
}
A few things:
Currently it gets the page while it is still "working". The Process function can extract the text I want from the HTML once it is ready. I have a good reason to use it this way. Also, this is an extremely niche thing, so it is very unlikely that many people will make this web request at the same time.
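For reference, one way the re-fetch inside that loop could work is to issue a fresh GET each iteration. This is only a sketch of the idea, inside the same MonoBehaviour; link, Process and the "working"/"Done" markers are taken from the concept code above:
// Sketch: re-request the page every 10 seconds until it no longer says "working".
IEnumerator PollUntilDone()
{
    while (true)
    {
        using (UnityWebRequest www = UnityWebRequest.Get(link))
        {
            yield return www.Send();
            if (www.isNetworkError || www.isHttpError)
            {
                Debug.Log(www.error);
                yield break;
            }

            string html = www.downloadHandler.text;
            if (html.Contains("Done"))
            {
                Process(html); // result is ready, extract the text
                yield break;
            }
        }
        yield return new WaitForSeconds(10); // still "working" - wait and try again
    }
}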
Sorry for bothering! Have a great day!

Executing part of code exactly 1 time inside Parallel.ForEach

I have to query my company's CRM solution (Oracle RightNow) for our 600k users and update them there if they exist, or create them if they don't. To know whether a user already exists in RightNow, I consume a third-party web service, and with 600k users this is a real pain because each call takes around 1 second to respond. So I changed my code to use Parallel.ForEach, querying each record in just 0.35 seconds and adding it to a List<User> of records to be created or updated (RightNow requires me to separate them into two lists and call two distinct web service methods).
My code ran perfectly before multithreading, but it took too long. The problem is that I can't make the batch too large or I get a timeout when I try to update or create via the web service. So I'm sending around 500 records at a time, and when the critical code part runs, it executes many times.
Parallel.ForEach(boDS.USERS.AsEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = -1 }, row =>
{
    ...
    user = null;
    user = QueryUserById(row["USER_ID"].Trim());
    if (user == null)
    {
        isUpdate = false;
        gObject.ID = new ID();
    }
    else
    {
        isUpdate = true;
        gObject.ID = user.ID;
    }
    ... fill user attributes as generic fields ...
    gObject.GenericFields = listGenericFields.ToArray();
    if (isUpdate)
        listUserUpdate.Add(gObject);
    else
        listUserCreate.Add(gObject);
    if (i == batchSize - 1 || i == (boDS.USERS.Rows.Count - 1))
    {
        UpdateProcessingOptions upo = new UpdateProcessingOptions();
        CreateProcessingOptions cpo = new CreateProcessingOptions();
        upo.SuppressExternalEvents = false;
        upo.SuppressRules = false;
        cpo.SuppressExternalEvents = false;
        cpo.SuppressRules = false;
        RNObject[] results = null;
        // <Critical_code>
        if (listUserCreate.Count > 0)
        {
            results = _service.Create(_clientInfoHeader, listUserCreate.ToArray(), cpo);
        }
        if (listUserUpdate.Count > 0)
        {
            _service.Update(_clientInfoHeader, listUserUpdate.ToArray(), upo);
        }
        // </Critical_code>
        listUserUpdate = new List<RNObject>();
        listUserCreate = new List<RNObject>();
    }
    i++;
});
I thought about using a lock or mutex, but that won't help me, since the other threads would just wait and then execute the same code afterwards. I need a way to execute that part of the code only ONCE, on only ONE thread. Is that possible? Can anyone shed some light?
Thanks and kind regards,
Leandro
As you stated in the comments, you're declaring the variables outside of the loop body. That's where your race conditions originate.
Take the variable listUserUpdate, for example. It's accessed concurrently by the parallel threads: while one thread is still adding to it in listUserUpdate.Add(gObject); another thread could already be resetting it in listUserUpdate = new List<RNObject>(); or enumerating it in listUserUpdate.ToArray().
You really need to refactor that code to
make each loop iteration as independent from the others as you can, by moving variables inside the loop body, and
access shared data in a synchronized way, using locks and/or concurrent collections (see the sketch below).
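A rough sketch of what that can look like for the batching part. This is not the poster's full code; QueryUserById, _service, boDS and RNObject come from the question, while BuildGenericObject and FlushBatch are hypothetical helpers:
// Thread-safe accumulation: each iteration only uses local variables and adds its
// result to concurrent queues; a lock guards the batch flush so a batch is only
// sent by one thread at a time.
var createQueue = new System.Collections.Concurrent.ConcurrentQueue<RNObject>();
var updateQueue = new System.Collections.Concurrent.ConcurrentQueue<RNObject>();
object flushLock = new object();
int batchSize = 500;

Parallel.ForEach(boDS.USERS.AsEnumerable(), row =>
{
    // Local variables only - no shared user/gObject/isUpdate fields.
    var user = QueryUserById(row["USER_ID"].Trim());
    var gObject = BuildGenericObject(row, user); // hypothetical: fills the generic fields

    if (user == null)
        createQueue.Enqueue(gObject);
    else
        updateQueue.Enqueue(gObject);

    if (createQueue.Count >= batchSize || updateQueue.Count >= batchSize)
    {
        lock (flushLock)
        {
            // FlushBatch is a hypothetical helper that drains the queues and calls
            // _service.Create / _service.Update; a thread that arrives after another
            // thread has just flushed simply sends a smaller (or empty) batch.
            FlushBatch(createQueue, updateQueue);
        }
    }
});

// Send whatever is left over after the loop completes.
FlushBatch(createQueue, updateQueue);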
You can use the Double-checked locking pattern. This is usually used for singletons, but you're not making a singleton here so generic singletons like Lazy<T> do not apply.
It works like this:
Separate out your shared data into some sort of class:
class QuerySharedData
{
    // All the write-once-read-many fields that need to be shared between threads

    public QuerySharedData()
    {
        // Compute all the write-once-read-many fields. Or use a static Create method if that's handy.
    }
}
In your outer class, add the following:
object padlock = new object();
volatile QuerySharedData data;
In your thread's callback delegate, do this:
if (data == null)
{
    lock (padlock)
    {
        if (data == null)
        {
            data = new QuerySharedData(); // this does all the work to initialize the shared fields
        }
    }
}
var localData = data;
Then use the shared query data from localData. By grouping the shared query data into a subordinate class, you avoid having to make its individual fields volatile.
More about volatile here: Part 4: Advanced Threading.
Update: my assumption here is that all the classes and fields held by QuerySharedData are read-only once initialized. If that is not true, for instance if you initialize a list once but add to it from many threads, this pattern will not work for you. You will have to consider using things like Thread-Safe Collections.
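For example (a minimal, generic illustration rather than anything from the question), a ConcurrentBag<T> lets many threads add items without explicit locking:
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Many threads adding to one collection safely.
var results = new ConcurrentBag<int>();
Parallel.For(0, 1000, i =>
{
    results.Add(i * i); // ConcurrentBag.Add is thread-safe
});
System.Console.WriteLine(results.Count); // prints 1000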

Restrict number of API calls per second with Thread.Sleep?

As always, I'm quite the noob, as I'm sure you will see from both my code and my question. For practice I'm currently writing a Xamarin.Android app for a game called Eve Online. People there mine resources from planets to make cash. These mines have to be reset at different intervals, and the real pros can have up to 30 characters doing it. Each character can have 5 planets, and usually there are at least 2 mines (extractors) on each, so there could be 300 timers going.
In my app you save your characters in an SQLite DB, and every hour an IntentService runs through the API and checks whether your timers have expired or not. This is how I do that:
public async Task PullPlanets(long KeyID, long CharacterID, string VCode, string CharName)
{
    XmlReader lesern = XmlReader.Create("https://api.eveonline.com/char/PlanetaryColonies.xml.aspx?keyID=" + KeyID + "&vCode=" + VCode + "&characterID=" + CharacterID);
    while (lesern.Read())
    {
        long planet = 0;
        string planetName;
        planet = Convert.ToInt64(lesern.GetAttribute("planetID"));
        planetName = lesern.GetAttribute("planetName");
        if ((planet != 0) && (planetName != null))
        {
            planets.Add(planet);
            planetNames.Add(planetName);
            await GetExpirationTimes(CharName, planet, planetName, KeyID, CharacterID, VCode);
        }
    }
    lesern.Close();
}

public async Task GetExpirationTimes(string CharName, long planetID, string planetName, long KeyID, long CharacterID, string VCode)
{
    string planet = planetID.ToString();
    XmlReader lesern = XmlReader.Create("https://api.eveonline.com/char/PlanetaryPins.xml.aspx?keyID=" + KeyID + "&vCode=" + VCode + "&characterID=" + CharacterID + "&planetID=" + planet);
    while (lesern.Read())
    {
        string expTime;
        expTime = lesern.GetAttribute("expiryTime");
        if ((expTime != null) && (expTime != "0001-01-01 00:00:00"))
        {
            allInfo.Add(new AllInfo(CharName, planetName, Convert.ToDateTime(expTime)));
        }
    }
    lesern.Close();
    SendOrderedBroadcast(stocksIntent, null);
}
After this, it sends the times back to my Activity, where they get added to an extractor. It seems to work pretty well, although I've only been able to test with 2 characters and a total of 14 extractors so far. An AlarmManager in the Activity calls the service every hour, and the service sends a notification. When the user opens the Activity, it pulls the list from the service, sorts it, and displays it. I would welcome input on whether this is the way to do it.
I do see a problem on the horizon, though. The Eve API blocks an app that exceeds 30 API calls per second, and I'm pretty sure someone with 30 characters would hit that. So I'm wondering if I should add something to delay each call once a certain number is reached? This is how I make the first XML call:
var table = db.Table<CharsList>();
foreach (var e in table)
{
    long KeyIDOut = Convert.ToInt64(e.KeyID);
    long CharIDOut = Convert.ToInt64(e.CharacterID);
    string VCodeOut = e.VCode.ToString();
    string navnOut = e.Name.ToString();
    PullPlanets(KeyIDOut, CharIDOut, VCodeOut, navnOut);
}
CheckTimes();
Is it viable to add something like this?
if (table.Count > 10)
{
    foreach (var e in table)
    {
        //start the first character's call
        Thread.Sleep(100);
    }
}
The service is an IntentService, so it is not running on the UI thread. I guess this would keep the calls under 30 a second, but I have never used Thread.Sleep and I'm wary of what else it could do to my code. Are there other things that could help me stay under the limit? Can this code handle 300 extractors?
I believe you are generally right in your approach. I had to do a similar thing for a reddit client I was writing, except their limit is about one call per second.
The only problem I see with your setup is that you assume Thread.Sleep sleeps for exactly the amount of time you give it. Spurious wakeups are possible in some cases, so I would suggest giving it a smaller value, saving the last time you accessed the service, and putting a loop around the sleep call that terminates once enough time has passed (see the sketch below).
Finally, if you are going to be firing up a lot of intent services for relatively short pieces of work, you might want to have a normal service with a worker thread handle the work - that way it only has to be created once, but it is still off the UI thread.
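A minimal sketch of that pacing loop, assuming a static "last call time" field and a minimum gap of 1000 ms / 30 between calls (all names here are made up for illustration):
using System;
using System.Threading;

static readonly object _paceLock = new object();
static DateTime _lastCall = DateTime.MinValue;
static readonly TimeSpan _minGap = TimeSpan.FromMilliseconds(1000.0 / 30.0);

// Call this right before each API request, e.g. before creating the XmlReader
// in PullPlanets or GetExpirationTimes.
static void WaitForNextSlot()
{
    lock (_paceLock)
    {
        // Loop around short sleeps until enough time has passed since the last call,
        // instead of trusting a single long Thread.Sleep.
        while (DateTime.UtcNow - _lastCall < _minGap)
        {
            Thread.Sleep(5);
        }
        _lastCall = DateTime.UtcNow;
    }
}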

Multi-threading for overload and thread name is empty

This situation might seem strange, but this is what I have to do:
I have a SharePoint portal, and there was a concern that retrieving user profiles might become too slow when a lot of people are online and performing that kind of action, so the decision was made to write a console application to test it.
The console application needs to simulate retrieving user profiles as if many different users were doing it at the same time.
And a log must be written.
The first question: is this kind of testing a good way to find out where exactly the problem is?
The other question is about my application, where I see some strange behavior:
public class Program
{
    static void Main(string[] args)
    {
        string filePath = @"C:\Users\User\Desktop\logfile.txt";
        string siteUrl = @"http://siteurl";
        int threads = 1;
        //Multiplicator multiplicator = new Multiplicator(filePath, siteUrl, threads);
        //Console.ReadLine();
        for (int i = 0; i < 100; i++)
        {
            Thread t = new Thread(Execute);
            t.Start();
        }
        Console.WriteLine("Main thread: " + Thread.CurrentThread.Name);
        // Simultaneously, do something on the main thread.
    }

    static void Execute()
    {
        for (int i = 0; i < 100; i++)
        {
            using (SPSite ospSite = new SPSite(@"http://siteurl"))
            {
                SPServiceContext serviceContext = SPServiceContext.GetContext(ospSite);
                UserProfileManager profileManager = new UserProfileManager(serviceContext);
                UserProfile userProfile = profileManager.GetUserProfile("User Name");
                string message = "Retrieved: " + userProfile.DisplayName + " on " + DateTime.Now + " by " + Thread.CurrentThread.Name;
                Console.WriteLine(message);
            }
        }
    }
}
So the problem is that the name of the thread is never written - why?
Thread.CurrentThread.Name is empty. Is that normal, or am I initializing the threads incorrectly? Many sources say it is done like this.
You have not set the name of the thread. You should do so before you start it, and you can incorporate the iteration number in the name, if you like:
Thread t = new Thread(Execute);
t.Name = "My Thread" + i.ToString();
t.Start();
Threads are not given names automatically. The name can only be set once; setting it again throws an InvalidOperationException.
MSDN Reference: Thread.Name
Incidentally, creating 100 threads is probably not a good idea under normal circumstances.
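If that ever becomes an issue, one common alternative (just a sketch, not part of the original answer, and it assumes Execute is changed to accept a worker id) is to queue the work on the thread pool as tasks:
using System.Collections.Generic;
using System.Threading.Tasks;

// Run the 100 simulated users as thread-pool tasks instead of 100 dedicated threads.
var tasks = new List<Task>();
for (int i = 0; i < 100; i++)
{
    int workerId = i; // capture the loop variable for the lambda
    tasks.Add(Task.Run(() => Execute(workerId)));
}
Task.WaitAll(tasks.ToArray());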
You need to give it a name. When you create the thread, just name it based on i:
Thread t = new Thread(Execute) { Name = i.ToString() };
OK, I will take your first question. No, there is a better way to do this:
Put the site under load. Maybe a friend can hit F5 all the time, or you run a batch file with 1000 lines of a curl GET.
Attach the Visual Studio debugger to the web server process.
Hit Break about 10 times and see where it stops most of the time. That is your hotspot/problem.
This is called the poor man's profiler. It is built into every Visual Studio ;-)
In general, it is easy to find such problems by profiling. There are even sophisticated tools for this.

Get Application Pool Uptime in C#

Is there a way I can determine how long an application pool (in IIS7) has been up (time since it started, or since the last restart) in C#?
DateTime.Now - Process.GetCurrentProcess().StartTime
Process.GetCurrentProcessInfo() doesn't exist.
Really stupid trick: in some class that everything uses, use a static constructor to remember your start time, and use an aspx page to retrieve it. Then compare it to the current time.
From the ASP.NET application, you can try TimeSpan uptime = (DateTime.Now - ProcessInfo.GetCurrentProcessInfo ().StartTime)
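As a small self-contained illustration of the process-start-time approach mentioned above (a sketch, not code from any of the answers):
using System;
using System.Diagnostics;

public static class PoolUptime
{
    // Uptime of the current worker process (w3wp.exe), i.e. time since the
    // application pool process was started or last recycled.
    public static TimeSpan Current
    {
        get { return DateTime.Now - Process.GetCurrentProcess().StartTime; }
    }
}

// Usage, e.g. in a page:
// lblUptime.Text = PoolUptime.Current.TotalMinutes.ToString("n0") + " minutes";
Note that this measures the worker process itself; as pointed out in a later answer, the process can survive application restarts (e.g. web.config changes).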
Based on the above, I created a simple class like so:
public static class UptimeMonitor
{
    static DateTime StartTime { get; set; }

    static UptimeMonitor()
    {
        StartTime = DateTime.Now;
    }

    public static int UpTimeSeconds
    {
        get { return (int)Math.Round((DateTime.Now - StartTime).TotalSeconds, 0); }
    }
}
and called it in Application_Start() in Global.asax.cs like this:
var temp = UptimeMonitor.UpTimeSeconds;
It can then be accessed anywhere using
UptimeMonitor.UpTimeSeconds
If you find that Process.GetCurrentProcessInfo() doesn't exist, as another user mentioned,
System.Diagnostics.Process.GetCurrentProcess().StartTime
may work for you.
(I wanted to add this as a comment to Eric Humphrey's post but I'm not allowed)
There are two approaches that I personally use: a static class (as shown in @Original10's answer) or Application variables.
I have found Application variables preferable because the process returned by Process.GetCurrentProcess() survives application restarts (e.g. modification of web.config or the bin directory), and I needed something that also accounts for the website itself restarting.
In your Global.asax, add the following to Application_Start - and add the method if it's not there:
public void Application_Start(Object sender, EventArgs e)
{
    ...
    Application["ApplicationStartTime"] = DateTime.Now.ToString("o");
}
In your code where you need it, you could do something like:
var appStartTime = DateTime.MinValue;
var appStartTimeValue = HttpContext.Current.Application["ApplicationStartTime"].ToString();
DateTime.TryParseExact(appStartTimeValue, "o", null, System.Globalization.DateTimeStyles.None, out appStartTime);
var uptime = (DateTime.Now - appStartTime).TotalSeconds;
var lsOutput = $"Application has been running since {appStartTime:o} - {uptime:n0} seconds.";
Which will produce something along the lines of
Application has been running since 2018-02-16T10:00:56.4370974+00:00 - 10,166 seconds.
There is no checking of the application variable or locking of the Application object; I'll leave that as an exercise for the reader.
If you mash up "Restarting (Recycling) an Application Pool" and http://forums.iis.net/t/1162615.aspx, you should get it.
