Restrict number of API calls per second with Thread.Sleep? - C#

As always, I'm quite the noob, as I'm sure you will see from both my code and my question. For practice I'm currently writing a Xamarin.Android app for a game called Eve Online. People there mine resources from planets to make cash. These mines have to be reset at different intervals, and the real pros can have up to 30 characters doing it. Each character can have 5 planets, and there are usually at least 2 mines (extractors) on each, so there could be 300 timers going.
In my app you save your characters in an SQLite DB, and every hour an IntentService runs through the API and checks whether your timers have expired or not. This is how I do that:
public async Task PullPlanets(long KeyID, long CharacterID, string VCode, string CharName)
{
    // Pull the list of planetary colonies for this character.
    XmlReader lesern = XmlReader.Create("https://api.eveonline.com/char/PlanetaryColonies.xml.aspx?keyID=" + KeyID + "&vCode=" + VCode + "&characterID=" + CharacterID);
    while (lesern.Read())
    {
        long planet = Convert.ToInt64(lesern.GetAttribute("planetID"));
        string planetName = lesern.GetAttribute("planetName");
        if ((planet != 0) && (planetName != null))
        {
            planets.Add(planet);
            planetNames.Add(planetName);
            await GetExpirationTimes(CharName, planet, planetName, KeyID, CharacterID, VCode);
        }
    }
    lesern.Close();
}

public async Task GetExpirationTimes(string CharName, long planetID, string planetName, long KeyID, long CharacterID, string VCode)
{
    string planet = planetID.ToString();
    // Pull the extractor pins for this planet and collect their expiry times.
    XmlReader lesern = XmlReader.Create("https://api.eveonline.com/char/PlanetaryPins.xml.aspx?keyID=" + KeyID + "&vCode=" + VCode + "&characterID=" + CharacterID + "&planetID=" + planet);
    while (lesern.Read())
    {
        string expTime = lesern.GetAttribute("expiryTime");
        if ((expTime != null) && (expTime != "0001-01-01 00:00:00"))
        {
            allInfo.Add(new AllInfo(CharName, planetName, Convert.ToDateTime(expTime)));
        }
    }
    lesern.Close();
    SendOrderedBroadcast(stocksIntent, null);
}
After this, it sends the times back to my Activity, where they get added to an extractor. It seems to work pretty fine, although I've only been able to test with 2 characters and a total of 14 extractors so far. An AlarmManager in the Activity calls the service every hour, and it sends a notification. When the user opens the Activity, it pulls the list from the service, sorts it, and displays it. I would welcome input on whether this is the right way to do it.
I do see a problem on the horizon, though. The Eve API blocks an app that surpasses 30 API calls per second. I'm pretty sure someone with 30 characters would do that. So I'm wondering if I should add something to delay each call once a certain number is passed? This is how I kick off the first XML call:
var table = db.Table<CharsList>();
foreach (var e in table)
{
    long KeyIDOut = Convert.ToInt64(e.KeyID);
    long CharIDOut = Convert.ToInt64(e.CharacterID);
    string VCodeOut = e.VCode.ToString();
    string navnOut = e.Name.ToString();
    PullPlanets(KeyIDOut, CharIDOut, VCodeOut, navnOut); // note: the async call is not awaited here
}
CheckTimes();
Is it viable to add something like this?
if (table.Count > 10)
{
    foreach (var e in table)
    {
        // start the first character's call
        Thread.Sleep(100);
    }
}
The service is an IntentService and not on the UI thread. I guess this would bring the calls under 30 a second, but I have never used Thread.Sleep and fear what else it could do to my code. Are there other things that could help me stay under the limit? Can this code handle 300 extractors?

I believe you are generally right in your approach. I had to do a similar thing for a reddit client I was writing, except their limit is one request per second or so.
The only problem I see with your setup is that you assume Thread.Sleep sleeps for exactly the amount of time you give it. Spurious wakeups are possible in some cases, so what I would suggest is that you give it a smaller value, save the last time you accessed the service, and then put a loop around the sleep call that terminates once enough time has passed.
Finally, if you are going to be firing up a lot of intent services for a relatively short amount of work, you might want to have a normal service with a thread to handle the work; that way it will only have to be created once, but it is still off the UI thread.
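To illustrate that sleep-in-a-loop idea, here is a minimal sketch of my own (not tested against the Eve API; the class name and the 35 ms gap, roughly 1 s / 30 with a little headroom, are my choices):
using System;
using System.Threading;

// Throttles callers to at most ~30 operations per second by enforcing a
// minimum gap between calls. The loop re-checks the clock after each short
// sleep, so an early wakeup simply results in another short sleep.
class RateGate
{
    private static readonly TimeSpan MinGap = TimeSpan.FromMilliseconds(35);
    private static DateTime _lastCall = DateTime.MinValue;
    private static readonly object _gate = new object();

    public static void WaitForTurn()
    {
        lock (_gate)
        {
            while (DateTime.UtcNow - _lastCall < MinGap)
            {
                Thread.Sleep(5); // small sleep; the loop decides when enough time has passed
            }
            _lastCall = DateTime.UtcNow;
        }
    }
}
You would then call RateGate.WaitForTurn() immediately before each XmlReader.Create(...) call in PullPlanets and GetExpirationTimes.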

Related

Trying to find a lock-less solution for a C# concurrent queue

I have the following code in C#:
(_StoreQueue is a ConcurrentQueue)
var S = _StoreQueue.FirstOrDefault(_ => _.TimeStamp == T);
if (S == null)
{
    lock (_QueueLock)
    {
        // try again
        S = _StoreQueue.FirstOrDefault(_ => _.TimeStamp == T);
        if (S == null)
        {
            S = new Store(T);
            _StoreQueue.Enqueue(S);
        }
    }
}
The system is collecting data in real time (fairly high frequency, around 300-400 calls/second) and puts it in bins (Store objects) that each represent a 5-second interval. The bins sit in a queue as they are written, and the queue gets drained as data is processed and written out.
So, when data arrives, a check is done to see if there is a bin for that timestamp (rounded to 5 seconds); if not, one is created.
Since this is quite heavily multi-threaded, the system goes with the following logic:
If there is a bin, it is used to put data in.
If there is no bin, a lock gets taken and within that lock the check is done again, to make sure another thread didn't create it in the meantime; if there is still no bin, one gets created.
With this system, the lock is only taken roughly once every 2k calls.
I am trying to see if there is a way to remove the lock, mostly because I'm thinking there has to be a better solution than the double check.
An alternative I have been thinking about is to create empty bins ahead of time, which would entirely remove the need for any locks, but the search for the right bin would become slower, as it would have to scan the list of pre-built bins to find the proper one.
Using a ConcurrentDictionary can fix the issue you are having. Here I assumed a type of double for your TimeStamp property, but it can be anything, as long as you make the ConcurrentDictionary key type match.
class Program
{
    // static so it is usable from Main (the original was an instance field, which would not compile)
    static ConcurrentDictionary<double, Store> _StoreQueue = new ConcurrentDictionary<double, Store>();

    static void Main(string[] args)
    {
        var T = 17d;
        // add the Store for timestamp 17 only if it does not already exist
        _StoreQueue.GetOrAdd(T, new Store(T));
    }

    public class Store
    {
        public double TimeStamp { get; set; }

        public Store(double timeStamp)
        {
            TimeStamp = timeStamp;
        }
    }
}
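One detail worth knowing (my addition, not part of the original answer): GetOrAdd(T, new Store(T)) constructs a Store even when the key is already present. The overload that takes a factory delegate, GetOrAdd(T, t => new Store(t)), defers construction until the key is actually missing, though the factory can still run more than once under contention, with only one result being kept.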

Handle a function taking a lot of time - Threading

I have a function which is taking a lot of time to execute in a web application.
I have verified this with a profiler and with my own logging.
I have other functions running in the same Page_Load.
What is the best way to display the values from those other functions right away, keep this slow function running on a background thread, and display its result in a label when it finishes?
This function gets error events from the Application event log, which takes time:
private void getEventErrors()
{
    EventLog eventLog = new EventLog("Application", ".");
    getEvents(eventLog.Entries);
}

private void getEvents(EventLogEntryCollection eventLogEntryCollection)
{
    int errorEvents = 0;
    foreach (EventLogEntry logEntry in eventLogEntryCollection)
    {
        if (logEntry.Source.Equals("XYZ"))
        {
            DateTime variable = Convert.ToDateTime(logEntry.TimeWritten);
            long eventTimeTicks = variable.Ticks;
            // 621355968000000000 ticks = 1970-01-01 and 10000000 ticks = 1 second,
            // so this converts the event time to Unix seconds.
            long eventTimeUTC = (eventTimeTicks - 621355968000000000) / 10000000;
            long presentDayTicks = DateTime.Now.Ticks;
            // 864000000000 ticks = 24 hours, so this is "one day ago" in Unix seconds.
            long daysBackSeconds = ((presentDayTicks - 864000000000) - 621355968000000000) / 10000000;
            if (eventTimeUTC > daysBackSeconds)
            {
                if (logEntry.EntryType.ToString() == "Error")
                {
                    errorEvents = errorEvents + 1;
                }
            }
        }
    }
    btn_Link_Event_Errors_Val.Text = errorEvents.ToString(GUIUtility.TWO_DECIMAL_PT_FORMAT);
    if (errorEvents == 0)
    {
        lbl_EventErrorColor.Attributes.Clear();
        lbl_EventErrorColor.Attributes.Add("class", "green");
    }
    else
    {
        lbl_EventErrorColor.Attributes.Clear();
        lbl_EventErrorColor.Attributes.Add("class", "red");
    }
}
I have 3 functions in the Page_Load event: two that get values from the DB, and the one shown above.
Should both of these be service calls?
What I want is for the page to load fast; if a function takes a lot of time, it should run in the background and display its result when done, and if the user navigates to a new page in the meantime, it should be killed.
If you have a function that is running in a separate thread in ASP.NET, you may want to consider moving it to a service. There are many reasons for this.
See this answer (one of many on SO) for why running long-running tasks in ASP.NET is not always a good idea.
One option for the service is to use WCF. You can get started here. Your service could implement a method, say GetEvents(), which you could use to pull your events. That way you won't tie up your page waiting for this process to complete (using AJAX, of course). Also, this allows you to change your implementation of GetEvents() without touching the code on your website.
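To make that concrete, here is a minimal sketch of what such a WCF contract might look like; the interface, class, and method names are my own placeholders, not anything the answer prescribes:
using System.ServiceModel;

[ServiceContract]
public interface IEventService
{
    [OperationContract]
    int GetErrorEventCount(); // e.g. error events from the last 24 hours
}

public class EventService : IEventService
{
    public int GetErrorEventCount()
    {
        // Move the EventLog-scanning loop from getEvents() here and
        // return the count instead of writing to UI controls.
        return 0; // placeholder
    }
}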

Multi-threading for overload and thread name is empty

This situation might seem strange, but this is what I have to do:
Situation: I have a SharePoint portal, and there was concern that retrieving user profiles might be too slow when a lot of people are online performing that kind of action, so the decision was made to write a console application to test it.
The console application needs to simulate the behavior of many different users retrieving user profiles at the same time.
And a log must be written.
The first question: is this kind of testing a good way to really find out where the problem is?
The other question is about my application, which shows strange behavior:
public class Program
{
    static void Main(string[] args)
    {
        string filePath = @"C:\Users\User\Desktop\logfile.txt";
        string siteUrl = @"http://siteurl";
        int threads = 1;
        //Multiplicator multiplicator = new Multiplicator(filePath, siteUrl, threads);
        //Console.ReadLine();
        for (int i = 0; i < 100; i++)
        {
            Thread t = new Thread(Execute);
            t.Start();
        }
        Console.WriteLine("Main thread: " + Thread.CurrentThread.Name);
        // Simultaneously, do something on the main thread.
    }

    static void Execute()
    {
        for (int i = 0; i < 100; i++)
        {
            using (SPSite ospSite = new SPSite(@"http://siteurl"))
            {
                SPServiceContext serviceContext = SPServiceContext.GetContext(ospSite);
                UserProfileManager profileManager = new UserProfileManager(serviceContext);
                UserProfile userProfile = profileManager.GetUserProfile("User Name");
                string message = "Retrieved: " + userProfile.DisplayName + " on " + DateTime.Now + " by " + Thread.CurrentThread.Name;
                Console.WriteLine(message);
            }
        }
    }
}
So the problem is I never get the name of the thread written. Why?
Thread.CurrentThread.Name is empty. Is that normal? Maybe I'm initializing the threading wrong, although many sources say it is done like this.
You have not set the name of the thread. You should do so before you start it, and you can incorporate the iteration number in the name, if you like:
Thread t = new Thread(Execute);
t.Name = "My Thread" + i.ToString();
t.Start();
Threads are not given names automatically. The name can only be set once; after that you would get an InvalidOperationException.
MSDN Reference: Thread.Name
Incidentally, creating 100 threads is probably not a good idea under normal circumstances.
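As a sketch of an alternative (assuming .NET 4.5+ and that Execute is adapted to take an int id for logging, both my assumptions), you can let the thread pool schedule the work instead of creating 100 raw threads:
// requires: using System.Threading.Tasks;
Task[] tasks = new Task[100];
for (int i = 0; i < tasks.Length; i++)
{
    int id = i; // copy the loop variable so each task captures its own value
    tasks[i] = Task.Run(() => Execute(id));
}
Task.WaitAll(tasks); // block until all simulated users finish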
You need to give it a name; when you create the thread, just name it based on i:
Thread t = new Thread(Execute) { Name = i.ToString() };
Ok, I will take your first question. No, there is a better way to do this:
Put the site under load. Maybe a friend can hit F5 all the time, or you run a batch file with 1000 lines of curl GETs.
Attach the Visual Studio debugger to the web server process.
Hit break 10 times and see where it stops most of the time. That is your hotspot/problem.
This is called the poor man's profiler. It is built into every Visual Studio ;-)
In general, it is easy to find such problems by profiling. There are even sophisticated tools for this.

Parallel.ForEach Taking Same Time as Foreach

All,
I am using Parallel.ForEach as follows:
private void fillEventDifferencesParallels(IProducerConsumerCollection<IEvent> events, Dictionary<string, IEvent> originalEvents)
{
    Parallel.ForEach<IEvent>(events, evt =>
    {
        IEvent originalEventInfo = originalEvents[evt.EventID];
        evt.FillDifferences(originalEventInfo);
    });
}
Ok, so the problem I'm having is I have a list of 28 of these (a test sample; this should be able to scale to 200+), and the FillDifferences method is quite time-consuming (about 4 s per call). The average time for this to run in a normal foreach has been around 100-130 s. When I run the same thing in parallel, it takes the same amount of time and spikes my CPU (Intel i5, 2 cores, 2 threads per core), causing the app to become sluggish while this query is running (it runs on a thread spawned by the GUI thread).
So my question is: what am I doing wrong that is causing this to take the same amount of time? I read that List wasn't thread-safe, so I rewrote this to use IProducerConsumerCollection. Are there any other pitfalls that may be causing this?
The FillDifferences method calls a static class that uses reflection to find out how many differences there are between the original and the modified object. The static class has no 'global' state, just variables local to the methods being invoked.
Some of you wanted to see what FillDifferences() calls. This is where it ends up ultimately:
public List<IDifferences> ShallowCompare(object orig, object changed, string currentName)
{
    List<IDifferences> differences = new List<IDifferences>();
    foreach (MemberInfo m in orig.GetType().GetMembers())
    {
        List<IDifferences> temp = null;
        // Go through all MemberInfos until you find one that is a Property.
        if (m.MemberType == MemberTypes.Property)
        {
            PropertyInfo p = (PropertyInfo)m;
            string newCurrentName = "";
            if (currentName != null && currentName.Length > 0)
            {
                newCurrentName = currentName + ".";
            }
            newCurrentName += p.Name;
            object propertyOrig = null;
            object propertyChanged = null;
            // Find the property information from the orig object.
            if (orig != null)
            {
                propertyOrig = p.GetValue(orig, null);
            }
            // Find the property information from the changed object.
            if (changed != null)
            {
                propertyChanged = p.GetValue(changed, null);
            }
            // Send the property to find the differences, if any. This is a SHALLOW compare.
            temp = objectComparator(p, propertyOrig, propertyChanged, true, newCurrentName);
        }
        if (temp != null && temp.Count > 0)
        {
            foreach (IDifferences difference in temp)
            {
                addDifferenceToList(differences, difference);
            }
        }
    }
    return differences;
}
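One thing that often dominates in code like this is the repeated reflection itself. As a sketch (my suggestion, not something the question or the answers below propose), the property list for each type can be cached so the member lookup runs once per type rather than once per comparison:
using System;
using System.Collections.Concurrent;
using System.Reflection;

static class PropertyCache
{
    // One PropertyInfo[] per type, computed on first use and reused afterwards.
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> _cache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();

    public static PropertyInfo[] For(Type t)
    {
        return _cache.GetOrAdd(t, type => type.GetProperties());
    }
}
ShallowCompare would then iterate PropertyCache.For(orig.GetType()) instead of filtering the result of GetMembers() on every call.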
I believe you may be running into the cost of thread context switching. Since these tasks are long-running, I can imagine many threads are being created on the ThreadPool to handle them (the pool injects roughly one new thread every 500 ms):
0ms == 1 thread
500ms == 2 threads
1000 ms == 3 threads
1500 ms == 4 threads
2000 ms == 5 threads
2500 ms == 6 threads
3000 ms == 7 threads
3500 ms == 8 threads
4000 ms == 9 threads
By 4000 ms only the first task has been completed, so this process will continue. A possible solution is as follows:
System.Threading.ThreadPool.SetMaxThreads(4, 4);
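An alternative worth noting (my addition; the answer above only adjusts the ThreadPool): cap the parallelism of the loop itself via ParallelOptions, rather than capping the whole pool:
private void fillEventDifferencesParallels(IProducerConsumerCollection<IEvent> events, Dictionary<string, IEvent> originalEvents)
{
    // Limit the loop to the number of physical cores so threads aren't
    // oversubscribed; 2 here matches the dual-core i5 from the question.
    var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
    Parallel.ForEach<IEvent>(events, options, evt =>
    {
        IEvent originalEventInfo = originalEvents[evt.EventID];
        evt.FillDifferences(originalEventInfo);
    });
}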
Looking at what it's doing, the only time your threads aren't doing anything is when the OS switches them out to give another thread a go, so the gain of being able to run on another core is offset by the cost of all the context switches.
You'd have to chuck some logging in to find out for definite, but I suspect the bottleneck is physical threads, unless you have one somewhere else you haven't posted.
If that's true, I'd be tempted to rejig the code: have two threads, one for finding properties to compare and one for comparing them, with a common queue. Maybe another one to throw classes in the list and collate the results.
Could be my old-time batch-processing head, though.

Dual-queue producer-consumer in .NET (forcing member variable flush)

I have a thread which produces data in the form of simple object (record). The thread may produce a thousand records for each one that successfully passes a filter and is actually enqueued. Once the object is enqueued it is read-only.
I have one lock, which I acquire once the record has passed the filter, and I add the item to the back of the producer_queue.
On the consumer thread, I acquire the lock, confirm that producer_queue is not empty, set consumer_queue to equal producer_queue, create a new (empty) queue, and set it as producer_queue. Without any further locking, I process consumer_queue until it's empty, then repeat.
Everything works beautifully on most machines, but on one particular dual-quad server I see, in roughly 1 in 500k iterations, an object that is not fully initialized when I read it out of consumer_queue. The condition is so fleeting that when I dump the object after detecting the condition, the fields are correct 90% of the time.
So my question is this: how can I ensure that the writes to the object are flushed to main memory when the queue is swapped?
Edit:
On the producer thread:
(producer_queue above is m_fillingQueue; consumer_queue above is m_drainingQueue)
private void FillRecordQueue()
{
    while (!m_done)
    {
        int count;
        lock (m_swapLock)
        {
            count = m_fillingQueue.Count;
        }
        if (count > 5000)
        {
            Thread.Sleep(60);
        }
        else
        {
            DataRecord rec = GetNextRecord();
            if (rec == null) break;
            lock (m_swapLock)
            {
                m_fillingQueue.AddLast(rec);
            }
        }
    }
}
In the consumer thread:
private DataRecord Next(bool remove)
{
    bool drained = false;
    while (!drained)
    {
        if (m_drainingQueue.Count > 0)
        {
            DataRecord rec = m_drainingQueue.First.Value;
            if (remove) m_drainingQueue.RemoveFirst();
            if (rec.Time < FIRST_VALID_TIME)
            {
                throw new InvalidOperationException("Detected invalid timestamp in Next(): " + rec.Time + " from record " + rec);
            }
            return rec;
        }
        else
        {
            lock (m_swapLock)
            {
                m_drainingQueue = m_fillingQueue;
                m_fillingQueue = new LinkedList<DataRecord>();
                if (m_drainingQueue.Count == 0) drained = true;
            }
        }
    }
    return null;
}
The consumer is rate-limited, so it can't get ahead of the producer.
The behavior I see is that sometimes the Time field reads as DateTime.MinValue; by the time I construct the string to throw the exception, however, it's perfectly fine.
Have you tried the obvious: is the microcode update applied on the fancy 8-core box (via BIOS update)? Did you run Windows Update to get the latest processor driver?
At first glance, it looks like you're locking your containers correctly, so I am recommending the systems approach, as it sounds like you're not seeing this issue on a good ol' dual-core box.
Assuming these are in fact the only methods that interact with the m_fillingQueue variable, and that DataRecord cannot be changed after GetNextRecord() creates it (read-only properties, hopefully?), the code at least on the face of it appears to be correct.
In which case I suggest that GregC's answer would be the first thing to check; make sure the failing machine is fully updated (OS / drivers / .NET Framework), because the lock statement should involve all the required memory barriers to ensure that the rec variable is fully flushed out of any caches before the object is added to the list.
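For completeness, a defensive sketch of my own (not something either answer prescribes): an explicit full fence between constructing the record and publishing it makes the intended ordering visible in the producer loop, even though the lock already implies the necessary barriers:
DataRecord rec = GetNextRecord();
if (rec == null) break;

// Full fence: every write that initialized rec completes before the
// reference becomes reachable from the shared queue.
Thread.MemoryBarrier();

lock (m_swapLock)
{
    m_fillingQueue.AddLast(rec);
}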
