I wrote a small program to calculate the total value of some items on the Steam market. Everything works fine, but the program needs around 2 minutes to start because it takes some time to get the price of each item.
Now I would like to know if I could change the code a little to optimize the program so it doesn't take 2 minutes to start; 1 minute or less would be great.
Between each request to the website where I get the prices, I have to wait a few seconds (around 3.5 sec seems to be the shortest that works), or I get an error because of too many requests, so I don't think I can change that part.
This is how I get the price from the website:
string urlsteammarkt = "https://steamcommunity.com/market/priceoverview/?appid=730&currency=3&market_hash_name=";

string Chroma()
{
    WebClient chroma = new WebClient();
    // get the information
    srcChroma = chroma.DownloadString(urlsteammarkt + "Chroma%20Case");
    // cut out the min price
    srcChroma = srcChroma.Remove(0, 32);
    srcChroma = srcChroma.Remove(srcChroma.IndexOf('\\'));
    // replace -- when it's a round price (e.g. 13€ is displayed as 13,--)
    srcChroma = srcChroma.Replace("--", "00");
    return srcChroma + "€\n";
}
The code is too long to post in full, so here is the rest of it:
https://pastebin.com/ZrTqjMd0
First of all: you are literally telling your program to sleep (just wait and do nothing) between requests.
And you need to spend some time learning programming fundamentals, especially OOP (https://www.w3schools.com/cs/cs_oop.asp).
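To make the OOP point concrete, here is a minimal sketch of a single parameterized method that could replace one hard-coded method per item. The class and method names are invented for the example; the parsing lines and the 3.5-second delay are taken from the question (the Remove-based parsing is fragile, and a real JSON parser would be more robust, but it is kept as-is here):

using System;
using System.Collections.Generic;
using System.Net;
using System.Threading;

class PriceFetcher
{
    const string BaseUrl =
        "https://steamcommunity.com/market/priceoverview/?appid=730&currency=3&market_hash_name=";

    readonly WebClient client = new WebClient(); // one client reused for every request

    public string GetPrice(string marketHashName)
    {
        string src = client.DownloadString(BaseUrl + marketHashName);
        src = src.Remove(0, 32);                 // strip the JSON prefix, as in the question
        src = src.Remove(src.IndexOf('\\'));     // cut off everything after the price
        return src.Replace("--", "00") + "€";    // 13,-- becomes 13,00
    }

    public IEnumerable<string> GetPrices(IEnumerable<string> names)
    {
        foreach (string name in names)
        {
            yield return GetPrice(name);
            Thread.Sleep(3500);                  // the rate limit from the question
        }
    }
}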
I am currently trying to make a program to find blocks of a specific color in a game save and move their position; however, with some of the bigger saves, my method of searching for the blocks can start to take a while. My current fastest method takes about 42 seconds to search for and move every block in a string about 1 MB in size. There are a lot of blocks in the save (roughly one every 50-300 characters in the string, around 7k in total), so I'm not sure if string search algorithms would speed this process up or slow it down.
So I was wondering if I could get any tips, or if anyone has ideas on how to further speed up my code; I would be very grateful.
progressBar2.Maximum = blueprint.Length;
int i = 0;
while (i < blueprint.Length - 15)
{
    progressBar2.Value = i;
    try
    {
        // if the next 110 characters contain no "color" key, jump ahead
        if (!blueprint.Substring(i, 110).ToLower().Contains("\"color\""))
        {
            i += 100;
        }
    }
    catch
    {
        return;
    }
    checkcolor(i, color, colortf, posset, axis);
    i++;
}
I am currently optimizing the checkcolor method, and it's the cause of most of the delay, but my current search runs it far more often than needed.
I've tried adding a second if to skip at an interval of 10 as well as 100, but that caused it to take over 2 minutes. I've also tried skip values other than 100, but 100 seems to be the fastest.
Edit: I was making 2 new temporary strings just to check for a small bit of text millions of times; it's a lot faster to use .IndexOf, which I did not know existed. Thanks for the help, and sorry if this was off topic.
I would compare efficiency without creating a substring and without calling ToLower():
if (blueprint.IndexOf("\"color\"", i, 110, StringComparison.OrdinalIgnoreCase) < 0)
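A sketch of how that might slot into the loop from the question (names like blueprint and checkcolor come from the question; the progress-bar updates and try/catch are omitted for brevity, and jumping straight to the match index is an extra tweak on top of the suggestion above):

int i = 0;
while (i < blueprint.Length - 15)
{
    int window = Math.Min(110, blueprint.Length - i);   // stay inside the string
    int hit = blueprint.IndexOf("\"color\"", i, window,
                                StringComparison.OrdinalIgnoreCase);
    if (hit < 0)
    {
        i += 100;        // no "color" key in this window, skip ahead as before
        continue;
    }
    checkcolor(hit, color, colortf, posset, axis);
    i = hit + 1;         // resume just past the match
}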
I have a landing page that plays a video. The client would like one video for half the users and a different video for the other half.
I was thinking about generating a random number; if it's even, then video 1, else video 2. I figure over time this should end up being roughly 50/50. Another approach was setting an Application variable within Application_Start and using that. I was also thinking I could configure something within IIS that I could evaluate when the page is requested.
The site is simple and will be served from a single source.
Is there a way to do this, before I start wasting time throwing things at the wall to see what sticks? I am not sure what to even search for.
You could generate a random number where the options are limited to two possible outcomes, then use that:
Random random = new Random();
var randomNumber = random.Next(0, 2);
The above will give you either a 0 or a 1 as output which can be used to determine one of your two paths.
I ran this 10,000 times in a little program and it came out pretty even (4948/5052).
As long as your boss isn't worried about EXACTLY even numbers I think this should be OK.
Edit: In my test program I created a new Random in each iteration of the loop, because I felt this better simulated the use-case of an ASP.NET page.
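For illustration, this is roughly how the 0/1 result could pick the video on the page (the file paths are invented for the example):

Random random = new Random();
string videoUrl = random.Next(0, 2) == 0
    ? "~/videos/landing-a.mp4"   // first half of users
    : "~/videos/landing-b.mp4";  // second half of users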
EDIT: I realized that I had been going about it completely the wrong way and after an overhaul, got it working. Thanks for the tips guys, I'll keep them in mind for the future.
I've hit an unusual problem in my program. What I need to do is find the difference between two times, divide it by 1.5 hours, then return the starting time followed by each 1.5 hour increment of the starting time. So if the time was 11:45 am - 2:45 pm, the time difference is three hours, 3/1.5 = 2, then return 11:45 am and 1:15 pm. At the moment, I can do everything except return more than one time. Depending on what I've tried, it returns either the initial time (11:45 am), the first increment (1:15 pm) or the end time (2:45 pm). So far I've tried a few different types of for and do/while loops. The closest I've come was simply concatenating the start time and the incremented time, but the start and end times can range anywhere from 3 - 6 hours so that's not a practical way to do it.
The latest thing I tried:
int i = 0;
do
{
    i++;
    // start is the starting time, say 11:45 am
    start = start.AddMinutes(90);
    return start.ToShortTimeString();
} while (i < totalSessions); // totalSessions is the result of hours / 1.5
and I'm calling the function on a dynamic label (which is also in a for loop):
z[i] = new Label();
z[i].Location = new Point(PointX, PointZ);
z[i].Name = "sessionTime_" + i;
z[i].Text = getPlayTimes(dt.Rows[i][1].ToString());
tabPage1.Controls.Add(z[i]);
z[i].BringToFront();
PointZ += z[i].Height;
I'm pretty new to C#, so I guess I've just misunderstood something somewhere.
I think you're trying to solve the problem the wrong way. Instead of returning each value as you come to it, create a collection, i.e. a List, and push each of your results onto the list until your finishing condition is met.
Then return the whole list as your return value. This way you will have a nice self-contained function that doesn't have cross-concerns with other logic; it does only its one little task, but does it well.
Good luck!
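A minimal sketch of that approach; the signature is an assumption, since the original function isn't shown (start and totalSessions are the names from the question):

public List<string> GetPlayTimes(DateTime start, int totalSessions)
{
    var times = new List<string>();
    for (int i = 0; i < totalSessions; i++)
    {
        times.Add(start.ToShortTimeString()); // the starting time goes in first
        start = start.AddMinutes(90);         // then each 1.5 hour increment
    }
    return times;
}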
It's a bit difficult to determine your exact use case, as you haven't offered the complete code you are using. However, you can do what you are asking by using the yield return functionality:
public IEnumerable<string> GetPlayTimes()
{
    int i = 0;
    do
    {
        i++;
        // start is the starting time, say 11:45 am
        start = start.AddMinutes(90);
        yield return start.ToShortTimeString();
    } while (i < totalSessions); // totalSessions is the result of hours / 1.5
}
And then use it like so:
foreach (var time in GetPlayTimes())
{
    // Do something with time
}
I've made a C# application which connects to my webcam and reads the images at the speed the webcam delivers them. I'm parsing the stream so that I end up with a few JPEGs per second.
I don't want to write all the webcam data to disk; I want to store the images in memory. The application will also act as a webserver to which I can supply a datetime in the query string, and the webserver must serve the image closest to that time which it still has in memory.
In my code I have this:
Dictionary<DateTime, byte[]> cameraImages;
where the DateTime is the timestamp of the received image and the byte array is the JPEG.
All of that works, and handling the web request works too. Basically, I want to clean up that dictionary by keeping images according to their age.
Now I need an algorithm that cleans up the older images.
I can't really figure out an algorithm for it. One reason is that the datetimes don't fall on exact moments, and I can't be sure that an image always arrives (sometimes the image stream is aborted for several minutes). But what I want to do is:
Keep all images for the first minute.
Keep 2 images per second for the first half hour.
Keep only one image per second if it's older than 30 minutes.
Keep only one image per 30 seconds if it's older than 2 hours.
Keep only one image per minute if it's older than 12 hours.
Keep only one image per hour if it's older than 24 hours.
Keep only one image per day if it's older than two days.
Remove all images older than 1 week.
The above intervals are just an example.
Any suggestions?
I think @Kevin Holditch's approach is perfectly reasonable and has the advantage that it would be easy to get the code right.
If there were a large number of images, or you otherwise wanted to think about how to do this "efficiently", I would propose a thought process like the following:
Create 7 queues, representing your seven categories. We take care to keep the images in this queue in sorted time order. The queue data structure is able to efficiently insert at its front and remove from its back. .NET's Queue would be perfect for this.
Each Queue (call it Qi) has an "incoming" set and an "outgoing" set. The incoming set for Queue 0 is the images from the camera; for any other queue it is equal to the outgoing set of Queue i-1.
Each queue has rules on both its input and output side which determine whether the queue will admit new items from its incoming set and whether it should eject items from its back into its outgoing set. As a specific example, if Q3 is the queue "Keep only one image per 30 seconds if it's older than 2 hours", then Q3 iterates over its incoming set (which is the outgoing set of Q2) and only admits item i where i's timestamp is 30 seconds or more away from Q3.first() (for this to work correctly, the items need to be processed from highest to lowest timestamp). On the output side, we eject from Q3's tail any object older than 12 hours, and this becomes the input set for Q4.
Again, @Kevin Holditch's approach has the virtue of simplicity and is probably what you should do. I just thought you might find the above to be food for thought.
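For the record, a rough sketch of what one stage of that cascade might look like. It simplifies the scheme above by assuming timestamps are fed in oldest-first (rather than highest-to-lowest as described), and all names and wiring are illustrative:

using System;
using System.Collections.Generic;

// One stage of the cascade: admits a timestamp only if it is far enough
// from the newest one already held, and ejects items past its age cutoff
// so they can be offered to the next, coarser stage.
class RetentionStage
{
    readonly Queue<DateTime> items = new Queue<DateTime>();
    readonly TimeSpan minSpacing;  // e.g. 30 seconds for the "one per 30 s" stage
    readonly TimeSpan maxAge;      // e.g. 12 hours, after which items move on
    DateTime newest = DateTime.MinValue;

    public RetentionStage(TimeSpan minSpacing, TimeSpan maxAge)
    {
        this.minSpacing = minSpacing;
        this.maxAge = maxAge;
    }

    public List<DateTime> Admit(DateTime timestamp, DateTime now)
    {
        if (timestamp - newest >= minSpacing)
        {
            items.Enqueue(timestamp);
            newest = timestamp;
        }
        // eject the oldest items into the next stage's incoming set
        var ejected = new List<DateTime>();
        while (items.Count > 0 && now - items.Peek() > maxAge)
            ejected.Add(items.Dequeue());
        return ejected;
    }
}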
You could do this quite easily (although it may not be the most efficient way) by using LINQ.
E.g.
var firstMinImages = cameraImages.Where(
c => c.Key >= DateTime.Now.AddMinutes(-1));
Then do an equivalent query for every time interval. Combine them into one store of images and overwrite your existing store (presuming you don't want to keep the rest). This will work with your current criteria, as the number of images needed gets progressively smaller over time.
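A sketch of what the combining step might look like, reusing the cameraImages dictionary from the question (the half-second bucketing for the "2 images per second" rule and the remaining intervals are illustrative, and System.Linq is assumed):

var now = DateTime.Now;

// everything from the last minute is kept as-is
var lastMinute = cameraImages.Where(c => c.Key >= now.AddMinutes(-1));

// at most one image per half-second bucket (i.e. 2 per second) for the 1-30 minute range
var upToHalfHour = cameraImages
    .Where(c => c.Key < now.AddMinutes(-1) && c.Key >= now.AddMinutes(-30))
    .GroupBy(c => c.Key.Ticks / (TimeSpan.TicksPerSecond / 2))
    .Select(g => g.First());

// ...equivalent queries for the older intervals, then overwrite the store
cameraImages = lastMinute.Concat(upToHalfHour)
    .ToDictionary(c => c.Key, c => c.Value);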
My strategy would be to group the elements into buckets that you plan to weed out, then pick one element from each bucket to keep. I have made an example of how to do this using a list of DateTimes and ints, but pics would work exactly the same way.
My class used to store each pic:
class Pic
{
    public DateTime when { get; set; }
    public int val { get; set; }
}
and a sample of a few items in the list...
List<Pic> intTime = new List<Pic>();
intTime.Add(new Pic() { when = DateTime.Now, val = 0 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-1), val = 1 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-1.01), val = 2 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-1.02), val = 3 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-2), val = 4 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-2.1), val = 5 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-2.2), val = 6 });
intTime.Add(new Pic() { when = DateTime.Now.AddDays(-3), val = 7 });
Now I create a helper function to bucket and remove...
private static void KeepOnlyOneFor(List<Pic> intTime, Func<Pic, int> Grouping, DateTime ApplyBefore)
{
    var groups = intTime.Where(a => a.when < ApplyBefore).OrderBy(a => a.when).GroupBy(Grouping);
    foreach (var r in groups)
    {
        // everything in the group except the newest item gets removed
        var s = r.Where(a => a != r.LastOrDefault());
        intTime.RemoveAll(a => s.Contains(a));
    }
}
What this does is let you specify how to group the objects and set an age threshold on the grouping. Now, finally, to use it...
This will remove all but 1 picture per Day for any pics greater than 2 days old:
KeepOnlyOneFor(intTime, a => a.when.Day, DateTime.Now.AddDays(-2));
This will remove all but 1 picture for each Hour after 1 day old:
KeepOnlyOneFor(intTime, a => a.when.Hour, DateTime.Now.AddDays(-1));
If you are on .NET 4 you could use a MemoryCache for each interval, with CacheItemPolicy objects to expire items when you want them to expire and UpdateCallbacks to move some to the next interval.
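A rough sketch of that idea, using System.Runtime.Caching; only the first interval's cache is shown, and the key format and method name are invented for the example:

using System;
using System.Runtime.Caching;

MemoryCache firstMinute = new MemoryCache("firstMinute");

void AddImage(DateTime timestamp, byte[] jpeg)
{
    var policy = new CacheItemPolicy
    {
        AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(1),
        // fires just before eviction; a thinned-out selection could be
        // re-inserted into the next interval's cache here
        UpdateCallback = args => { /* promote to the next cache */ }
    };
    firstMinute.Add(timestamp.ToString("o"), jpeg, policy);
}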
I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream, and open the stream using the following code:
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream stream = response.GetResponseStream ();
StreamReader reader = new StreamReader(stream);
The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, parsing and extracting take about 6 minutes, which my boss isn't happy with; he told me he wants the whole process done in TWO minutes.
I've identified that the part of the program that takes most of the time to finish is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now after some optimization:
// skip lines at the top
for (int i = 0; i < 1500; ++i)
    reader.ReadLine();

// read the line that contains the price
string theLine = reader.ReadLine();

// ... extract the price from the line
Now it takes about 4 minutes to process all the files, but there is still a significant gap to what my boss is expecting. So I am wondering: is there any other way I can further speed up the parsing and extracting and have everything done within 2 minutes?
I was doing HTML screen scraping for a while with stock quotes, but I found that Yahoo offers a great, simple web service that is much better than loading websites.
http://www.gummy-stuff.org/Yahoo-data.htm
With this service you can request up to 100 stock quotes in a single request and it returns a csv formatted response with one line for every symbol. You can set what columns you want returned in the query string of the request. I built a small program that would query the service once a day for every stock in the stock market to get prices. It seemed to work well for me and was way faster than hitting websites for the data.
An example querystring would be
http://finance.yahoo.com/d/quotes.csv?s=GE&f=nkqwxyr1l9t5p4
Which returns text of
"GENERAL ELEC CO",32.98,"Jun 26","21.30 - 32.98","NYSE",2.66,"Jul 25",28.55,"Jul 3","-0.21%"
for (int i = 0; i < 1500; ++i)
    reader.ReadLine();
This in particular is not good. ReadLine reads a whole line and stores it somewhere, but no one uses it: extra work for the GC. Read byte-by-byte and watch for 0x0D 0x0A (carriage return, line feed).
Then don't use StreamReader at all! It is fat overhead; read from the stream directly.
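A sketch of that byte-level skip, reusing the stream variable from the question (this assumes lines terminated by 0x0A, which holds for ASCII/UTF-8 pages):

int linesSkipped = 0;
int b;
while (linesSkipped < 1500 && (b = stream.ReadByte()) != -1)
{
    if (b == 0x0A)      // '\n' ends a line; no string is ever allocated
        linesSkipped++;
}
// the stream is now positioned at line 1500; read the price line from here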
Hard to see how this is possible; StreamReader is blindingly fast compared to HttpWebRequest. Some basic assumptions: say you are downloading 900 files with 5000 lines of 100 chars each in 6 minutes. That means you need to download 900 x 5000 x 100 = 450 megabytes. In 6 minutes, that requires a bandwidth of 450E6 / 6 / 60 * 8 = 10 Mbps.
What do you have? 10 Mbps is about typical for high-speed Internet service, although you need a server that can sustain this. To get it down to 2 minutes, you'll need to upgrade your service to 30 Mbps. Your boss can fix that.
About the speed improvement you saw: watch out for the cache.
If you really need to have real-time data fast then you should subscribe to the data feeds rather than scrape them off a site.
Alternatively, isn't there some token that you can search for to find the field/data pair(s) you need?
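For example, something along these lines, reusing the reader from the question (the "lastPrice" marker is made up; the real token depends on the page's HTML):

string html = reader.ReadToEnd();
int idx = html.IndexOf("lastPrice", StringComparison.Ordinal);
if (idx >= 0)
{
    // extract the price from the characters following the token
}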
4 minutes sounds ridiculously long for reading in 900 files.