Using parallelism to speed up the code - C#

I've made a tool that sends HTTP GET requests (it reads what to send from a .txt file), captures the JSON response, parses it, and writes the result to a .txt file. The tool is a console app targeting .NET Framework 4.5.
Could I possibly speed this up with the help of "multi-threading"?
string line;
using (var file = new System.IO.StreamReader(@"file.txt"))
{
    while ((line = file.ReadLine()) != null)
    {
        using (var client = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate }))
        {
            client.BaseAddress = new Uri("https://www.website.com/");
            HttpResponseMessage response = client.GetAsync("index?userGetLevel=" + line).Result;
            response.EnsureSuccessStatusCode();
            string result = response.Content.ReadAsStringAsync().Result;
            dynamic json = JObject.Parse(result);
            string level = json.data.level;
            if (level == "null")
            {
                Console.WriteLine("{0}: User does not exist.", line);
            }
            else
            {
                Console.WriteLine("{0}: User level is {1}.", line, level);
                using (StreamWriter sw = File.AppendText("levels.txt"))
                {
                    sw.WriteLine("{0} : {1}", line, level);
                }
            }
        }
    }
}
Answers to questions:
"Speed up what?": I'd like to speed up the whole process (the amount of requests it sends each time, not how fast it sends them.
"How many requests?": The tool reads a string from a text file and puts that information in to a part of the URL, and then captures the response, then places that in a result.txt. I'd like to increase the speed it does this at/how many it does at a time.
"Can the requests happen concurrently or are dependant on prior request responses?": Yes, and no, they are not dependent.
" Does your web server impose a limit on the number of concurrent requests?": No.
"How long does a typical request take?": The request + the time the response shows up on console per each requests is a little bit more than 1/3 of a second.

There are several approaches to asynchronous programming; one of them could be something like this:
static ConcurrentQueue<string> messages = new ConcurrentQueue<string>();
//or
//static object lockObj = new object();

public static void Main()
{
    var fileText = File.ReadAllLines(@"file.txt");
    var taskList = new List<Task>();
    foreach (var line in fileText)
    {
        taskList.Add(Task.Factory.StartNew(HandlerMethod, line));
        //you can control the number of produced tasks if you want:
        //if (taskList.Count > 20)
        //{
        //    Task.WaitAll(taskList.ToArray());
        //    taskList.Clear();
        //}
    }
    Task.WaitAll(taskList.ToArray()); //this line may not work as I expected.

    //for the first way
    var results = new StringBuilder();
    foreach (var msg in messages)
    {
        results.AppendLine(msg);
    }
    File.WriteAllText("path", results.ToString());
}
For writing the results, you can either use a shared concurrent collection or a lock pattern:
public static void HandlerMethod(object obj)
{
    var line = (string)obj;
    var result = string.Empty;
    using (var client = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate }))
    {
        client.BaseAddress = new Uri("https://www.website.com/");
        HttpResponseMessage response = client.GetAsync("index?userGetLevel=" + line).Result;
        response.EnsureSuccessStatusCode();
        string content = response.Content.ReadAsStringAsync().Result;
        dynamic json = JObject.Parse(content);
        result = json.data.level;
    }
    //for the first way
    if (string.IsNullOrWhiteSpace(result))
    {
        messages.Enqueue(string.Format("{0}: User does not exist.", line));
    }
    else
    {
        messages.Enqueue(string.Format("{0}: User level is {1}.", line, result));
    }
    //for the second way
    //lock (lockObj)
    //{
    //    using (StreamWriter sw = File.AppendText("levels.txt"))
    //    {
    //        sw.WriteLine("{0} : {1}", line, result);
    //    }
    //}
}
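For completeness: on newer framework versions the same fan-out is usually written with async/await, using a SemaphoreSlim to cap how many requests are in flight. A minimal sketch, assuming the same file.txt input, levels.txt output, and URL as above (the cap of 20 is an arbitrary assumption):

// Minimal async sketch; requires System.Linq, System.Net, System.Net.Http,
// System.Collections.Concurrent, System.Threading.Tasks and Newtonsoft.Json.Linq.
static async Task ProcessAllAsync()
{
    var handler = new HttpClientHandler { AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate };
    using (var client = new HttpClient(handler) { BaseAddress = new Uri("https://www.website.com/") })
    using (var throttle = new SemaphoreSlim(20)) // at most 20 requests in flight
    {
        var messages = new ConcurrentQueue<string>();
        var tasks = File.ReadAllLines(@"file.txt").Select(async line =>
        {
            await throttle.WaitAsync();
            try
            {
                // GetStringAsync throws for non-success status codes,
                // mirroring EnsureSuccessStatusCode above
                string content = await client.GetStringAsync("index?userGetLevel=" + line);
                dynamic json = JObject.Parse(content);
                string level = json.data.level;
                messages.Enqueue(string.Format("{0} : {1}", line, level));
            }
            finally
            {
                throttle.Release();
            }
        });
        await Task.WhenAll(tasks);
        File.WriteAllLines("levels.txt", messages);
    }
}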

Related

Recursive method is returning first iteration object C#

static void Main(string[] args)
{
    token objtoken = new token();
    var location = AddLocations();
    OutPutResults outPutResultsApi = new OutPutResults();
    GCPcall gCPcall = new GCPcall();
    OutPutResults finaloutPutResultsApi = new OutPutResults();
    var addressdt = new AddressDataDetails();
    finaloutPutResultsApi.addressDatas = new List<AddressDataDetails>();
    Console.WriteLine("Hello World!");
    List<string> placeId = new List<string>();
    var baseUrl = "https://maps.googleapis.com/maps/api/place/textsearch/json?";
    var apiKey = "&key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx".Trim();
    foreach (var itemlocations in location)
    {
        var searchtext = "query=" + itemlocations.Trim();
        var finalUrl = baseUrl + searchtext + apiKey;
        gCPcall.RecursiveApiCall(finalUrl, ref placeId, objtoken.NextToken);
    }
    var ids = gCPcall.myPalceid;
}
public List<string> RecursiveApiCall(string finalUrl, ref List<string> placeId, string nextToken = null)
{
    try
    {
        var token = "&pagetoken=" + nextToken;
        using (var client = new HttpClient())
        {
            var responseTask = client.GetAsync(finalUrl + token);
            responseTask.Wait();
            var result = responseTask.Result;
            if (result.IsSuccessStatusCode)
            {
                var readTask = result.Content.ReadAsStringAsync();
                readTask.Wait();
                var students = readTask.Result;
                Rootobject studentsmodel = JsonConvert.DeserializeObject<Rootobject>(students);
                nextToken = studentsmodel.next_page_token;
                foreach (var item in studentsmodel.results)
                {
                    placeId.Add(item.place_id);
                }
            }
        }
        if (nextToken != null)
        {
            RecursiveApiCall(finalUrl, ref placeId, nextToken);
        }
        return placeId;
    }
    catch (Exception ex)
    {
        throw;
    }
}
My recursive method has an issue. Whenever I debug this code it works fine: it goes into the recursive call twice, and as a debugging result I get a place_id list with 20 items from the first call and 9 from the next call, 29 items in total, which is correct in the static main method.
But if I run without debugging, I get only 20 place_ids. The next recursive iteration's data is missing even though there is a valid next token.
I don't have any clue why this is happening. Can someone tell me what is the issue with my code?
Here are my suggestions, which may or may not solve the problem:
// First of all, let's fix the signature: go async _all the way_
//public List<string> RecursiveApiCall(string finalUrl, ref List<string> placeId, string nextToken = null)
// also reuse the HttpClient!
public async Task ApiCallAsync(HttpClient client, string finalUrl, List<string> placeId, string nextToken = null)
{
    // Loop, don't recurse
    while (!(nextToken is null)) // C# 9: while (nextToken is not null)
    {
        try
        {
            var token = "&pagetoken=" + nextToken;
            // again: async all the way
            var result = await client.GetAsync(finalUrl + token);
            if (result.IsSuccessStatusCode)
            {
                // async all the way!
                var students = await result.Content.ReadAsStringAsync();
                Rootobject studentsmodel = JsonConvert.DeserializeObject<Rootobject>(students);
                nextToken = studentsmodel.next_page_token;
                foreach (var item in studentsmodel.results)
                {
                    // Will be reflected in main, so no need to return or use the `ref` keyword
                    placeId.Add(item.place_id);
                }
            }
            // NO recursion needed!
            // if (nextToken != null)
            // {
            //     RecursiveApiCall(finalUrl, ref placeId, nextToken);
            // }
        }
        catch (Exception ex)
        {
            // a rethrow-only catch is somewhat useless;
            // I'd suggest using a logging framework and
            // log.Error(ex, "Some useful message");
            throw;
            // OR remove try/catch here altogether and wrap the call to this method
            // in try/catch with logging.
        }
    }
}
Mind that you'll need to make your main:
async Task Main(string[] args)
and call this as
await ApiCallAsync(client, finalUrl, placeId, nextToken);
Also create an HttpClient in main and reuse that:
"HttpClient is intended to be instantiated once and re-used throughout the life of an application. Instantiating an HttpClient class for every request will exhaust the number of sockets available under heavy loads. This will result in SocketException errors." - Remarks
... which shouldn't do much harm here, but it's a "best practice" anyhow.
Now, as to why you get only 20 instead of the expected 29 items, I cannot say whether this resolves that issue. I'd highly recommend introducing a logging framework and making log entries accordingly, so you may find the culprit.
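Put together, the call site could look something like this (a minimal sketch; finalUrl and objtoken.NextToken are assumed to be built as in the question's setup code):

// Minimal sketch of the call site, reusing one HttpClient for the whole run;
// finalUrl and objtoken.NextToken come from the question's Main.
static async Task Main(string[] args)
{
    var placeId = new List<string>();
    using (var client = new HttpClient())
    {
        await ApiCallAsync(client, finalUrl, placeId, objtoken.NextToken);
    }
    Console.WriteLine(placeId.Count); // all pages collected here
}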

Handling Parallel POST to API

I have an API Post method that takes a string which represents a Base64 string of bytes from a Word document that the API converts to PDF. My test client sends multiple documents, each on its own task, to the API to be converted. The problem is with concurrency and writing the files: I end up with a file-in-use error since the calls are parallel. I have tried a lot of different ways to block the conversion process until a document is converted, but none of them has worked. Everything works fine if it's just a single file being converted, but as soon as it's 2 or more, the problem happens. Can anyone guide me in the correct direction to solve this issue?
API:
[HttpPost]
public async Task<SimpleResponse> Post([FromBody]string request)
{
    var response = new SimpleResponse();
    Task t = Task.Factory.StartNew(async () =>
    {
        try
        {
            Converter convert = new Converter();
            var result = await convert.CovertDocToPDF(request, WebConfigurationManager.AppSettings["tempDocPath"], WebConfigurationManager.AppSettings["tempPdfPath"]);
            response.Result = result;
            response.Success = true;
        }
        catch (Exception ex)
        {
            response.Exception = ex;
            response.Success = false;
            response.Errors = new List<string>();
            response.Errors.Add(string.Format("{0}, {1}", ex.Message, ex.InnerException?.Message ?? ""));
        }
    });
    t.Wait();
    return response;
}
Conversion code
public Task<string> CovertDocToPDF(string blob, string tempDocPath, string tempPdfPath)
{
    try
    {
        // Convert blob back to bytes
        byte[] bte = Convert.FromBase64String(blob);
        // Process and return blob
        return Process(bte, tempDocPath, tempPdfPath);
    }
    catch (Exception Ex)
    {
        throw Ex;
    }
}
private async Task<string> Process(byte[] bytes, string tempDocPath, string tempPdfPath)
{
    try
    {
        string rs = RandomString(16, true);
        tempDocPath = tempDocPath + rs + ".docx";
        tempPdfPath = tempPdfPath + rs + ".pdf";
        // This is where the problem happens with concurrent calls. I added
        // the try catch when the file is in use to generate a new
        // filename but the error still happens.
        try
        {
            // Create a temp file
            File.WriteAllBytes(tempDocPath, bytes);
        }
        catch (Exception Ex)
        {
            rs = RandomString(16, true);
            tempDocPath = tempDocPath + rs + ".docx";
            tempPdfPath = tempPdfPath + rs + ".pdf";
            File.WriteAllBytes(tempDocPath, bytes);
        }
        word.Application app = new word.Application();
        word.Document doc = app.Documents.Open(tempDocPath);
        doc.SaveAs2(tempPdfPath, word.WdSaveFormat.wdFormatPDF);
        doc.Close();
        app.Quit(); // Clean up the Word instance.
        // Need the bytes to return the blob
        byte[] pdfFileBytes = File.ReadAllBytes(tempPdfPath);
        // Delete temp files
        File.Delete(tempDocPath);
        File.Delete(tempPdfPath);
        // return blob
        return Convert.ToBase64String(pdfFileBytes);
    }
    catch (Exception Ex)
    {
        throw Ex;
    }
}
Client:
public async void btnConvert_Click(object sender, EventArgs e)
{
    var response = await StartConvert();
    foreach (SimpleResponse sr in response)
    {
        if (sr.Success)
        {
            byte[] bte = Convert.FromBase64String(sr.Result.ToString());
            string rs = RandomString(16, true);
            string pdfFileName = tempPdfPath + rs + ".pdf";
            if (File.Exists(pdfFileName))
            {
                File.Delete(pdfFileName);
            }
            System.IO.File.WriteAllBytes(pdfFileName, bte);
        }
        else
        {
        }
    }
}
private async Task<IEnumerable<SimpleResponse>> StartConvert()
{
    var tasks = new List<Task<SimpleResponse>>();
    foreach (string s in docPaths)
    {
        byte[] bte = File.ReadAllBytes(s);
        tasks.Add(ConvertDocuments(Convert.ToBase64String(bte)));
    }
    return (await Task.WhenAll(tasks));
}
private async Task<SimpleResponse> ConvertDocuments(string requests)
{
    using (var client = new HttpClient(new HttpClientHandler() { UseDefaultCredentials = true }))
    {
        client.BaseAddress = new Uri(BaseApiUrl);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        // Add an Accept header for JSON format.
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, BaseApiUrl + ApiUrl);
        var data = JsonConvert.SerializeObject(requests);
        request.Content = new StringContent(data, Encoding.UTF8, "application/json");
        HttpResponseMessage response1 = await client.PostAsync(BaseApiUrl + ApiUrl, request.Content).ConfigureAwait(false);
        var response = JsonConvert.DeserializeObject<SimpleResponse>(await response1.Content.ReadAsStringAsync());
        return response;
    }
}
Random String Generator
public string RandomString(int size, bool lowerCase = false)
{
    var builder = new StringBuilder(size);
    // Unicode/ASCII letters are divided into two blocks
    // (letters 65–90 / 97–122):
    // the first group containing the uppercase letters and
    // the second group containing the lowercase.
    // char is a single Unicode character
    char offset = lowerCase ? 'a' : 'A';
    const int lettersOffset = 26; // A...Z or a..z: length = 26
    for (var i = 0; i < size; i++)
    {
        var @char = (char)_random.Next(offset, offset + lettersOffset);
        builder.Append(@char);
    }
    return lowerCase ? builder.ToString().ToLower() : builder.ToString();
}
First, get rid of Task.Factory.StartNew ... t.Wait() - you don't need an additional task; the root-level method is async, and your blocking Wait just spoils the benefits of async by blocking synchronously.
Second, like a comment suggested above, the file-name random string generator is most likely not really random. Either do not supply anything to the seed value of your pseudo-random generator, or use something like Environment.TickCount, which should be sufficient here. Guid.NewGuid() will work too.
Another good option for temp files is Path.GetTempFileName (also generates an empty file for you): https://learn.microsoft.com/en-us/dotnet/api/system.io.path.gettempfilename?view=netstandard-2.0
[HttpPost]
public async Task<SimpleResponse> Post([FromBody]string request)
{
    var response = new SimpleResponse();
    try
    {
        ...
        var result = await convert.CovertDocToPDF(...);
        ...
    }
    catch (Exception ex)
    {
        ...
    }
    return response;
}
Based on your code it seems that you have a "faulty" random string generator for the file name (I would say _random.Next is a suspect; possibly some locking and/or an "app wide" instance could fix the issue). You can use Guid.NewGuid to create the random part of the file name (which in theory can also have collisions, but in most practical cases should be fine) or Path.GetTempFileName:
var rs = Guid.NewGuid().ToString("N");
tempDocPath = tempDocPath + rs + ".docx";
tempPdfPath = tempPdfPath + rs + ".pdf";
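If you would rather lean on the framework for the temp location as well, a small sketch (note that Path.GetTempFileName itself returns a .tmp name, so here only the system temp directory is reused while the unique part still comes from a GUID):

// Sketch: unique sibling .docx/.pdf paths under the system temp folder;
// the "N" format just strips the dashes from the GUID.
var unique = Guid.NewGuid().ToString("N");
var tempDocPath = Path.Combine(Path.GetTempPath(), unique + ".docx");
var tempPdfPath = Path.Combine(Path.GetTempPath(), unique + ".pdf");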

DateTime difference inside Thread

I have a for-loop that creates a new Thread each iteration. In short, my loop creates 20 threads that do some action, all at the same time.
My goal inside each of these threads is to create a DateTime variable with a start time, execute an operation, and create a DateTime variable with an end time. Afterwards I take the difference between these two variables to find out how long the operation took in that SPECIFIC thread, then log it.
However, that isn't working as expected, and I'm confused about why.
It seems like the time just keeps growing with each new thread, instead of each thread getting a completely fresh measurement of its own that only reflects what happened in that specific thread.
This is my for-loop code:
for (int i = 0; i < 20; i++)
{
    Thread thread = new Thread(() =>
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        RESTRequest(Method.POST, ....);
        sw.Stop();
        Console.WriteLine("Result took (" + sw.Elapsed.Seconds + " seconds, " + sw.Elapsed.Milliseconds + " milliseconds)");
    });
    thread.IsBackground = true;
    thread.Start();
}
Long operation function:
public static string RESTRequest(Method method, string endpoint, string resource, string body, SimplytureRESTRequestHeader[] requestHeaders = null, SimplytureRESTResponseHeader[] responseHeaders = null, SimplytureRESTAuthentication authentication = null, SimplytureRESTParameter[] parameters = null)
{
    var client = new RestClient(endpoint);
    if (authentication != null)
    {
        client.Authenticator = new HttpBasicAuthenticator(authentication.username, authentication.password);
    }
    var request = new RestRequest(resource, method);
    if (requestHeaders != null)
    {
        foreach (var header in requestHeaders)
        {
            request.AddHeader(header.headerType, header.headerValue);
        }
    }
    if (body != null)
    {
        request.AddParameter("text/json", body, ParameterType.RequestBody);
    }
    if (parameters != null)
    {
        foreach (var parameter in parameters)
        {
            request.AddParameter(parameter.key, parameter.value);
        }
    }
    IRestResponse response = client.Execute(request);
    if (responseHeaders != null)
    {
        foreach (var header in responseHeaders)
        {
            var par = new Parameter();
            par.Name = header.headerType;
            par.Value = header.headerValue;
            response.Headers.Add(par);
        }
    }
    var content = response.Content;
    return content;
}
These are my results:
EDIT:
I also tried using the Stopwatch class, but it didn't make any difference (it is definitely more handy, though). I also added the long-operation function above for debugging.
There is a limitation for concurrent calls to the same ServicePoint.
The default is 2 concurrent connections for each unique ServicePoint.
Add System.Net.ServicePointManager.DefaultConnectionLimit = 20; to raise that limit to match the thread count.
System.Net.ServicePointManager.DefaultConnectionLimit
You can also set this value in the config file:
<system.net>
<connectionManagement>
<add address="*" maxconnection="20" />
</connectionManagement>
</system.net>
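In code, this is a one-time setting at process start, before the first request goes out; a sketch matching the 20 threads above:

// Sketch: raise the per-ServicePoint connection limit once at startup,
// before any of the 20 threads issues its first request.
System.Net.ServicePointManager.DefaultConnectionLimit = 20;
for (int i = 0; i < 20; i++)
{
    // start the measuring threads exactly as in the question's loop
}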

First time creating a method that uses parallel linq and getting out of memory exception

I wrote a method to download data from the internet and save it to my database. I wrote it using PLINQ to take advantage of my multi-core processor and because it downloads thousands of different files in a very short period of time. I have added comments below in my code to show where it stops, but the program just sits there, and after a while I get an out-of-memory exception. This being my first time using TPL and PLINQ, I'm extremely confused, so I could really use some advice on what to do to fix this.
UPDATE: I found out that I was constantly getting a WebException because the WebClient was timing out. I fixed this by increasing the maximum number of connections according to this answer here. I was then getting exceptions for the connection not opening, and I fixed that by using this answer here. I'm now getting connection timeout errors for the database even though it is a local SQL Server. I still haven't been able to get any of my code to run, so I could totally use some advice.
static void Main(string[] args)
{
    try
    {
        while (true)
        {
            // start the download process for market info
            startDownload();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        Console.WriteLine(ex.StackTrace);
    }
}
public static void startDownload()
{
    DateTime currentDay = DateTime.Now;
    List<Task> taskList = new List<Task>();
    if (Helper.holidays.Contains(currentDay) == false)
    {
        List<string> markets = new List<string>() { "amex", "nasdaq", "nyse", "global" };
        Parallel.ForEach(markets, market =>
        {
            Downloads.startInitialMarketSymbolsDownload(market);
        });
        Console.WriteLine("All downloads finished!");
    }
    // wait 24 hours before you do this again
    Task.Delay(TimeSpan.FromHours(24)).Wait();
}
public static void startInitialMarketSymbolsDownload(string market)
{
    try
    {
        List<string> symbolList = Helper.getStockSymbols(market);
        var historicalGroups = symbolList.AsParallel().Select((x, i) => new { x, i })
            .GroupBy(x => x.i / 100)
            .Select(g => g.Select(x => x.x).ToArray());
        historicalGroups.AsParallel().ForAll(g => getHistoricalStockData(g, market));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        Console.WriteLine(ex.StackTrace);
    }
}
public static void getHistoricalStockData(string[] symbols, string market)
{
    // download data for list of symbols and then upload to db tables
    Uri uri;
    string url, line;
    decimal open = 0, high = 0, low = 0, close = 0, adjClose = 0;
    DateTime date;
    Int64 volume = 0;
    string[] lineArray;
    List<string> symbolError = new List<string>();
    Dictionary<string, string> badNameError = new Dictionary<string, string>();
    Parallel.ForEach(symbols, symbol =>
    {
        url = "http://ichart.finance.yahoo.com/table.csv?s=" + symbol + "&a=00&b=1&c=1900&d=" + (DateTime.Now.Month - 1) + "&e=" + DateTime.Now.Day + "&f=" + DateTime.Now.Year + "&g=d&ignore=.csv";
        uri = new Uri(url);
        using (dbEntities entity = new dbEntities())
        using (WebClient client = new WebClient())
        using (Stream stream = client.OpenRead(uri))
        using (StreamReader reader = new StreamReader(stream))
        {
            while (reader.EndOfStream == false)
            {
                line = reader.ReadLine();
                lineArray = line.Split(',');
                // if it isn't the very first line
                if (lineArray[0] != "Date")
                {
                    // set the data for each array here
                    date = Helper.parseDateTime(lineArray[0]);
                    open = Helper.parseDecimal(lineArray[1]);
                    high = Helper.parseDecimal(lineArray[2]);
                    low = Helper.parseDecimal(lineArray[3]);
                    close = Helper.parseDecimal(lineArray[4]);
                    volume = Helper.parseInt(lineArray[5]);
                    adjClose = Helper.parseDecimal(lineArray[6]);
                    switch (market)
                    {
                        case "nasdaq":
                            DailyNasdaqData nasdaqData = new DailyNasdaqData();
                            var nasdaqQuery = from r in entity.DailyNasdaqDatas.AsParallel().AsEnumerable()
                                              where r.Date == date
                                              select new StockData { Close = r.AdjustedClose };
                            List<StockData> nasdaqResult = nasdaqQuery.AsParallel().ToList(); // hits this line
                            break;
                        default:
                            break;
                    }
                }
            }
            // now save everything
            entity.SaveChanges();
        }
    });
}
Async lambdas work like async methods in one regard: They do not complete synchronously but they return a Task. In your parallel loop you are simply generating tasks as fast as you can. Those tasks hold onto memory and other resources such as DB connections.
The simplest fix is probably to just use synchronous database commits. This will not result in a loss of throughput because the database cannot deal with high amounts of concurrent DML anyway.
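The pitfall described in the first paragraph can be reproduced in isolation. A contrived sketch (not the poster's code): Parallel.ForEach binds an async lambda as Action<T>, i.e. async void, so it cannot wait for the awaits inside and returns while the work is still pending:

// Contrived sketch of the async-lambda pitfall; requires System.Linq
// and System.Threading.Tasks. Parallel.ForEach sees the async lambda
// as Action<int> (async void) and cannot wait for it.
Parallel.ForEach(Enumerable.Range(0, 1000), async i =>
{
    await Task.Delay(1000);   // still pending when ForEach returns
    Console.WriteLine(i);     // may never print before the process exits
});
Console.WriteLine("ForEach returned, tasks still in flight");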

Optimizing getting data from REST API calls (more than 100)

I am trying to make lots of calls from my server to a REST API exposed by another server, but the code is taking too long to run.
Below is the code, which takes a lot of time. Right now I am using C# async/await along with HttpClient; is there a better way I can do this?
static async Task RunAsync()
{
    HttpClient client = new HttpClient();
    client.BaseAddress = new Uri(URL);
    foreach (var workbk in (workbooksforuserresultingMessage.Items[1] as workbookListType).workbook)
    {
        if (workbk.project.name == "Ascend")
        {
            tsResponse viewresultingMessage = null;
            //Get View Data
            HttpRequestMessage viewrequestMessage = new HttpRequestMessage(HttpMethod.Get, "sites/" + ((siteresultingMessage.Items[0] as siteType).id.ToString()) + "/workbooks/" + workbk.id + "/views");
            // Add our custom headers
            viewrequestMessage.Headers.Add("X-Tableau-Auth", (resultingMessage.Items[0] as credentialsType).token.ToString());
            HttpResponseMessage viewrequestMessageresponse = await client.SendAsync(viewrequestMessage);
            if (viewrequestMessageresponse.IsSuccessStatusCode)
            {
                var viewresponsecontent = await viewrequestMessageresponse.Content.ReadAsStringAsync();
                XmlSerializer siteserializer = new XmlSerializer(typeof(tsResponse));
                using (TextReader reader = new StringReader(viewresponsecontent))
                {
                    viewresultingMessage = (tsResponse)siteserializer.Deserialize(reader);
                }
            }
        }
    }
    foreach (var workbk in (workbooksforuserresultingMessage.Items[1] as workbookListType).workbook)
    {
        if (workbk.project.name == "Ascend")
        {
            foreach (var vu in workbk.views)
            {
                tsResponse viewImageresultingMessage = null;
                //Get View Data
                HttpRequestMessage viewImagerequestMessage = new HttpRequestMessage(HttpMethod.Get, "sites/" + ((siteresultingMessage.Items[0] as siteType).id.ToString()) + "/workbooks/" + workbk.id + "/views/" + vu.id + "/previewImage");
                // Add our custom headers
                viewImagerequestMessage.Headers.Add("X-Tableau-Auth", (resultingMessage.Items[0] as credentialsType).token.ToString());
                HttpResponseMessage viewImagewrequestMessageresponse = await client.SendAsync(viewImagerequestMessage);
                if (viewImagewrequestMessageresponse.IsSuccessStatusCode)
                {
                    var viewImageresponsecontent = await viewImagewrequestMessageresponse.Content.ReadAsByteArrayAsync();
                    XmlSerializer siteserializer = new XmlSerializer(typeof(tsResponse));
                    using (var ms = new MemoryStream(viewImageresponsecontent))
                    {
                        Image returnImage = Image.FromStream(ms);
                        //return returnImage;
                    }
                }
            }
        }
    }
}
Those 'await's inside your loop will serialize the requests. Consider instead using .ContinueWith on the task to do some work with the result, keep track of the tasks in an array, then call Task.WhenAll once you're done setting up all the parallel work.
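In outline, that suggestion looks something like this (a sketch; the requestMessages collection and the Process callback are hypothetical placeholders, not from the question):

// Sketch of the ContinueWith + Task.WhenAll pattern; requestMessages
// and Process are hypothetical placeholders.
var tasks = new List<Task>();
foreach (var msg in requestMessages)
{
    tasks.Add(client.SendAsync(msg)
        .ContinueWith(t => Process(t.Result))); // runs when the request completes
}
await Task.WhenAll(tasks);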
As @TheESJ said, the awaits make your code execute in a serial fashion. That is, once execution reaches the first await, execution is effectively "paused" and will not resume until that Task is complete. You can avoid this by not awaiting the Task; just add it to a list of tasks to keep track of.
To facilitate this, I think it will help if you introduce a couple of helper methods to execute the body of your loops and return a Task.
public async Task<tsResponse> GetViewString(HttpRequestMessage viewrequestMessage)
{
    tsResponse viewresultingMessage = null;
    HttpResponseMessage viewrequestMessageresponse = await client.SendAsync(viewrequestMessage);
    if (viewrequestMessageresponse.IsSuccessStatusCode)
    {
        var viewresponsecontent = await viewrequestMessageresponse.Content.ReadAsStringAsync();
        XmlSerializer siteserializer = new XmlSerializer(typeof(tsResponse));
        using (TextReader reader = new StringReader(viewresponsecontent))
        {
            viewresultingMessage = (tsResponse)siteserializer.Deserialize(reader);
        }
    }
    return viewresultingMessage;
}
public async Task<Image> GetViewImage(HttpRequestMessage viewImagerequestMessage)
{
    //Get View Data
    HttpResponseMessage viewImagewrequestMessageresponse = await client.SendAsync(viewImagerequestMessage);
    if (viewImagewrequestMessageresponse.IsSuccessStatusCode)
    {
        var viewImageresponsecontent = await viewImagewrequestMessageresponse.Content.ReadAsByteArrayAsync();
        using (var ms = new MemoryStream(viewImageresponsecontent))
        {
            return Image.FromStream(ms);
        }
    }
    return null; // no image for unsuccessful responses
}
Then in your main loop you just add the tasks to a list as the methods are kicked off; this will essentially execute the tasks in parallel. You may need to increase the number of concurrent connections your server can make if that causes a problem. Your main method then becomes:
static async Task RunAsync()
{
    HttpClient client = new HttpClient();
    client.BaseAddress = new Uri(URL);
    var tasks = new List<Task>(); // holds both the string and the image tasks
    foreach (var workbk in (workbooksforuserresultingMessage.Items[1] as workbookListType).workbook)
    {
        if (workbk.project.name == "Ascend")
        {
            //Get View Data
            HttpRequestMessage viewrequestMessage = new HttpRequestMessage(HttpMethod.Get, "sites/" + ((siteresultingMessage.Items[0] as siteType).id.ToString()) + "/workbooks/" + workbk.id + "/views");
            // Add our custom headers
            viewrequestMessage.Headers.Add("X-Tableau-Auth", (resultingMessage.Items[0] as credentialsType).token.ToString());
            tasks.Add(GetViewString(viewrequestMessage));
        }
    }
    foreach (var workbk in (workbooksforuserresultingMessage.Items[1] as workbookListType).workbook)
    {
        if (workbk.project.name == "Ascend")
        {
            foreach (var vu in workbk.views)
            {
                //Get View Data
                HttpRequestMessage viewImagerequestMessage = new HttpRequestMessage(HttpMethod.Get, "sites/" + ((siteresultingMessage.Items[0] as siteType).id.ToString()) + "/workbooks/" + workbk.id + "/views/" + vu.id + "/previewImage");
                // Add our custom headers
                viewImagerequestMessage.Headers.Add("X-Tableau-Auth", (resultingMessage.Items[0] as credentialsType).token.ToString());
                tasks.Add(GetViewImage(viewImagerequestMessage));
            }
        }
    }
    // wait for all the tasks to complete (non-blocking)
    await Task.WhenAll(tasks);
}
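If firing everything at once turns out to be too aggressive for either server, a SemaphoreSlim wrapper is a common way to cap the number of in-flight calls. A sketch (the limit of 10 is an arbitrary assumption; GetViewString is the helper defined above):

// Sketch: cap concurrent calls with a SemaphoreSlim; tune the limit
// to what both servers tolerate.
private static readonly SemaphoreSlim _throttle = new SemaphoreSlim(10);

public async Task<tsResponse> GetViewStringThrottled(HttpRequestMessage msg)
{
    await _throttle.WaitAsync();
    try
    {
        return await GetViewString(msg); // helper defined above
    }
    finally
    {
        _throttle.Release();
    }
}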
