HttpClient resulting in leaking Node<Object> in mscorlib - c#

Consider the following program, with HttpRequestMessage, HttpResponseMessage, and HttpClient all disposed properly. It always ends up holding about 50 MB of memory at the end, after collection. Add a zero to the number of requests, and the unreclaimed memory doubles.
class Program
{
    static void Main(string[] args)
    {
        var client = new HttpClient
        {
            BaseAddress = new Uri("http://localhost:5000/")
        };
        var t = Task.Run(async () =>
        {
            var resps = new List<Task<HttpResponseMessage>>();
            var postProcessing = new List<Task>();
            for (int i = 0; i < 10000; i++)
            {
                Console.WriteLine("Firing..");
                var req = new HttpRequestMessage(HttpMethod.Get, "test/delay/5");
                var tsk = client.SendAsync(req);
                resps.Add(tsk);
                postProcessing.Add(tsk.ContinueWith(async ts =>
                {
                    req.Dispose();
                    var resp = ts.Result;
                    var content = await resp.Content.ReadAsStringAsync();
                    resp.Dispose();
                    Console.WriteLine(content);
                }));
            }
            await Task.WhenAll(resps);
            resps.Clear();
            Console.WriteLine("All requests done.");
            await Task.WhenAll(postProcessing);
            postProcessing.Clear();
            Console.WriteLine("All postprocessing done.");
        });
        t.Wait();
        Console.Clear();
        var t2 = Task.Run(async () =>
        {
            var resps = new List<Task<HttpResponseMessage>>();
            var postProcessing = new List<Task>();
            for (int i = 0; i < 10000; i++)
            {
                Console.WriteLine("Firing..");
                var req = new HttpRequestMessage(HttpMethod.Get, "test/delay/5");
                var tsk = client.SendAsync(req);
                resps.Add(tsk);
                postProcessing.Add(tsk.ContinueWith(async ts =>
                {
                    var resp = ts.Result;
                    var content = await resp.Content.ReadAsStringAsync();
                    Console.WriteLine(content);
                }));
            }
            await Task.WhenAll(resps);
            resps.Clear();
            Console.WriteLine("All requests done.");
            await Task.WhenAll(postProcessing);
            postProcessing.Clear();
            Console.WriteLine("All postprocessing done.");
        });
        t2.Wait();
        Console.Clear();
        client.Dispose();
        GC.Collect();
        Console.WriteLine("Done");
        Console.ReadLine();
    }
}
On a quick investigation with a memory profiler, it seems that the objects taking up the memory are all of type Node<Object> inside mscorlib.
My initial thought was that it was some internal dictionary or stack, since those are the types that use Node as an internal structure, but I was unable to turn up any results for a generic Node<T> in the reference source, since this is actually a Node<object> type.
Is this a bug, or some kind of expected optimization (I wouldn't consider permanently retaining memory proportional to the request count to be an optimization in any way)? And, purely academically, what is the Node<Object>?
Any help in understanding this would be much appreciated. Thanks :)
Update: To extrapolate the results for a much larger test set, I optimized it slightly by throttling it.
Here's the changed program. Now it stays consistent at 60-70 MB for a 1 million request set. I'm still baffled as to what those Node<object>s really are, and why it's allowed to maintain such a high number of irreclaimable objects.
The logical conclusion from the difference between these two results leads me to guess that this may not really be an issue with HttpClient or WebRequest, but rather something rooted directly in async, since the real variable in these two tests is the number of incomplete async tasks that exist at a given point in time. This is merely speculation from a quick inspection.
static void Main(string[] args)
{
    Console.WriteLine("Ready to start.");
    Console.ReadLine();
    var client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:5000/")
    };
    var t = Task.Run(async () =>
    {
        var resps = new List<Task<HttpResponseMessage>>();
        var postProcessing = new List<Task>();
        for (int i = 0; i < 1000000; i++)
        {
            //Console.WriteLine("Firing..");
            var req = new HttpRequestMessage(HttpMethod.Get, "test/delay/5");
            var tsk = client.SendAsync(req);
            resps.Add(tsk);
            var n = i;
            postProcessing.Add(tsk.ContinueWith(async ts =>
            {
                var resp = ts.Result;
                var content = await resp.Content.ReadAsStringAsync();
                if (n % 1000 == 0)
                {
                    Console.WriteLine("Requests processed: " + n);
                }
                //Console.WriteLine(content);
            }));
            if (n % 20000 == 0)
            {
                await Task.WhenAll(resps);
                resps.Clear();
            }
        }
        await Task.WhenAll(resps);
        resps.Clear();
        Console.WriteLine("All requests done.");
        await Task.WhenAll(postProcessing);
        postProcessing.Clear();
        Console.WriteLine("All postprocessing done.");
    });
    t.Wait();
    Console.Clear();
    client.Dispose();
    GC.Collect();
    Console.WriteLine("Done");
    Console.ReadLine();
}

Let's investigate the problem with all the tools we have at hand.
First, let's take a look at what those objects are. To do that, I put the given code into a simple console application in Visual Studio; side by side, I ran a simple HTTP server on Node.js to serve the requests.
After running the client to the end, I attached WinDbg to it, inspected the managed heap, and got these results:
0:037> !dumpheap
Address MT Size
02471000 00779700 10 Free
0247100c 72482744 84
...
Statistics:
MT Count TotalSize Class Name
...
72450e88 847 13552 System.Collections.Concurrent.ConcurrentStack`1+Node[[System.Object, mscorlib]]
...
The !dumpheap command dumps all objects in the managed heap. That could include objects that should be freed but haven't been yet, because the GC hasn't kicked in. In our case that should be rare, because we just called GC.Collect() before the printout and nothing else should run after it.
Worth noticing is the specific line above. That should be the Node object you are referring to in the question.
Next, let's look at the individual objects of that type. We grab the MT value of that type and invoke !dumpheap again like this; this filters out only the objects we are interested in:
0:037> !dumpheap -mt 72450e88
Address MT Size
025b9234 72450e88 16
025b93dc 72450e88 16
...
Now we grab a random one from the list and ask the debugger why this object is still on the heap, by invoking the !gcroot command as follows:
0:037> !gcroot 025bbc8c
Thread 6f24:
0650f13c 79752354 System.Net.TimerThread.ThreadProc()
edi: (interior)
-> 034734c8 System.Object[]
-> 024915ec System.PinnableBufferCache
-> 02491750 System.Collections.Concurrent.ConcurrentStack`1[[System.Object, mscorlib]]
-> 09c2145c System.Collections.Concurrent.ConcurrentStack`1+Node[[System.Object, mscorlib]]
-> 09c2144c System.Collections.Concurrent.ConcurrentStack`1+Node[[System.Object, mscorlib]]
-> 025bbc8c System.Collections.Concurrent.ConcurrentStack`1+Node[[System.Object, mscorlib]]
Found 1 unique roots (run '!GCRoot -all' to see all roots).
Now it is quite obvious that we have a cache, and that the cache maintains a stack, with the stack implemented as a linked list. To see in the reference source how that list is used, let's first inspect the cache object itself, using !DumpObj:
0:037> !DumpObj 024915ec
Name: System.PinnableBufferCache
MethodTable: 797c2b44
EEClass: 795e5bc4
Size: 52(0x34) bytes
File: C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System\v4.0_4.0.0.0__b77a5c561934e089\System.dll
Fields:
MT Field Offset Type VT Attr Value Name
724825fc 40004f6 4 System.String 0 instance 024914a0 m_CacheName
7248c170 40004f7 8 ...bject, mscorlib]] 0 instance 0249162c m_factory
71fe994c 40004f8 c ...bject, mscorlib]] 0 instance 02491750 m_FreeList
71fed558 40004f9 10 ...bject, mscorlib]] 0 instance 025b93b8 m_NotGen2
72484544 40004fa 14 System.Int32 1 instance 0 m_gen1CountAtLastRestock
72484544 40004fb 18 System.Int32 1 instance 605289781 m_msecNoUseBeyondFreeListSinceThisTime
7248fc58 40004fc 2c System.Boolean 1 instance 0 m_moreThanFreeListNeeded
72484544 40004fd 1c System.Int32 1 instance 244 m_buffersUnderManagement
72484544 40004fe 20 System.Int32 1 instance 128 m_restockSize
7248fc58 40004ff 2d System.Boolean 1 instance 1 m_trimmingExperimentInProgress
72484544 4000500 24 System.Int32 1 instance 0 m_minBufferCount
72484544 4000501 28 System.Int32 1 instance 0 m_numAllocCalls
Now we see something interesting: the stack is actually used as a free list for the cache. The source code tells us how the free list is used, in particular in the Free() method shown below:
http://referencesource.microsoft.com/#mscorlib/parent/parent/parent/parent/InternalApis/NDP_Common/inc/PinnableBufferCache.cs
/// <summary>
/// Return a buffer back to the buffer manager.
/// </summary>
[System.Security.SecuritySafeCritical]
internal void Free(object buffer)
{
    ...
    m_FreeList.Push(buffer);
}
So that's it: when the caller is done with a buffer, the buffer is returned to the cache, the cache puts it in the free list, and the free list is then used to serve later allocations:
[System.Security.SecuritySafeCritical]
internal object Allocate()
{
    // Fast path, get it from our Gen2 aged m_FreeList.
    object returnBuffer;
    if (!m_FreeList.TryPop(out returnBuffer))
        Restock(out returnBuffer);
    ...
}
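To make the mechanics concrete, here is a minimal standalone sketch (ours, not the BCL source) of a ConcurrentStack-based free list like the one above. Note that every Push allocates a fresh Node<object>, and each buffer parked in the list keeps its node reachable, which matches the !gcroot chains we saw:

using System;
using System.Collections.Concurrent;

class FreeListSketch
{
    static readonly ConcurrentStack<object> FreeList = new ConcurrentStack<object>();

    static object Allocate()
    {
        object buffer;
        if (!FreeList.TryPop(out buffer))  // fast path: reuse a parked buffer
            buffer = new byte[4096];       // slow path: "restock"
        return buffer;
    }

    static void Free(object buffer)
    {
        FreeList.Push(buffer);             // each Push news up a Node<object>
    }

    static void Main()
    {
        var inUse = new object[100];
        for (int i = 0; i < inUse.Length; i++)
            inUse[i] = Allocate();         // 100 buffers "under management"
        foreach (var b in inUse)
            Free(b);                       // 100 parked buffers -> 100 live nodes
        Console.WriteLine("Buffers parked (each holding a live node): " + FreeList.Count);
    }
}

So the retained Node<object> instances are simply the linked-list cells of whatever is currently sitting in the cache's free list.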
Last but not least, let's understand why the cache itself is not freed when we are done with all those HTTP requests. By adding a breakpoint on mscorlib.dll!System.Collections.Concurrent.ConcurrentStack.Push(), we see the following call stack (this could be just one of the cache's use cases, but it is representative):
mscorlib.dll!System.Collections.Concurrent.ConcurrentStack<object>.Push(object item)
System.dll!System.PinnableBufferCache.Free(object buffer)
System.dll!System.Net.HttpWebRequest.FreeWriteBuffer()
System.dll!System.Net.ConnectStream.WriteHeadersCallback(System.IAsyncResult ar)
System.dll!System.Net.LazyAsyncResult.Complete(System.IntPtr userToken)
System.dll!System.Net.ContextAwareResult.Complete(System.IntPtr userToken)
System.dll!System.Net.LazyAsyncResult.ProtectedInvokeCallback(object result, System.IntPtr userToken)
System.dll!System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(uint errorCode, uint numBytes, System.Threading.NativeOverlapped* nativeOverlapped)
mscorlib.dll!System.Threading._IOCompletionCallback.PerformIOCompletionCallback(uint errorCode, uint numBytes, System.Threading.NativeOverlapped* pOVERLAP)
At WriteHeadersCallback, we are done writing the headers, so we return the buffer to the cache. At this point the buffer is pushed back onto the free list, and therefore we allocate a new stack node. The key thing to notice is that the cache object is a static member of HttpWebRequest.
http://referencesource.microsoft.com/#System/net/System/Net/HttpWebRequest.cs
...
private static PinnableBufferCache _WriteBufferCache = new PinnableBufferCache("System.Net.HttpWebRequest", CachedWriteBufferSize);
...
// Return the buffer to the pinnable cache if it came from there.
internal void FreeWriteBuffer()
{
    if (_WriteBufferFromPinnableCache)
    {
        _WriteBufferCache.FreeBuffer(_WriteBuffer);
        _WriteBufferFromPinnableCache = false;
    }
    _WriteBufferLength = 0;
    _WriteBuffer = null;
}
...
So there we go: the cache is shared across all requests and is not released when all requests are done.

We had the same problem when we used System.Net.WebRequest for making HTTP requests. The w3wp process ranged from 4 to 8 GB, because we do not have a constant load: sometimes we get 10 requests per second and at other times 1,000. In such a scenario the buffers cannot be reused.
We changed every place that used System.Net.WebRequest to System.Net.Http.HttpClient, because it doesn't have any buffer pools.
If you make many requests through your HttpClient, make it a static variable to avoid socket leaks.
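A minimal sketch of that suggestion (the base address is a placeholder for your own endpoint):

using System;
using System.Net.Http;

static class Http
{
    // One client for the whole process: HttpClient is safe for concurrent
    // use, and reusing it avoids exhausting sockets stuck in TIME_WAIT.
    public static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:5000/")
    };
}

// Usage elsewhere: var body = await Http.Client.GetStringAsync("test/delay/5");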
I think a simpler way to analyze this problem is to use PerfView. This application can show the reference tree, so you can see the root cause of your problem.

We encountered a similar issue with the PinnableBufferCache becoming too large and leading to OutOfMemoryExceptions.
Andrew Au's analysis stopped at the point that the cache is static "and is not released when all requests are done". But the more interesting question, "Under what conditions is it released?", was still open.
According to the sources, it is trimmed on a Gen2 GC event, subject to some other conditions that are pretty tricky (e.g. not more often than every 10 msec, etc.):
https://referencesource.microsoft.com/#System/parent/parent/parent/InternalApis/NDP_Common/inc/PinnableBufferCache.cs,203
My experiments have shown that if the process survives the memory-usage spike, and the load (i.e. the number of HTTP requests) decreases, the cache volume decreases over time as well.
In our case, we found that we can greatly optimize the amount of content loaded via HTTP.
I think alternative solutions might be making more free virtual memory available to the process, or throttling the load when memory usage is too high, as sketched below.
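A hedged sketch of the throttling idea, inside some async request loop; the 500 MB threshold and the client/request variables are assumptions, not from the original:

// Back off before issuing a new request while managed memory is high,
// giving the Gen2-triggered trimming a chance to run.
while (GC.GetTotalMemory(forceFullCollection: false) > 500_000_000)
{
    await Task.Delay(100);
}
var response = await client.SendAsync(request);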

Related

What is the correct way to use USBHIDDRIVER for multiple writes?

I am writing an application that needs to write messages to a USB HID device and read responses. For this purpose, I'm using USBHIDDRIVER.dll (https://www.leitner-fischer.com/2007/08/03/hid-usb-driver-library/ )
It works fine when writing many of the message types, i.e. the short ones.
However, there is one type of message where I have to write a .hex file containing about 70,000 lines. The protocol requires each line to be written individually and sent in a packet containing other information (start/end bytes, checksum).
However I'm encountering problems with this.
I've tried something like this:
private byte[] _responseBytes;
private ManualResetEvent _readComplete;

public byte[][] WriteMessage(byte[][] message)
{
    var devResponse = new List<byte[]>();
    _readComplete = new ManualResetEvent(false);
    for (int i = 0; i < message.Length; i++)
    {
        var usbHid = new USBInterface("myvid", "mypid");
        usbHid.Connect();
        usbHid.enableUsbBufferEvent(UsbHidReadEvent);
        if (!usbHid.write(message[i]))
        {
            throw new Exception("Write Failed");
        }
        usbHid.startRead();
        if (!_readComplete.WaitOne(10000))
        {
            usbHid.stopRead();
            throw new Exception("Timeout waiting for read");
        }
        usbHid.stopRead();
        _readComplete.Reset();
        devResponse.Add(_responseBytes.ToArray());
        usbHid = null;
    }
    return devResponse.ToArray();
}

private void UsbHidReadEvent(object sender, EventArgs e)
{
    // Capture the received bytes before releasing the waiting writer.
    _responseBytes = (byte[])((ListWithEvent)sender)[0];
    if (_readComplete != null)
    {
        _readComplete.Set();
    }
}
This appears to work: in Wireshark I can see the messages going back and forth. However, as you can see, it creates an instance of the USBInterface class on every iteration. This seems very clunky, and I can see in Task Manager that it starts to eat up a lot of memory; the current run is above 1 GB and eventually falls over with an OutOfMemoryException. It is also very slow: the current run is not complete after about 15 minutes, although I've seen another application do the same job in less than a minute.
However, if I move the creation and connection of the USBInterface out of the loop as in...
var usbHid = new USBInterface("myvid", "mypid");
usbHid.Connect();
usbHid.enableUsbBufferEvent(UsbHidReadEvent);
for (int i = 0; i < message.Length; i++)
{
    if (!usbHid.write(message[i]))
    {
        throw new Exception("Write Failed");
    }
    usbHid.startRead();
    if (!_readComplete.WaitOne(10000))
    {
        usbHid.stopRead();
        throw new Exception("Timeout waiting for read");
    }
    usbHid.stopRead();
    _readComplete.Reset();
    devResponse.Add(_responseBytes.ToArray());
}
usbHid = null;
... now it only allows me to do one write! I write the data and read the response, but when it comes around the loop to write the second message, the application just hangs in the write() function and never returns. (It doesn't even time out.)
What is the correct way to do this kind of thing?
(BTW I know it's adding a lot of data to that devResponse object but this is not the source of the issue - if I remove it, it still consumes an awful lot of memory)
UPDATE
I've found that if I don't enable reading, I can do multiple writes without having to create a new USBInterface object on each iteration. This is an improvement, but I'd still like to be able to read each response. (I can see the responses are still sent down in Wireshark.)
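For reference, the working pattern the update describes would look roughly like this; it uses only the API calls from the snippets above, with reads left disabled, and is our reading of the update rather than verified library behavior:

// One USBInterface instance for all writes; no read events enabled.
var usbHid = new USBInterface("myvid", "mypid");
usbHid.Connect();
for (int i = 0; i < message.Length; i++)
{
    if (!usbHid.write(message[i]))
    {
        throw new Exception("Write Failed");
    }
}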

GRPC performance vs WCF performance

We have a legacy app that runs on top of WCF, and we are trying to move off of it and find another technology. One of the requirements is wire performance, so part of evaluating gRPC is evaluating not only how quickly it works, but also how many simultaneous clients we can run.
To that end, we're simulating a high number of calls with a relatively low amount of data passed per call. In that respect, WCF has turned out to be significantly better than gRPC, which is very unexpected. Is there possibly something wrong with the way the tests were conceived and implemented?
The server code:
public override Task<TestReply> Test(TestRequest request, ServerCallContext context)
{
    var ret = new char[request.Size];
    var a = (int)'a';
    for (var i = 0; i < request.Size; i++)
    {
        ret[i] = (char)(a + (i % 26));
    }
    return Task.FromResult(new TestReply { Message = new string(ret) });
}
The client code:
static void Main(string[] args)
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
    using var channel = GrpcChannel.ForAddress("http://remote_server:8001",
        new GrpcChannelOptions { Credentials = ChannelCredentials.Insecure });
    var client = new Greeter.GreeterClient(channel);

    string TestMethod(int i)
    {
        var request = new TestRequest { Size = i };
        return client.Test(request).Message;
    }

    var start = DateTime.Now;
    for (var i = 0; i < 15625; i++)
    {
        var val = TestMethod(10);
    }
    var end = DateTime.Now;
}
If we run a single instance of the client, it takes just under 7 seconds. If we run 64 instances simultaneously, each takes an average of 23 seconds. Part of the problem is that running 64 instances is also CPU-intensive, on both client and server. With 64 clients, the client machine sees 85-95% CPU utilization, and the server sees 70-80%.
By comparison, WCF will run a single instance of that code in 2.4 seconds, and 64 in an average of 9 seconds, and never experience significant CPU utilization on either.
Are we using gRPC wrongly? Is there something wrong with the test? What can we do to make gRPC run a little faster/leaner?
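One variable worth isolating (our assumption, not part of the original test) is the blocking one-call-at-a-time loop. Here is a sketch of the same workload issued as concurrent async calls over the shared channel; TestAsync and ResponseAsync are the members gRPC codegen produces alongside the blocking Test:

var sw = System.Diagnostics.Stopwatch.StartNew();
var tasks = new List<Task<TestReply>>();
for (var i = 0; i < 15625; i++)
{
    // Queue the call without blocking; all calls multiplex over one channel.
    tasks.Add(client.TestAsync(new TestRequest { Size = 10 }).ResponseAsync);
}
await Task.WhenAll(tasks);
sw.Stop();
Console.WriteLine($"Elapsed: {sw.Elapsed}");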

Why C# OutOfMemory exception in System.Drawing.dll?

Let's simplify the model.
class Container
{
    // other members
    public byte[] PNG;
}

class Producer
{
    public byte[] Produce(byte[] ImageOutside)
    {
        using (MemoryStream bmpStream = new MemoryStream(ImageOutside),
                            pngStream = new MemoryStream())
        {
            System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(bmpStream);
            bitmap.Save(pngStream, System.Drawing.Imaging.ImageFormat.Png);
            pngStream.Seek(0, System.IO.SeekOrigin.Begin);
            byte[] PNG = new byte[pngStream.Length];
            pngStream.Read(PNG, 0, (int)pngStream.Length);
            bitmap.Dispose();
            GC.Collect();
            GC.WaitForPendingFinalizers();
            return PNG;
        }
    }
}
The main function keeps creating a new Container, calling Produce to fill container.PNG, and then calling Queue.Enqueue(container).
Using a using clause doesn't help at all.
After this repeats about 40+ times (it varies), it throws an exception. Sometimes it's an OutOfMemoryException, and sometimes it's something like "a generic error occurred in GDI+" (I'm not sure of the exact English message; I translated it).
But if I catch the exception and simply ignore it, it can continue producing more, though not unlimitedly; it just gets a bit further.
The occupied memory shown in Task Manager is only about 600-700 MB when the first exception is thrown, and it finally stops at about 1.2 GB. I have tried this:
while (true)
{
    Bitmap b = new Bitmap(4500, 5000);
    list.Add(b);
    Invoke((MethodInvoker)delegate { textBox1.Text = list.Count.ToString(); });
}
It never throws any exception even though 99% of memory (about 11 GB) has been allocated to the program; all that happens is that the number in textBox1 stops rising.
The way to avoid this may be simply not to produce so many things, but I still want to know the internal principle and reason. Thank you for your help.
With byte[] PNG = new byte[pngStream.Length]; you allocate a large block of memory to store the image.
The following calls are useless; you have already disposed the stream:
GC.Collect();
GC.WaitForPendingFinalizers();
The memory used by the PNG array cannot be released, because there is an active reference in the function's return value.
I suggest returning a stream instead of an array of bytes (see the sketch after the sample below).
Otherwise, after you call the Produce method, remember to drop the reference to the returned array before calling it again.
sample:
while (true)
{
    byte[] b = new byte[1000];
    b = this.Produce(b);
    // Use the array as needed, but don't hold it in an external property,
    // otherwise the memory cannot be released.
    b = null; // Remove the reference. (Not strictly necessary here, since b
              // is overwritten on the next loop iteration anyway.)
    GC.Collect(); // Force the garbage collector; probably not necessary, but can be useful.
    GC.WaitForPendingFinalizers();
}
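And a minimal sketch of the stream-returning variant suggested above (an illustration; the caller becomes responsible for disposing the returned stream):

public MemoryStream ProduceStream(byte[] imageOutside)
{
    using (var bmpStream = new MemoryStream(imageOutside))
    using (var bitmap = new System.Drawing.Bitmap(bmpStream))
    {
        var pngStream = new MemoryStream();
        bitmap.Save(pngStream, System.Drawing.Imaging.ImageFormat.Png);
        pngStream.Seek(0, System.IO.SeekOrigin.Begin);
        return pngStream; // no byte[] copy; dispose this stream when done
    }
}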
The platform compilation can affect the maximum available memory:
- In a 32-bit application, you have a maximum of 2 GiB of available memory.
- In a 64-bit application, you have 2 TiB of available memory, but a single object (class) cannot exceed 2 GiB.
- In a UWP application there are other limitations depending on the device.
- Any CPU is compiled just in time when you launch the application, and can run as either 32-bit or 64-bit; it depends on the machine architecture and system configuration.

Pinging all computers in network with TPL

I'm developing an application that manages devices on the network. At a certain point in the application, I must ping (actually it's not a ping, it's an SNMP get) all computers on the network to check whether each is one of my managed devices.
My problem is that pinging all computers on the network is very slow (especially because most of them won't respond to my message and will simply time out) and has to be done asynchronously.
I tried to use the TPL to do this with the following code:
public static void FindDevices(Action<IPAddress> callback)
{
    // Returns a list of all host names with a net view command
    List<string> hosts = FindHosts();
    foreach (string host in hosts)
    {
        Task.Run(() =>
        {
            CheckDevice(host, callback);
        });
    }
}
But it runs VERY slowly, and when I paused execution and checked the Threads window, I saw that it had only one thread pinging the network and was thus running the tasks synchronously.
When I use normal threads it runs a lot faster, but Tasks were supposed to be better; I'd like to know why my Tasks aren't parallelizing well.
**EDIT**
Comments asked for the code of CheckDevice, so here it is:
private static void CheckDevice(string host, Action<IPAddress> callback)
{
    int commlength, miblength, datatype, datalength, datastart;
    string output;
    SNMP conn = new SNMP();
    IPHostEntry ihe;
    try
    {
        ihe = Dns.Resolve(host);
    }
    catch (Exception)
    {
        return;
    }
    // Send sysLocation SNMP request
    byte[] response = conn.get("get", ihe.AddressList[0], "MyDevice", "1.3.6.1.2.1.1.6.0");
    if (response[0] != 0xff)
    {
        // If there is a response, get the community name and MIB lengths
        commlength = Convert.ToInt16(response[6]);
        miblength = Convert.ToInt16(response[23 + commlength]);
        // Extract the MIB data from the SNMP response
        datatype = Convert.ToInt16(response[24 + commlength + miblength]);
        datalength = Convert.ToInt16(response[25 + commlength + miblength]);
        datastart = 26 + commlength + miblength;
        output = Encoding.ASCII.GetString(response, datastart, datalength);
        if (output.StartsWith("MyDevice"))
        {
            callback(ihe.AddressList[0]);
        }
    }
}
Your issue is that you are iterating a non-thread-safe item, the List. If you replace it with a thread-safe object like the ConcurrentBag, you should find the threads run in parallel.
I was a bit confused as to why this was only running one thread; I believe it is this block of code:
try
{
    ihe = Dns.Resolve(host);
}
catch (Exception)
{
    return;
}
I think this is throwing exceptions and returning; hence you only see one thread. This also ties into your observation that if you added a sleep it worked correctly.
Remember that when you pass a string, you're passing the reference to the string in memory, not the value. Anyway, the ConcurrentBag seems to resolve your issue (see the sketch below). This answer might also be relevant.
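A hedged sketch of that suggestion, keeping the hosts in a ConcurrentBag and holding on to the tasks so that exceptions surface instead of silently ending threads (FindHosts and CheckDevice are the methods from the question):

public static async Task FindDevicesAsync(Action<IPAddress> callback)
{
    var hosts = new ConcurrentBag<string>(FindHosts());
    var tasks = hosts.Select(host => Task.Run(() => CheckDevice(host, callback)));
    await Task.WhenAll(tasks); // faults propagate here instead of vanishing
}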

Track dead WebDriver instances during parallel task

I am seeing some dead-instance weirdness running parallelized nested-loop web stress tests using Selenium WebDriver, a simple example being, say, hitting 300 unique pages with 100 impressions each.
I'm "successfully" getting 4 - 8 WebDriver instances going using a ThreadLocal<FirefoxWebDriver> to isolate them per task thread, and MaxDegreeOfParallelism on a ParallelOptions instance to limit the threads. I'm partitioning and parallelizing the outer loop only (the collection of pages), and checking .IsValueCreated on the ThreadLocal<> container inside the beginning of each partition's "long running task" method. To facilitate cleanup later, I add each new instance to a ConcurrentDictionary keyed by thread id.
No matter what parallelizing or partitioning strategy I use, the WebDriver instances will occasionally do one of the following:
Launch but never show a URL or run an impression
Launch, run any number of impressions fine, then just sit idle at some point
When either of these happens, the parallel loop eventually seems to notice that a thread isn't doing anything and spawns a new partition. If n is the number of threads allowed, this results in having n productive threads only about 50-60% of the time.
Cleanup still works fine at the end; there may be 2n open browsers or more, but the productive and unproductive ones alike get cleaned up.
Is there a way to monitor for these useless WebDriver instances and a) scavenge them right away, plus b) get the parallel loop to replace the task segment immediately, instead of lagging behind for several minutes as it often does now?
I was having a similar problem. It turns out that WebDriver doesn't have the best method for finding open ports. As described here, it takes a system-wide lock on ports, finds an open port, and then starts the instance. This can starve the other instances you're trying to start of ports.
I got around this by specifying a random port number directly in the delegate for the ThreadLocal<IWebDriver>, like this:
var ports = new List<int>();
var rand = new Random((int)DateTime.Now.Ticks & 0x0000FFFF);
var driver = new ThreadLocal<IWebDriver>(() =>
{
    var profile = new FirefoxProfile();
    var port = rand.Next(50) + 7050;
    while (ports.Contains(port) && ports.Count != 50) port = rand.Next(50) + 7050;
    profile.Port = port;
    ports.Add(port);
    return new FirefoxDriver(profile);
});
This works pretty consistently for me, although the case where you end up using all 50 ports in the list remains unresolved.
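If exhausting the pool ever becomes a problem, one hypothetical tweak is to widen it and fail loudly instead of spinning forever:

const int basePort = 7050, poolSize = 500; // wider pool than the original 50
var port = rand.Next(poolSize) + basePort;
while (ports.Contains(port))
{
    if (ports.Count >= poolSize)
        throw new InvalidOperationException("Port pool exhausted");
    port = rand.Next(poolSize) + basePort;
}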
Since there is neither an OnReady event nor an IsReady property, I worked around it by sleeping the thread for several seconds after creating each instance. Doing that seems to give me 100% durable, functioning WebDriver instances.
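In code, that workaround looks roughly like this; the pause length is the empirical part:

var driver = new ThreadLocal<IWebDriver>(() =>
{
    var instance = new FirefoxDriver(new FirefoxProfile());
    Thread.Sleep(5000); // give the browser time to finish starting up
    return instance;
});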
Thanks to your suggestion, I've implemented IsReady functionality in my open-source project Webinator. Use that if you want, or use the code outlined below.
I tried instantiating 25 instances, and all of them were functional, so I'm pretty confident in the algorithm at this point (I leverage HtmlAgilityPack to see if elements exist, but I'll skip it for the sake of simplicity here):
public void WaitForReady(IWebDriver driver)
{
    var js = @"{ var temp=document.createElement('div'); temp.id='browserReady';" +
             @"b=document.getElementsByTagName('body')[0]; b.appendChild(temp); }";
    ((IJavaScriptExecutor)driver).ExecuteScript(js);
    WaitForSuccess(() =>
    {
        IWebElement element = null;
        try
        {
            element = driver.FindElement(By.Id("browserReady"));
        }
        catch
        {
            // element not found
        }
        return element != null;
    },
    timeoutInMilliseconds: 10000);
    js = @"{var temp=document.getElementById('browserReady');" +
         @" temp.parentNode.removeChild(temp);}";
    ((IJavaScriptExecutor)driver).ExecuteScript(js);
}
private bool WaitForSuccess(Func<bool> action, int timeoutInMilliseconds)
{
    if (action == null) return false;
    bool success;
    const int PollRate = 250;
    var maxTries = timeoutInMilliseconds / PollRate;
    int tries = 0;
    do
    {
        success = action();
        tries++;
        if (!success && tries <= maxTries)
        {
            Thread.Sleep(PollRate);
        }
    }
    while (!success && tries < maxTries);
    return success;
}
The assumption is that if the browser is responding to JavaScript calls and is finding elements, then it's probably a reliable instance, ready to be used.
