I have this service that, when request is received, runs a powershell command and returns result. Here is the invoker class code:
public class PowerShellScript {
    public PowerShellScript() {
    }

    public Object[] Invoke( String strScriptName, NameValueCollection nvcParams ) {
        Boolean bResult = true;
        int n = 0;
        Object[] objResult = null;
        PowerShell ps = PowerShell.Create();
        String strScript = strScriptName;
        for (n = 0; n < nvcParams.Count; n++) {
            strScript += String.Format( " -{0} {1}", nvcParams.GetKey( n ), nvcParams[n] );
        }
        //ps.AddScript( @"E:\snapins\Init-profile.ps1" );
        ps.AddScript( strScript );
        Collection<PSObject> colpsOutput = ps.Invoke();
        if (colpsOutput.Count > 0)
            objResult = new Object[colpsOutput.Count];
        n = 0;
        foreach (PSObject psOutput in colpsOutput) {
            if (psOutput != null) {
                try {
                    objResult[n] = psOutput.BaseObject;
                }
                catch (Exception ex) {
                    // exceptions should be handled properly in the powershell script
                }
            }
            n++;
        }
        colpsOutput.Clear();
        ps.Dispose();
        return objResult;
    }
}
The Invoke method returns all results returned by the powershell script.
All fine and well, as long as this runs in a single thread. Since some of the powershell scripts we invoke can take up to an hour to complete, and we don't want the service to sit idle during that time, we decided to go multi-threaded.
Unfortunately the PowerShell class is not thread safe, resulting in severe memory leaks and high CPU usage. However, if I put a lock on the Invoke method, the whole reason we went multi-threaded goes down the drain.
Any ideas how to solve this?
You can use the BeginInvoke() method of the PowerShell class instead of the Invoke() you use now. That way you execute your script asynchronously and do not block the calling thread. But you have to review your whole scenario as well: your old synchronous method returns results that can be consumed right after the call, and in the new asynchronous approach this is not possible in the same way.
see
http://msdn.microsoft.com/en-us/library/system.management.automation.powershell.begininvoke
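As a rough illustration only (not the poster's code; it assumes each call creates its own PowerShell instance, since sharing one instance across threads is what causes the trouble), the asynchronous pattern looks something like this:
using System;
using System.Management.Automation;

public class AsyncPowerShellSketch
{
    public void InvokeAsync(string script)
    {
        var ps = PowerShell.Create();
        ps.AddScript(script);

        // BeginInvoke returns immediately; the pipeline runs in the background.
        ps.BeginInvoke<PSObject>(null, new PSInvocationSettings(), asyncResult =>
        {
            // EndInvoke blocks only inside this callback, not on the caller.
            foreach (PSObject result in ps.EndInvoke(asyncResult))
            {
                Console.WriteLine(result.BaseObject);
            }
            ps.Dispose();
        }, null);
    }
}
The results arrive in the callback instead of as a return value, which is the scenario change mentioned above.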
Anyway... I gave up on multithreading when executing powershell commands. I created a small program that is able to execute powershell scripts, and then each thread creates a new process running that program. I know it is a bit of an overhead, but it works.
Basically, the PowerShell classes are not thread safe; only their static members are guaranteed to be (http://msdn.microsoft.com/en-us/library/system.management.automation.powershell%28VS.85%29.aspx).
Hence, an attempt to run multiple scripts from separate threads results in memory leaks and some unexplained CPU usage. My wild guess is that something doesn't close properly. Changing from a multi-threaded to a multi-process environment should sort things out, although it does mean a major change in approach.
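A minimal sketch of that process-per-script idea (the helper class name, flags and parameters are placeholders, not taken from the original service):
using System.Diagnostics;

public class PowerShellProcessSketch
{
    // Runs one script in its own powershell.exe process and returns its console output.
    public string RunScript(string scriptPath, string arguments)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "powershell.exe",
            Arguments = string.Format("-NoProfile -File \"{0}\" {1}", scriptPath, arguments),
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var proc = Process.Start(psi))
        {
            string output = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();
            return output;
        }
    }
}
Each worker thread then gets a fully isolated runspace, at the cost of one process per script.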
While searching the internet on the above topic, I found two approaches. Both work fine, but I need to know the difference between them and which one is suitable for which occasion. Our jobs take some time, and I need a way to wait until the job finishes before the next C# line executes.
Approach One
var dbConn = new SqlConnection(myConString);
var execJob = new SqlCommand
{
    CommandType = CommandType.StoredProcedure,
    CommandText = "msdb.dbo.sp_start_job"
};
execJob.Parameters.AddWithValue("@job_name", p0);
execJob.Connection = dbConn;

using (dbConn)
{
    dbConn.Open();
    using (execJob)
    {
        execJob.ExecuteNonQuery();
        Thread.Sleep(5000);
    }
}
Approach Two
using System.Threading;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Smo.Agent;
var server = new Server(@"localhost\myinstance");
var isStopped = false;
try
{
    server.ConnectionContext.LoginSecure = true;
    server.ConnectionContext.Connect();
    var job = server.JobServer.Jobs[jobName];
    job.Start();
    Thread.Sleep(1000);
    job.Refresh();
    while (job.CurrentRunStatus == JobExecutionStatus.Executing)
    {
        Thread.Sleep(1000);
        job.Refresh();
    }
    isStopped = true;
}
finally
{
    if (server.ConnectionContext.IsOpen)
    {
        server.ConnectionContext.Disconnect();
    }
}
sp_start_job - sample 1
Your first example calls your job via the sp_start_job system stored procedure.
Note that it kicks off the job asynchronously, and the thread sleeps for an arbitrary period of time (5 seconds) before continuing regardless of the job's success or failure.
SQL Server Management Objects (SMO) - sample 2
Your second example uses (and therefore has a dependency on) the SQL Server Management Objects to achieve the same goal.
In the second case, the job also starts running asynchronously, but the subsequent loop watches the job status until it is no longer Executing. Note that the isStopped flag appears to serve no purpose, and the loop could be refactored somewhat as:
job.Start();
do
{
    Thread.Sleep(1000);
    job.Refresh();
} while (job.CurrentRunStatus == JobExecutionStatus.Executing);
You'd probably want to add a break-out of that loop after a certain period of time.
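For example, a Stopwatch-based break-out could look like this (the 30-minute limit is an arbitrary figure, not something from your examples):
var timeout = TimeSpan.FromMinutes(30);
var watch = System.Diagnostics.Stopwatch.StartNew();

job.Start();
do
{
    Thread.Sleep(1000);
    job.Refresh();
} while (job.CurrentRunStatus == JobExecutionStatus.Executing
         && watch.Elapsed < timeout);
// If watch.Elapsed >= timeout here, the job is still running and you can decide how to react.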
Other Considerations
It seems the same permissions are required by each of your examples; essentially the solution using SMO is a wrapper around sp_start_job, but provides you with (arguably) more robust code which has a clearer purpose.
Use whichever suits you best, or do some profiling and pick the most efficient if performance is a concern.
I am working on a multiplayer game, using the lidgren library for networking.
I am currently having issues with my function that reads messages sent from my server.
The function looks like this:
public class Client
{
    /* code omitted */

    public void ReadMessage()
    {
        // Read messages
        while (running)
        {
            Debug.Log("InREAD");
            // wClient is a NetClient (lidgren library)
            NetIncomingMessage msg;
            while ((msg = wClient.ReadMessage()) != null)
            {
                switch (msg.MessageType)
                {
                    case NetIncomingMessageType.Data:
                        if (msg.ReadString().Contains("Position"))
                        {
                            Debug.Log("Hej");
                            /*string temp = msg.ReadString();
                            string[] Parts = temp.Split(" ");
                            int x = int.Parse(Parts[1]);
                            int y = int.Parse(Parts[2]);
                            int z = int.Parse(Parts[3]);*/
                            // set player position to xyz values below
                        }
                        else if (msg.ReadString().Contains("Instantiate"))
                        {
                            Debug.Log("Instantiate");
                            /* string temp = msg.ReadString();
                            string[] Parts = temp.Split(" ");*/
                        }
                        break;
                }
            }
        }
    }
}
As you can see, there is a while-loop that runs while the bool running is true (and yes, I am setting it to true when declaring it).
Now, in my GUI class, where the connect button lives and so on, I have an OnApplicationQuit function which looks like this:
void OnApplicationQuit()
{
    client.running = false;
    client.Disconnect();
    Debug.Log(client.running);
    Debug.Log("Bye");
}
However, the change to running doesn't reach the thread (I believe the thread is running on a cached version of the variable?). So my question is, how do I make the while-loop stop when the program is closed? (I've tried calling the .Abort() function on the thread in OnApplicationQuit(), but it doesn't work either.)
Also, I know it's not very efficient to send strings over a network unless you need to (so no need to tell me about that!)
Just guessing (since I do not know the lidgren library): isn't it possible that your thread is stuck in the call to wClient.ReadMessage() simply because you are not receiving any messages? If wClient.ReadMessage() is a blocking call, then the resulting behaviour would be the one you described.
Furthermore: even calling Thread.Abort() won't work, because the thread is in a sleep state (waiting for something to come in over the network connection): the thread will be aborted as soon as wClient.ReadMessage() returns. MSDN here says that "If Abort is called on a managed thread while it is executing unmanaged code, a ThreadAbortException is not thrown until the thread returns to managed code", and this is exactly your situation, assuming that ReadMessage() at some point performs a system call to wait for data from the underlying socket.
You must call client.Shutdown().
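On top of that, the loop condition itself should be visible across threads. A minimal sketch of that one change, assuming the rest of the Client class stays as it is, is to mark the flag volatile:
public class Client
{
    // volatile stops the read loop from caching the field,
    // so it sees the value written by the thread running OnApplicationQuit().
    public volatile bool running = true;

    /* rest of the class unchanged */
}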
I am seeing some dead-instance weirdness running parallelized nested-loop web stress tests using Selenium WebDriver; a simple example being, say, hitting 300 unique pages with 100 impressions each.
I'm "successfully" getting 4 - 8 WebDriver instances going using a ThreadLocal<FirefoxWebDriver> to isolate them per task thread, and MaxDegreeOfParallelism on a ParallelOptions instance to limit the threads. I'm partitioning and parallelizing the outer loop only (the collection of pages), and checking .IsValueCreated on the ThreadLocal<> container inside the beginning of each partition's "long running task" method. To facilitate cleanup later, I add each new instance to a ConcurrentDictionary keyed by thread id.
No matter what parallelizing or partitioning strategy I use, the WebDriver instances will occasionally do one of the following:
Launch but never show a URL or run an impression
Launch, run any number of impressions fine, then just sit idle at some point
When either of these happen, the parallel loop eventually seems to notice that a thread isn't doing anything, and it spawns a new partition. If n is the number of threads allowed, this results in having n productive threads only about 50-60% of the time.
Cleanup still works fine at the end; there may be 2n open browsers or more, but the productive and unproductive ones alike get cleaned up.
Is there a way to monitor for these useless WebDriver instances and a) scavenge them right away, plus b) get the parallel loop to replace the task segment immediately, instead of lagging behind for several minutes as it often does now?
I was having a similar problem. It turns out that WebDriver doesn't have the best method for finding open ports. As described here, it takes a system-wide lock on ports, finds an open port, and then starts the instance. This can starve the other instances you're trying to start of ports.
I got around this by specifying a random port number directly in the delegate for the ThreadLocal<IWebDriver> like this:
var ports = new List<int>();
var rand = new Random((int)DateTime.Now.Ticks & 0x0000FFFF);
var driver = new ThreadLocal<IWebDriver>(() =>
{
    var profile = new FirefoxProfile();
    var port = rand.Next(50) + 7050;
    while (ports.Contains(port) && ports.Count != 50) port = rand.Next(50) + 7050;
    profile.Port = port;
    ports.Add(port);
    return new FirefoxDriver(profile);
});
This works pretty consistently for me, although there is an unresolved issue if you end up using all 50 ports in the list.
Since there is no OnReady event nor an IsReady property, I worked around it by sleeping the thread for several seconds after creating each instance. Doing that seems to give me 100% durable, functioning WebDriver instances.
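For what it's worth, that sleep-after-create workaround looks roughly like this (the five-second pause is an arbitrary figure):
var driver = new ThreadLocal<IWebDriver>(() =>
{
    var instance = new FirefoxDriver();
    // Give the freshly launched browser a moment to become responsive
    // before its first use; the duration is a guess, not a guarantee.
    Thread.Sleep(TimeSpan.FromSeconds(5));
    return instance;
});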
Thanks to your suggestion, I've implemented IsReady functionality in my open-source project Webinator. Use that if you want, or use the code outlined below.
I tried instantiating 25 instances, and all of them were functional, so I'm pretty confident in the algorithm at this point (I leverage HtmlAgilityPack to see if elements exist, but I'll skip it for the sake of simplicity here):
public void WaitForReady(IWebDriver driver)
{
    var js = @"{ var temp=document.createElement('div'); temp.id='browserReady';" +
             @"b=document.getElementsByTagName('body')[0]; b.appendChild(temp); }";
    ((IJavaScriptExecutor)driver).ExecuteScript(js);

    WaitForSuccess(() =>
    {
        IWebElement element = null;
        try
        {
            element = driver.FindElement(By.Id("browserReady"));
        }
        catch
        {
            // element not found
        }
        return element != null;
    },
    timeoutInMilliseconds: 10000);

    js = @"{var temp=document.getElementById('browserReady');" +
         @" temp.parentNode.removeChild(temp);}";
    ((IJavaScriptExecutor)driver).ExecuteScript(js);
}

private bool WaitForSuccess(Func<bool> action, int timeoutInMilliseconds)
{
    if (action == null) return false;

    bool success;
    const int PollRate = 250;
    var maxTries = timeoutInMilliseconds / PollRate;
    int tries = 0;
    do
    {
        success = action();
        tries++;
        if (!success && tries <= maxTries)
        {
            Thread.Sleep(PollRate);
        }
    }
    while (!success && tries < maxTries);

    return success;
}
The assumption is if the browser is responding to javascript functions and is finding elements, then it's probably a reliable instance and ready to be used.
EDIT: OK, I had a problem with one of the string concatenation functions; it has nothing to do with threads, but knowing that it couldn't be a threading problem led me to the answer. Thank you for answering.
I am making a simple tcp/ip chat program for practicing threads and tcp/ip. I was using asynchronous methods but had a problem with concurrency so I went to threads and blocking methods (not asynchronous). I have two private variables defined in the class, not static:
string amessage = string.Empty;
int MessageLength;
and a Thread
private Thread BeginRead;
Ok so I call a function called Listen ONCE when the client starts:
public virtual void Listen(int byteLength)
{
    var state = new StateObject { Buffer = new byte[byteLength] };
    BeginRead = new Thread(ReadThread);
    BeginRead.Start(state);
}
and finally the function to receive commands and process them, I'm going to shorten it because it is really long:
private void ReadThread(object objectState)
{
    var state = (StateObject)objectState;
    int byteLength = state.Buffer.Length;
    while (true)
    {
        var buffer = new byte[byteLength];
        int len = MySocket.Receive(buffer);
        if (len <= 0) return;
        string content = Encoding.ASCII.GetString(buffer, 0, len);
        amessage += cleanMessage.Substring(0, MessageLength);
        if (OnRead != null)
        {
            var e = new CommandEventArgs(amessage);
            OnRead(this, e);
        }
    }
}
Now, as I understand it, only one thread at a time will be running ReadThread; I call Receive, it blocks until I get data, and then I process it. The problem: the variable amessage changes its value between statements that do not touch or alter the variable at all. For example, at the bottom of the function, amessage will be equal to 'asdf' at the if (OnRead != null) check and to 'qwert' a statement later. As I understand it, this is indicative of another thread changing the value or running asynchronously. I only spawn one thread to do the receiving and the Receive call is blocking, so how could there be two threads in this function? And if there is only one thread, how does amessage's value change between statements that don't affect it? As a side note, sorry for spamming the site with these questions, but I'm just getting the hang of this threading story and it's making me want to sip cyanide.
Thanks in advance.
EDIT:
Here is my code that calls the Listen Method in the client:
public void ConnectClient(string ip, int port)
{
    client.Connect(ip, port);
    client.Listen(5);
}
and in the server:
private void Accept(IAsyncResult result)
{
    var client = new AbstractClient(MySocket.EndAccept(result));
    var e = new CommandEventArgs(client, null);
    Clients.Add(client);
    client.Listen(5);
    if (OnClientAdded != null)
    {
        var target = (Control)OnClientAdded.Target;
        if (target != null && target.InvokeRequired)
            target.Invoke(OnClientAdded, this, e);
        else
            OnClientAdded(this, e);
    }
    client.OnRead += OnRead;
    MySocket.BeginAccept(new AsyncCallback(Accept), null);
}
All this code is in a class called AbstractClient. The client inherits from AbstractClient, and when the server accepts a socket it creates its own local AbstractClient. In this case both modules access the functions above, but they are different instances, and I can't imagine threads from different instances interfering, especially as no variable is static.
Well, this makes no sense the way you described it, which probably means that what you think is going on is not what is really happening. Debugging threaded code is quite difficult; it is very hard to capture the state of the program at the exact moment it misbehaves.
A generic approach is to add logging to your code. Sprinkle your code with Debug.WriteLine() statements that show the current value of the variable, along with the thread's ManagedThreadId. You potentially get a lot of output, but somewhere you'll see it going wrong, or you get enough insight into how the threads are interacting to guess the source of the problem.
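Something along these lines (the message format is just an illustration):
// Log both the value and the thread that observed it.
System.Diagnostics.Debug.WriteLine(string.Format(
    "amessage='{0}' observed on thread {1}",
    amessage,
    System.Threading.Thread.CurrentThread.ManagedThreadId));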
Just adding the logging can in itself solve the problem because it alters the timing of code. Sucks when that happens.
I assume OnRead is firing an event dispatched on a thread pool thread. If any registered event handler is writing to amessage, its value could change any time you're in the reading loop.
It's still not very clear where you are getting the value assigned to amessage in the loop. Should cleanMessage read content?
I have some System.Diagnostics.Process instances to run. I'd like to call the Close method on them automatically. Apparently the "using" keyword does this for me.
Is this the way to use the using keyword?
foreach (string command in S) // command is something like "c:\a.exe"
{
    try
    {
        using (p = Process.Start(command))
        {
            // I literally put nothing in here.
        }
    }
    catch (Exception e)
    {
        // notify of process failure
    }
}
I'd like to start multiple processes to run concurrently.
using(p = Process.Start(command))
This will compile, as the Process class implements IDisposable, however you actually want to call the Close method.
Logic would have it that the Dispose method calls Close for you, and by digging into the CLR with Reflector, we can see that it does in fact do this for us. So far so good.
Again using Reflector, I looked at what the Close method does: it releases the underlying native Win32 process handle and clears some member variables. This (releasing external resources) is exactly what the IDisposable pattern is supposed to do.
However I'm not sure if this is what you want to achieve here.
Releasing the underlying handles simply says to windows 'I am no longer interested in tracking this other process'. At no point does it actually cause the other process to quit, or cause your process to wait.
If you want to force them quit, you'll need to use the p.Kill() method on the processes - however be advised it is never a good idea to kill processes as they can't clean up after themselves, and may leave behind corrupt files, and so on.
If you want to wait for them to quit on their own, you could use p.WaitForExit() - however this will only work if you're waiting for one process at a time. If you want to wait for them all concurrently, it gets tricky.
Normally you'd use WaitHandle.WaitAll for this, but as there's no way to get a WaitHandle object out of a System.Diagnostics.Process, you can't do this (seriously, wtf were Microsoft thinking?).
You could spin up a thread for each process and call WaitForExit() in those threads, but this is also the wrong way to do it.
You instead have to use p/invoke to access the native win32 WaitForMultipleObjects function.
Here's a sample (which I've tested, and actually works)
// Note: procs.Select(...) below needs a "using System.Linq;" directive.
[System.Runtime.InteropServices.DllImport("kernel32.dll")]
static extern uint WaitForMultipleObjects(uint nCount, IntPtr[] lpHandles, bool bWaitAll, uint dwMilliseconds);

static void Main(string[] args)
{
    var procs = new Process[] {
        Process.Start(@"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 2'"),
        Process.Start(@"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 3'"),
        Process.Start(@"C:\Program Files\ruby\bin\ruby.exe", "-e 'sleep 4'") };
    // all started asynchronously in the background

    var handles = procs.Select(p => p.Handle).ToArray();
    WaitForMultipleObjects((uint)handles.Length, handles, true, uint.MaxValue); // uint.MaxValue waits forever
}
For reference:
The using keyword for IDisposable objects:
using (Writer writer = new Writer())
{
    writer.Write("Hello");
}
is just compiler syntax. What it compiles down to is:
Writer writer = null;
try
{
    writer = new Writer();
    writer.Write("Hello");
}
finally
{
    if (writer != null)
    {
        ((IDisposable)writer).Dispose();
    }
}
using is a bit better since the compiler prevents you from reassigning the writer reference inside the using block.
The framework guidelines Section 9.3.1 p. 256 state:
CONSIDER providing method Close(), in addition to the Dispose(), if close is standard terminology in the area.
In your code example, the outer try-catch is unnecessary (see above).
Using probably isn't doing what you want here, since Dispose() gets called as soon as p goes out of scope. This doesn't shut down the process (tested).
Processes are independent, so unless you call p.WaitForExit() they spin off and do their own thing completely independent of your program.
Counter-intuitively, for a Process, Close() only releases resources but leaves the program running. CloseMainWindow() can work for some processes, and Kill() will work to kill any process. Both CloseMainWindow() and Kill() can throw exceptions, so be careful if you're using them in a finally block.
To finish, here's some code that waits for processes to finish but doesn't kill off the processes when an exception occurs. I'm not saying it's better than Orion Edwards's answer, just different.
List<System.Diagnostics.Process> processList = new List<System.Diagnostics.Process>();
try
{
    foreach (string command in Commands)
    {
        processList.Add(System.Diagnostics.Process.Start(command));
    }

    // loop until all spawned processes exit normally
    while (processList.Any())
    {
        System.Threading.Thread.Sleep(1000); // wait and see
        List<System.Diagnostics.Process> finished = (from o in processList
                                                     where o.HasExited
                                                     select o).ToList();
        processList = processList.Except(finished).ToList();
        foreach (var p in finished)
        {
            // could inspect exit code and exit time here;
            // note many properties are unavailable after the process exits
            p.Close();
        }
    }
}
catch (Exception ex)
{
    // log the exception
    throw;
}
finally
{
    foreach (var p in processList)
    {
        if (p != null)
        {
            //if (!p.HasExited)
            // processes will still be running,
            // but CloseMainWindow() or Kill() can throw exceptions
            p.Dispose();
        }
    }
}
I didn't bother Kill()'ing off the processes because the code starts get even uglier. Read the msdn documentation for more information.
try
{
    foreach (string command in S) // command is something like "c:\a.exe"
    {
        using (p = Process.Start(command))
        {
            // I literally put nothing in here.
        }
    }
}
catch (Exception e)
{
    // notify of process failure
}
The reason it works is that when the exception happens, the variable p falls out of scope, and thus its Dispose method is called, which closes the process handle. Additionally, I would think you'd want to spin off a thread for each command rather than wait for an executable to finish before going on to the next one, as sketched below.
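A rough sketch of that thread-per-command idea (S is carried over from the question; the error handling is only a placeholder):
foreach (string command in S)
{
    string cmd = command; // copy for the closure (matters on older C# compilers)
    new Thread(() =>
    {
        try
        {
            using (var proc = Process.Start(cmd))
            {
                proc.WaitForExit(); // each thread waits for its own process
            }
        }
        catch (Exception)
        {
            // notify of process failure
        }
    }).Start();
}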