What is the best method for determining if the Cisco Webex client is running on a user's computer? Currently, I'm checking for a running process like this:
public static bool IsWebExClientRunning()
{
    // webex process name started from internet browser (could change). Just use Process Explorer to find the sub process name.
    // alternate name - CiscoWebexWebService
    Process[] pname = Process.GetProcessesByName("atmgr");
    return pname.Length > 0;
}
While this method works, Cisco could push out an update to their client that changes the process name, which would break this code since it looks for a specific process name.
The Webex client starts as a child process of an Internet browser, since it is technically a browser plugin, and it doesn't show up on its own in Windows Task Manager. Using Process Explorer, I have seen both atmgr and CiscoWebexWebService. Sometimes, depending on the host operating system (Windows XP/Windows 7), it will display only atmgr and not the child process CiscoWebexWebService belonging to atmgr. It also varies slightly based on the browser used: it runs as a browser plugin for all supported browsers, and for unsupported browsers it offers to run as a standalone application.
The process tree can vary (i.e. other browsers/operating systems), but it looks something like this:
iexplore.exe
-> atmgr.exe
-> CiscoWebexWebService.exe
Obviously, all checks must be done client side and not server side, but is there a better method for approaching this?
I spoke with a Cisco specialist and they said that my current approach should be safe for detecting whether the Webex client is running on a user's machine. They confirmed that the process name is atmgr.exe and that it should not change in the near future.
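For a little extra tolerance of a future rename, one option is to check for both of the names observed above. A minimal sketch (the process names are just the ones seen in Process Explorer, not values guaranteed by Cisco):

using System.Diagnostics;
using System.Linq;

public static class WebExDetector
{
    // Names observed via Process Explorer; not guaranteed to stay the same across Webex updates.
    private static readonly string[] KnownProcessNames = { "atmgr", "CiscoWebexWebService" };

    public static bool IsWebExClientRunning()
    {
        // True if any running process matches one of the known Webex process names.
        return KnownProcessNames.Any(name => Process.GetProcessesByName(name).Length > 0);
    }
}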
Related
I'm working on an automation process in C# that is going to remotely reboot a Windows (2008/2012/2016) server and I need to wait until that server is back online before proceeding.
I know 'back online' can be ambiguous, so for my requirements, I need the server to be back at the Ctrl-Alt-Del screen.
The reason for this is to have the server in a consistent state before proceeding. In my experience, there are several factors that could prevent the server from reaching this screen, such as installing windows updates that gets stuck in a reboot cycle or getting stuck at 'Waiting for Local Session Manager' etc.
I've spent a few days looking into this to no avail:
The server obviously starts responding to ping requests before it is available
System Boot Time occurs before the Server reaches the desired state
Any events indicating the system has booted are logged before the desired state
I can't simply poll for an essential service: when Windows is applying computer updates prior to logon, these services can already be started. Additionally, a server will sometimes reboot itself whilst installing updates at this stage, which could result in false positives.
Polling CPU activity could also produce false positives or introduce delays
Is there any way to detect that a Windows server has finished booting and is available for an interactive logon?
It sounds like you've covered most of the possible ways I know of, which makes me revert to brute-force ideas. I am curious what you're doing where you can't install a Windows service on the box (or is that just not very viable because of the number of servers?).
The first idea would just be trying to log in remotely or similar, with some way to test whether it fails or not; wait one minute, then try again. But it seems like that might cause side issues for you somehow?
My idea of a brute-force method that wouldn't affect state (see the sketch below):
Ping every 1-5 seconds
Once it starts responding,
wait 5, 10 or even 15 minutes, whilst still pinging it
If pings fail, reset that timer (the Windows-updates-restart case)
Then be pretty confident you're at the right state.
With potentially thousands of servers, I can't imagine 15 minutes each would be a big deal, especially if it is consistent enough to be able to run in larger batches
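A rough sketch of that ping-and-settle loop in C# (the intervals are simply the numbers suggested above, and the host name is a placeholder):

using System;
using System.Net.NetworkInformation;
using System.Threading;

public static class RebootWaiter
{
    // Blocks until the host has answered pings continuously for the whole 'settle' window.
    public static void WaitForStableHost(string host, TimeSpan settle)
    {
        var ping = new Ping();
        DateTime? firstSuccess = null;

        while (true)
        {
            bool alive;
            try
            {
                alive = ping.Send(host, 2000).Status == IPStatus.Success;
            }
            catch (PingException)
            {
                alive = false;
            }

            if (!alive)
            {
                firstSuccess = null;          // reset the timer (reboot-during-updates case)
            }
            else if (firstSuccess == null)
            {
                firstSuccess = DateTime.UtcNow;
            }
            else if (DateTime.UtcNow - firstSuccess >= settle)
            {
                return;                       // host has been up for the whole settle window
            }

            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}

For example, RebootWaiter.WaitForStableHost("ServerA", TimeSpan.FromMinutes(15)); would block until ServerA has answered pings continuously for 15 minutes.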
So I've been able to accomplish this using a hacky method, but it seems to work in my test environment.
Note that the el.Current.Name property will be the Ctrl-Alt-Del text, so on 2008 R2 this is 'Press CTRL-ALT-DEL to log on' and on 2012 R2 it is 'Press CTRL-ALT-DEL to sign in.'
I've built a C# console application that uses UI Automation:
using System;
using System.Windows.Automation;

namespace WorkstationLocked
{
    class Program
    {
        static void Main()
        {
            AutomationElement el = AutomationUI.FindElementFromAutomationID("LockedMessage");
            if (el != null)
            {
                Console.WriteLine(el.Current.Name);
            }
        }
    }

    class AutomationUI
    {
        public static AutomationElement FindElementFromAutomationID(string automationID)
        {
            string className = "AUTHUI.DLL: LogonUI Logon Window";
            PropertyCondition condition = new PropertyCondition(AutomationElement.ClassNameProperty, className);
            AutomationElement logonui = AutomationElement.RootElement.FindFirst(TreeScope.Children, condition);

            if (logonui != null)
            {
                condition = new PropertyCondition(AutomationElement.AutomationIdProperty, automationID);
                return logonui.FindFirst(TreeScope.Descendants, condition);
            }
            else
            {
                return null;
            }
        }
    }
}
I can then execute this console application via PsExec; however, because this needs to be launched on the Winlogon desktop, which can only be done when running under the local system account, PsExec is invoked twice. For example:
psexec.exe \\ServerA -s -d C:\PsTools\PsExec.exe -accepteula -d -x C:\Utils\WorkstationLocked.exe
This is very much a work in progress right now, as I can't get the output of the command to pass through to the calling process, so I may just populate a registry value or write to a file that can subsequently be interrogated.
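As a sketch of the file-based workaround (the paths and file name here are hypothetical, not part of the solution above):

using System;
using System.IO;

class ResultReader
{
    static void Main()
    {
        // On the remote server, inside WorkstationLocked.exe, the detected text could be
        // written to a well-known path instead of stdout, e.g.:
        // File.WriteAllText(@"C:\Utils\WorkstationLocked.txt", el != null ? el.Current.Name : string.Empty);

        // On the calling machine, after PsExec has finished, read it back over the admin share.
        string path = @"\\ServerA\c$\Utils\WorkstationLocked.txt";
        string text = File.Exists(path) ? File.ReadAllText(path) : string.Empty;
        bool atLogonScreen = text.IndexOf("CTRL-ALT-DEL", StringComparison.OrdinalIgnoreCase) >= 0;
        Console.WriteLine(atLogonScreen ? "Server is at the logon screen." : "Not there yet.");
    }
}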
I am trying to launch a process from a web page's back-end code/app pool. This process will launch an app that I built myself.
For some reason, the process only works/runs when I start it from VS2013; it never works when I launch it from IIS (7.5) itself.
I am on a Windows 7 machine (both the IIS host and the app location), and I've set up my web site to only be accessible via the internal network.
Here's the code, followed by the config / attempts to fix the issue:
protected void btn_DoIt_Click(object sender, EventArgs e)
{
    string file_text = this.txt_Urls.Text;
    if (!String.IsNullOrWhiteSpace(file_text))
        File.WriteAllText(ConfigurationManager.AppSettings["filePath"], file_text);

    ProcessStartInfo inf = new ProcessStartInfo();
    SecureString ss = GetSecureString("SomePassword");
    inf.FileName = @"........\bin\Release\SomeExecutable.exe";
    inf.Arguments = ConfigurationManager.AppSettings["filePath"];
    inf.UserName = "SomeUserName";
    inf.Password = ss;
    inf.UseShellExecute = false;

    // launch desktop app, but don't close it in case we want to see the results!
    try
    {
        Process.Start(inf);
    }
    catch (Exception ex)
    {
        this.txt_Urls.Text = ex.Message;
    }

    this.txt_Urls.Enabled = false;
    this.btn_DoIt.Enabled = false;
    this.txt_Urls.Text = "Entries received and process started. Check local machine for status update, or use refresh below.";
}
Here are the things I've tried to resolve the issue:
Made sure the executing assembly was built with AnyCPU instead of x86
Ensured that the AppPool that runs the app also runs under the same account (SomeUsername) as the ProcessStartInfo specifies.
Ensured that the specific user account has full access to the executable's folder.
Ensured that IIS_USR has full access to the executable's folder.
Restarted both the app pool and IIS itself many times while implementing these fixes
I am now at a loss as to why this simply will not launch the app. When I first looked into the event log, I saw that the app would die immediately with code 1000: KERNELBASE.dll, which led me to the AnyCPU-instead-of-x86 fix. That fixed the event log entries, but the app still doesn't start (nothing comes up in Task Manager), and I get no errors in the event log.
If someone could help me fix this problem I would really appreciate it. This would allow me to perform specific tasks on my main computer from any device on my network (phone, tablet, laptop, etc.) without having to be in front of my main PC.
UPDATE
The comment on my OP, and the ultimate answer from @Bradley Uffner, actually nailed the problem on the head: my "app" is actually a desktop application with a UI, and in order to run that application, IIS would need to be able to get access to the desktop and the UI, just as if it were a person sitting down in front of the PC. This of course is not the case, since IIS is running only under a service account, and it makes sense that it shouldn't be launching UI programs in the background. Also see his answer for one way of getting around this.
Your best bet might be to try writing this as two parts: a web site that posts commands to a text file (or database, or some other persistent storage), and a desktop application that periodically polls that file (database, etc.) for changes and executes those commands. You could write out the entire command line, including the exe path, command arguments, and switches.
This is the only way I can really think of to allow a service application like IIS to execute applications that require a desktop context with a logged-in user.
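A minimal sketch of what the desktop-side poller could look like (the command-file path and the polling interval are placeholders, not part of the answer above):

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class CommandPoller
{
    static void Main()
    {
        const string commandFile = @"C:\WebCommands\pending.txt"; // written by the web site

        while (true)
        {
            if (File.Exists(commandFile))
            {
                foreach (string line in File.ReadAllLines(commandFile))
                {
                    if (string.IsNullOrWhiteSpace(line)) continue;
                    // Each line is a full command line: exe path plus arguments and switches.
                    Process.Start("cmd.exe", "/c " + line);
                }
                File.Delete(commandFile); // mark this batch of commands as handled
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}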
You should assign a technical user with sufficiently high privileges to the running application pool. By default the application pool runs under the ApplicationPoolIdentity identity, which has very low privileges.
I have a small utility I am working on that deletes old user profiles from domain machines.
Basically, where I am stuck is looking for a better process to delete remote directories.
I know I can use System.IO and delete it from the UNC path, but I am not happy with the performance of the network deletion. It can take hours to delete a medium-sized profile, and if there are dozens or hundreds of profiles or machines, this is not feasible as a solution.
So this appears to be out of the question
The best I can find appears to be PSExec calls, but I want something managed.
Are there any .NET classes that can invoke the remote machine to complete the deletion of the directory instead of relying on the calling machine?
If your client computers don't detect this as a virus, you can use it to execute remote commands on your network computers, including folder deletions:
PsExec v1.98
Introduction
Utilities like Telnet and remote control programs like Symantec's PC
Anywhere let you execute programs on remote systems, but they can be a
pain to set up and require that you install client software on the
remote systems that you wish to access. PsExec is a light-weight
telnet-replacement that lets you execute processes on other systems,
complete with full interactivity for console applications, without
having to manually install client software. PsExec's most powerful
uses include launching interactive command-prompts on remote systems
and remote-enabling tools like IpConfig that otherwise do not have the
ability to show information about remote systems.
I have no idea how it works. In .NET code the idea would be to send RPC calls to a remote application that you control, which is easy enough provided you already have said application running on the target computers. The mechanism used would be .NET Remoting or WCF.
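To make that concrete, a bare-bones WCF contract for such an agent might look like this (the contract name is made up, and hosting and binding details are omitted); because the agent runs on each target machine, the deletion itself is a fast local operation:

using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IProfileCleaner
{
    [OperationContract]
    void DeleteDirectory(string path);
}

// Hosted in the remote application you control on each target machine.
public class ProfileCleaner : IProfileCleaner
{
    public void DeleteDirectory(string path)
    {
        if (Directory.Exists(path))
            Directory.Delete(path, recursive: true);
    }
}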
Inspired by this answer, I made some minor modifications. I can't get it to run 100% managed, as I get error code 9 (The storage control block address is invalid) when I try to run the rd command from within the code itself.
The base functionality is blindingly fast on my small test setup, but given that it overrides the "Are you sure?" prompt, it is also fairly dangerous if you specify the wrong path, so wear your hard hat as you proceed:
If you execute echo Y | rd /S c:\Temp\test in any command shell, you'll remove C:\Temp\Test and anything below it very quickly and without warning.
But executing this solution directly in the code doesn't work. So my quick fix is to place a bat file (called DeleteTest.bat) on the machine, containing only this line, and then execute the bat file via WMI.
In my small test, it deletes ~900 files totalling ~200 MB in a second or so.
Also, in addition to the answer cited I get the return code, so my full code becomes:
var processToRun = new[] { "c:\\Temp\\DeleteTest.bat" };

var connection = new ConnectionOptions();
connection.Username = "me";
connection.Password = "password";

var wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\cimv2", "MyRemoteMachine"), connection);
var wmiProcess = new ManagementClass(wmiScope, new ManagementPath("Win32_Process"), new ObjectGetOptions());

var result = wmiProcess.InvokeMethod("Create", processToRun);
Console.WriteLine("Creation of process returned: " + result);
You will obviously also need the bat file to be generated (by code or pre-generated) and copied to the destination, but that should be trivial.
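For example, the bat file could be written over the admin share just before the WMI call (assuming the connecting account can reach C$ on the target; the machine and paths are the same placeholders as above):

using System.IO;

class BatDeployer
{
    static void Main()
    {
        // Directory to remove on the remote machine (placeholder path).
        string targetPath = @"c:\Temp\test";

        // One-line script: suppress the "Are you sure?" prompt and delete recursively.
        string batContent = "echo Y | rd /S " + targetPath;

        // Copy it over the admin share before invoking it via WMI.
        File.WriteAllText(@"\\MyRemoteMachine\c$\Temp\DeleteTest.bat", batContent);
    }
}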
Background
I'm writing a web application so I can control an Ubuntu server from a web site.
One idea I had was to run the 'screen' application from Mono and redirect my input and output from there.
Running 'screen' from mono:
ProcessStartInfo info = new ProcessStartInfo("screen", "-m");
info.UseShellExecute = false;
info.RedirectStandardOutput = true;
info.RedirectStandardInput = true;
var p = new Process();
p.StartInfo = info;
p.Start();
var output = p.StandardOutput;
var input = p.StandardInput;
but running 'screen' with RedirectStandardInput gives the error:
Must be connected to a terminal
I've tried many different arguments and none seem to work with redirecting standard input.
Other ideas for controlling a server would be greatly appreciated.
I think this is the typical question where you're asking how to implement your solution to a problem, instead of asking how to solve your problem. I don't think you should do hacky things like making a web app that tunnels the user's actions to the server via a terminal.
I think you can bypass all that and, without writing a single line of code, take advantage of what the platform (Gtk+ in this case) already provides you:
You could run gnome-terminal on the server with the Broadway GDK backend. This way the gnome-terminal app will not render on the server's own display; instead it will expose a web server on the port you specify. Later, you can use any WebSockets-enabled browser to control it.
This is the easiest and least hacky solution compared to the other ones offered so far. If you are still keen on using Mono for web development you still can, and you could embed this access in an iframe or something.
(PS: If you don't want to depend on GTK being installed on the server, you could just use WebSockets in the client part of your webpage to send events between the server and the client, and the SSHNET library to send the user's input directly over the wire.)
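If you go the SSHNET route, a minimal sketch of running a command over SSH might look roughly like this (the host and credentials are placeholders; double-check the library's current API before relying on it):

using System;
using Renci.SshNet;

class SshRunner
{
    static void Main()
    {
        using (var client = new SshClient("yourserver", "user", "password"))
        {
            client.Connect();
            // Run a single command on the server and capture its output.
            var cmd = client.RunCommand("uptime");
            Console.WriteLine(cmd.Result);
            client.Disconnect();
        }
    }
}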
screen will need a terminal of some sort. It's also gigantically overkill.
You may wish to investigate the pty program from the Advanced Programming in the Unix Environment book (pty/ in the sources) to provide a pseudo-terminal that you can drive programmatically. (You'd probably run the pty program as-provided and write your driver in Mono if you're so inclined.) (The pty program will make far more sense if studied in conjunction with the book.)
The benefit to using the pty program, or functionality similar to it, is that you'd properly handle programs such as passwd that open("/dev/tty") to prompt the user for a password. If you simply redirect standard IO streams via pipe() and dup2() system calls, you won't have a controlling terminal for the programs that need one. (This is still a lot of useful programs but not enough to be a remote administration tool.)
There may be a Mono interface to the pty(7) system; if so, it may be more natural to use it than to use the C API, but the C API is what does the actual work, so it may be easier to just write directly in the native language.
A different approach to solve the same problem is shellinabox. Also interesting is this page from the anyterm website that compares different products that implement this kind of functionality.
Using shellinabox is very simple:
# ./shellinaboxd -s /:LOGIN
(this is the example given on their website) will start a web server (in your case, on the Ubuntu server). When you point your browser to http://yourserver:4200 you'll see a login screen, just like you would see when opening a session with ssh/putty/telnet/..., but in your browser.
You could provide the required remote-access functionality to the server's shell by just including an iframe pointing to that service in your application's webpage.
I'm working on a Mono application that will run on Linux, Mac, and Windows, and need the ability for apps (on a single os) to send simple string messages to each other.
Specifically, I want a Single Instance Application. If a second instance is attempted to be started, it will instead send a message to the single instance already running.
DBus is out, as I don't want to have that be an additional requirement.
Socket communication seems to be hard, as Windows seems not to allow permission to connect.
Memory Mapped Files seems not to be supported in Mono.
Named Pipes appears not to be supported in Mono.
IPC seems not to be supported on Mono.
So, is there a simple method to send string messages on a single machine to a server app that works on each os, without requiring permissions, or additional dependencies?
On my Ubuntu (10.10, Mono version 2.6.7) I've tried using WCF for interprocess communication with BasicHttpBinding, NetTcpBinding and NetNamedPipeBinding. The first two worked fine; for NetNamedPipeBinding I got the error:
Channel type IDuplexSessionChannel is not supported
when calling the ChannelFactory.CreateChannel() method.
I've also tried using Remoting (which is a legacy technology since WCF came out) with IpcChannel; the example from this msdn page started and worked without problems on my machine.
I suppose you shouldn't have problems using WCF or Remoting on Windows either; not sure about Mac though, as I don't have one around to test. Let me know if you need any code examples.
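For reference, a stripped-down sketch of the kind of self-hosted BasicHttpBinding setup described above (the contract, address and port are placeholders, not tested on every platform):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMessageSink
{
    [OperationContract]
    void Send(string message);
}

public class MessageSink : IMessageSink
{
    public void Send(string message) { Console.WriteLine("Received: " + message); }
}

class Program
{
    static void Main(string[] args)
    {
        var address = new Uri("http://localhost:8731/MessageSink");

        var host = new ServiceHost(typeof(MessageSink));
        host.AddServiceEndpoint(typeof(IMessageSink), new BasicHttpBinding(), address);
        try
        {
            // First instance: start listening for messages from later instances.
            host.Open();
            Console.WriteLine("Running as the single instance; waiting for messages.");
            Console.ReadLine();
            host.Close();
        }
        catch (AddressAlreadyInUseException)
        {
            // Another instance already owns the address: hand our arguments to it and exit.
            var factory = new ChannelFactory<IMessageSink>(new BasicHttpBinding(), new EndpointAddress(address));
            factory.CreateChannel().Send(string.Join(" ", args));
            factory.Close();
        }
    }
}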
hope this helps, regards
I wrote about this on the mono-dev mailing list. Several general-purpose inter-process messaging systems were considered, including DBus, the System.Threading.Mutex class, WCF, Remoting, Named Pipes... The conclusions were basically that Mono doesn't support the Mutex class (it works for inter-thread, not for inter-process) and there's nothing platform-agnostic available.
I have only been able to imagine three possible solutions. All have their drawbacks. Maybe there's a better solution available, or maybe there are just better solutions for specific purposes, or maybe there exist some cross-platform 3rd-party libraries you could include in your app (I don't know). But these are the best solutions I've been able to find so far:
Open or create a file in a known location, with exclusive lock. (FileShare.None). Each application tries to open the file, do its work, and close the file. If failing to open, Thread.Sleep(1) and try again. This is kind of ghetto, but it works cross-platform to provide inter-process mutex.
Sockets. The first application listens on localhost on some high-numbered port. A second application attempts to listen on that port, fails to open it (because some other process already has it), and instead sends a message to the first process, which is already listening on that port (see the sketch after this list).
If you have access to a transactional database, or message passing system (sqs, rabbitmq, etc) use it.
Of course, you could detect which platform you're on, and then use whatever works on that platform.
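A rough sketch of the socket option (the second one above), using an arbitrary loopback port:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class SingleInstance
{
    const int Port = 45678; // arbitrary high port; pick one unlikely to collide

    static void Main(string[] args)
    {
        TcpListener listener = null;
        try
        {
            listener = new TcpListener(IPAddress.Loopback, Port);
            listener.Start();
        }
        catch (SocketException)
        {
            // Port already taken: another instance is running, so forward our message to it and exit.
            using (var client = new TcpClient("127.0.0.1", Port))
            using (var writer = new StreamWriter(client.GetStream(), Encoding.UTF8))
            {
                writer.Write(string.Join(" ", args));
            }
            return;
        }

        // We are the first instance: accept messages from later instances.
        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream(), Encoding.UTF8))
            {
                Console.WriteLine("Message from another instance: " + reader.ReadToEnd());
            }
        }
    }
}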
Solved my problem with two techniques: a named mutex (so that the app can be run on the same machine by different users), and a watcher on a message file. The file is opened and written to for communication. Here is a basic solution, written in IronPython 2.6:
(mutex, locked) = System.Threading.Mutex(True, "MyApp/%s" % System.Environment.UserName, None)
if locked:
    watcher = System.IO.FileSystemWatcher()
    watcher.Path = path_to_user_dir
    watcher.Filter = "messages"
    watcher.NotifyFilter = System.IO.NotifyFilters.LastWrite
    watcher.Changed += handleMessages
    watcher.EnableRaisingEvents = True
else:
    messages = os.path.join(path_to_user_dir, "messages")
    fp = file(messages, "a")
    fp.write(command)
    fp.close()
    sys.exit(0)
For your simple IPC needs, I'd look for another solution.
This code is confirmed to work on Linux and Windows. Should work on Mac as well:
// "Process" here appears to be a simple DTO with Pid and Name properties, not System.Diagnostics.Process.
public static IList<Process> Processes()
{
    IList<Process> processes = new List<Process>();
    foreach (System.Diagnostics.Process process in System.Diagnostics.Process.GetProcesses())
    {
        Process p = new Process();
        p.Pid = process.Id;
        p.Name = process.ProcessName;
        processes.Add(p);
    }
    return processes;
}
Just iterate through the list and look for your own ProcessName.
To send a message to your application, just use MyProcess.StandardInput to write to the application's standard input. This only works assuming your application is a GUI application, though.
If you have problems with that, then you could maybe use a specialized "lock" file. Using the FileSystemWatcher class, you can check when it changes. This way the second instance could write a message to the file, and the first instance would notice that it changed and could read the contents of the file to get the message.