I was playing around with Microsoft Spy++ and noticed that not only does it find the open processes, but it can also find the individual components running in each process. For example, there is an application that lets you open a window containing a textbox for an IP address and a textbox for a port. Spy++ can detect these components. Knowing that Spy++ can detect them, is there any way to find them from a separate C# application and go on to MODIFY their contents and otherwise interact with the program (such as firing a click event on a button)?
This is feasible. Try using P/Invoke (interop), AutomationElement, or AutomationPeer (for WPF applications) to automate whatever you wish to do.
You might also want to try the Inspect and UISpy tools.
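For instance, here's a rough sketch using the managed UI Automation API (System.Windows.Automation, referenced via the UIAutomationClient and UIAutomationTypes assemblies). The window title, AutomationId and button caption below are placeholders - you would discover the real values with Spy++, Inspect or UISpy:

```csharp
using System;
using System.Windows.Automation;

class UiaSketch
{
    static void Main()
    {
        // Find the target top-level window by its title (placeholder title).
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Target App Window"));
        if (window == null) return;

        // Find the IP address textbox by its AutomationId and set its text via ValuePattern.
        AutomationElement ipBox = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, "ipAddressTextBox"));
        object valuePattern;
        if (ipBox != null && ipBox.TryGetCurrentPattern(ValuePattern.Pattern, out valuePattern))
            ((ValuePattern)valuePattern).SetValue("127.0.0.1");

        // Find a button by its visible caption and "click" it via InvokePattern.
        AutomationElement connectButton = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "Connect"));
        object invokePattern;
        if (connectButton != null &&
            connectButton.TryGetCurrentPattern(InvokePattern.Pattern, out invokePattern))
            ((InvokePattern)invokePattern).Invoke();
    }
}
```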
Automation elements/peers are a non-intrusive mechanism for controlling a UI through the accessibility framework. Separately, one of the weaknesses of Windows is its lack of defence against code injection. Put simply:
As a privileged user, you can (see the sketch after this list):
- open and modify a running process image,
- make it load your OWN DLL,
- make it run your OWN thread (which can, for example, listen for commands from your process), and
- read any bits of memory you want.
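For illustration only, here is a heavily simplified sketch of the classic LoadLibrary-injection technique described above, written as C# P/Invoke. Error handling, handle cleanup and bitness checks are omitted, and the Inject method and constants are names I've made up:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

static class InjectSketch
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(uint dwDesiredAccess, bool bInheritHandle, int dwProcessId);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, ExactSpelling = true)]
    static extern IntPtr GetProcAddress(IntPtr hModule, string lpProcName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAllocEx(IntPtr hProcess, IntPtr lpAddress, uint dwSize,
        uint flAllocationType, uint flProtect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool WriteProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer,
        uint nSize, out IntPtr lpNumberOfBytesWritten);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateRemoteThread(IntPtr hProcess, IntPtr lpThreadAttributes,
        uint dwStackSize, IntPtr lpStartAddress, IntPtr lpParameter, uint dwCreationFlags,
        IntPtr lpThreadId);

    const uint PROCESS_ALL_ACCESS = 0x001F0FFF;
    const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_READWRITE = 0x04;

    // Loads a native DLL into the target process by writing its path into the
    // target's memory and starting a remote thread at LoadLibraryW.
    // Note: the injecting process and the target must have the same bitness.
    public static void Inject(int targetPid, string dllPath)
    {
        IntPtr hProcess = OpenProcess(PROCESS_ALL_ACCESS, false, targetPid);
        IntPtr loadLibrary = GetProcAddress(GetModuleHandle("kernel32.dll"), "LoadLibraryW");

        byte[] pathBytes = Encoding.Unicode.GetBytes(dllPath + "\0");
        IntPtr remoteBuffer = VirtualAllocEx(hProcess, IntPtr.Zero, (uint)pathBytes.Length,
            MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

        IntPtr written;
        WriteProcessMemory(hProcess, remoteBuffer, pathBytes, (uint)pathBytes.Length, out written);

        // The remote thread calls LoadLibraryW(remoteBuffer) inside the target process.
        CreateRemoteThread(hProcess, IntPtr.Zero, 0, loadLibrary, remoteBuffer, 0, IntPtr.Zero);
    }
}
```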
Look at Detours (http://research.microsoft.com/en-us/projects/detours/) for how to do this with managed processes. Unfortunately, Microsoft removed the inject-at-runtime features.
Also look at http://msdn.microsoft.com/en-us/magazine/cc163617.aspx for doing things in the managed world (apps like Snoop use that approach).
I noticed over the last few updates of Skype that if you start 2 or 3 instances of Skype, they appear in the Windows taskbar as separate windows that you can drag individually, as opposed to other applications, whose windows are "glued" together and drag all at once.
My question is: how can I implement this individual appearance in my application, and is it possible from C# or through the Win API?
The shell groups windows in the taskbar using each window's Application User Model ID (AppUserModelID).
By default, every window generated by a given EXE (even in different processes) shares a system-generated AppUserModelID.
You can give each process its own AppUserModelID by calling SetCurrentProcessExplicitAppUserModelID. From your description this is probably what Skype is doing, though I haven't checked.
You can give each window its own AppUserModelID by setting a different PKEY_AppUserModel_ID property on the windows.
Note that these IDs are required to have a particular format:
CompanyName.ProductName.SubProduct.VersionInformation
Raymond Chen wrote an article about this, and it's also worth reading the documentation I linked to.
I'm not aware of WinForms having explicit support for this, but you could certainly use interop to call the Win32 API directly.
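For example, a minimal WinForms sketch of the per-process approach might look like this; the AppUserModelID string is made up, and appending the process id is just one way to keep each instance in its own taskbar group:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class Program
{
    // Lives in shell32.dll and is available on Windows 7 and later.
    [DllImport("shell32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern int SetCurrentProcessExplicitAppUserModelID(string appId);

    [STAThread]
    static void Main()
    {
        // Must run before any window (and therefore any taskbar button) is created.
        // "Contoso.MyApp.Main.1" is a made-up ID following the documented format;
        // appending the process id gives every instance its own taskbar group.
        SetCurrentProcessExplicitAppUserModelID(
            "Contoso.MyApp.Main.1.Pid" + Process.GetCurrentProcess().Id);

        Application.EnableVisualStyles();
        Application.Run(new Form { Text = "Ungrouped instance" });
    }
}
```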
I've got an issue with focus management in WinRT. The issue is specific to application startup. Here's an example:
If I change the focus during startup (for instance, by starting to select some text in a browser), the runtime decides that it doesn't need to show the application. The application starts in a 'hidden mode': I don't see the UI, but I can still find it in Process Explorer.
So what I need is to make the application active in all possible cases. I tried several native functions such as ShowWindow, SetActiveWindow and SetForegroundWindow, but without any success.
I also noticed that any WinRT app is started under WWAHost.exe and its MainWindowHandle is 0. The app shows up if I use the 'Switch To' option in the Process Explorer context menu.
WinRT applications are sandboxed and have very little control over the way the OS handles them, and almost no way to affect the behaviour of other applications running on the same host. What I would suggest, then, is that you design your application so that it shows some UI as early as possible, and then asynchronously load any other resources your application may need.
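As a rough sketch, assuming a standard C#/XAML Store-app template (MainPage comes from the project, and LoadDataAsync is a placeholder for your own slow initialization):

```csharp
using System;
using System.Threading.Tasks;
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

sealed partial class App : Application
{
    protected override void OnLaunched(LaunchActivatedEventArgs args)
    {
        // Get the UI on screen first: set up the frame and activate the window
        // as early as possible so the app is visible even if focus wanders off.
        var rootFrame = Window.Current.Content as Frame;
        if (rootFrame == null)
        {
            rootFrame = new Frame();
            Window.Current.Content = rootFrame;
        }

        rootFrame.Navigate(typeof(MainPage), args.Arguments); // MainPage comes from the project template
        Window.Current.Activate();

        // Kick off the slow work after activation instead of blocking OnLaunched on it.
        var ignored = LoadDataAsync();
    }

    // Hypothetical placeholder for whatever heavy initialization the app needs.
    static async Task LoadDataAsync()
    {
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}
```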
I have an application that I am trying to automate on Windows. I need to find the location of a window that is running inside the application, and then automate a couple of mouse events on the application.
In a previous incarnation of the software I am automating, I was able to search for the process's named child windows using the GetWindowText WinAPI function from C# (in combination with GetWindowTextLength).
The software manufacturers have now updated the software and changed the way the child windows are drawn. Each window now lacks a caption and has a class name of QWidget, so I can no longer use my old strategy to find the child windows' locations. I presume the QWidget class name means the UI uses the Qt framework.
Is there any way of pulling any data from the QWidget using PInvoke that I might be able to identify my windows with?
There are a couple of problems here. One is that you can't get "unshared" data from another process. You can get at window data by P/Invoking methods like GetWindowLong, but unless you know what QWidget actually stores in that data (the other problem), there's not much you can do with it.
Another problem is that if you want to use most Qt objects in a managed application (you can do this with C++/CLI and IJW), you need to initialize a Qt application object in your own application... I'm not sure how this would impact what you want to do.
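That said, if locating the windows is all you need, you can still enumerate the child windows and identify them by class name and screen rectangle. A rough P/Invoke sketch follows; picking the right widget out of the results (by position or size, say) is up to you, since the QWidget windows have no captions:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Text;

static class QtWindowFinder
{
    delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern bool EnumChildWindows(IntPtr hwndParent, EnumWindowsProc lpEnumFunc, IntPtr lParam);

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int GetClassName(IntPtr hWnd, StringBuilder lpClassName, int nMaxCount);

    [StructLayout(LayoutKind.Sequential)]
    public struct RECT { public int Left, Top, Right, Bottom; }

    [DllImport("user32.dll")]
    static extern bool GetWindowRect(IntPtr hWnd, out RECT lpRect);

    // Returns the screen rectangles of all child windows of the given top-level
    // window whose window class is "QWidget".
    public static List<RECT> FindQWidgetChildren(IntPtr topLevelWindow)
    {
        var results = new List<RECT>();
        EnumChildWindows(topLevelWindow, (hWnd, lParam) =>
        {
            var className = new StringBuilder(256);
            GetClassName(hWnd, className, className.Capacity);
            if (className.ToString() == "QWidget")
            {
                RECT rect;
                if (GetWindowRect(hWnd, out rect))
                    results.Add(rect);
            }
            return true; // keep enumerating
        }, IntPtr.Zero);
        return results;
    }
}
```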
There's an application written in C# that doesn't have any means of remote control. The only possible usage scenario is to click its buttons with the mouse to get some result.
I'd like to create a server that would expose some common usage scenarios with pre-defined clicking logic. For example, the application has a button "do thing", and I want to make an HTTP (or other) server that clicks it when a certain URL is requested.
The application is intended to be used on Windows, though it should work fine with Wine - my primary OS is Ubuntu, but I think that running the app in a VM is a better option. To program the rest of the logic I can use Java, Python, Ruby, PHP or Node.js (I don't know C#).
What is the best approach to handle this? I would prefer not to rely on clicking at a predefined X/Y position on the screen. Ideally the solution would also allow reading data back.
You can easily automate the GUI using the UI Automation API. Check, for example, the White framework on CodePlex:
http://white.codeplex.com/
I am not sure, however, whether you will be able to easily expose such an automated application from an application server. The automation is not possible without an interactive user session with a visible desktop, which limits your server processing to one active session at a time.
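If you do go the UI Automation route, a minimal sketch of the HTTP-to-click bridge might look like the following. It uses the raw System.Windows.Automation API rather than White, and the URL, window title and button caption are placeholders:

```csharp
using System;
using System.Net;
using System.Text;
using System.Windows.Automation;

class ClickServer
{
    static void Main()
    {
        var listener = new HttpListener();
        // Made-up URL; you may need to run elevated or add a urlacl reservation
        // for the prefix if HttpListener refuses to start.
        listener.Prefixes.Add("http://localhost:8080/dothing/");
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            string status = ClickDoThingButton();

            byte[] body = Encoding.UTF8.GetBytes(status);
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }

    // Finds the target application's main window by title (placeholder title),
    // then finds the "do thing" button and invokes it through UI Automation.
    static string ClickDoThingButton()
    {
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Target Application"));
        if (window == null) return "window not found";

        AutomationElement button = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "do thing"));
        object pattern;
        if (button != null && button.TryGetCurrentPattern(InvokePattern.Pattern, out pattern))
        {
            ((InvokePattern)pattern).Invoke();
            return "clicked";
        }
        return "button not found";
    }
}
```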
I know there are hooks in Win32, but I don't need to hook the whole system, and they're low level.
What I want is something easy like the WordPress hook framework, but for WinForms, which would let me hook all events in my own application - for example, detecting every textbox Leave or every form closing.
Does this exist? Is it technically possible, or can only Microsoft do it in .NET version X.X?
Have a look at ManagedSpy. It's an application very similar to Spy++, but for managed applications. It appeared in an MSDN Magazine issue several years ago.
When you run ManagedSpy, you can attach it to a running .NET process. It will reflect on the assemblies and find all kinds of events (there's some filtering ability to only see certain events), then attach to them and output the sequence in which they fire.
There is also source code for ManagedSpy, so you can see how they did things and use those ideas to build what you need.
There is no easy way to do this that I am aware of. There are external tools that can help (such as Spy++), but I believe they operate at the Windows message level rather than the .NET event level.
If you really need this level of monitoring in your application, you'll need to attach event handlers to each object you wish to monitor. You could consider writing code that walks the control tree for each Form and subscribes to their events, running it at startup after the forms are created.
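A minimal sketch of that control-tree walk might look like this; the events hooked (TextBox.Leave and Form.FormClosing) are just the ones mentioned in the question:

```csharp
using System;
using System.Windows.Forms;

static class EventHooker
{
    // Walks the control tree and subscribes to the events of interest:
    // FormClosing on forms and Leave on textboxes. Call it once per form
    // after the form has been created (e.g. right after InitializeComponent).
    public static void HookAll(Control root)
    {
        var form = root as Form;
        if (form != null)
            form.FormClosing += (s, e) => Console.WriteLine("Closing: " + form.Text);

        var textBox = root as TextBox;
        if (textBox != null)
            textBox.Leave += (s, e) => Console.WriteLine("Leave: " + textBox.Name);

        // Pick up controls that are added to this container later on.
        root.ControlAdded += (s, e) => HookAll(e.Control);

        foreach (Control child in root.Controls)
            HookAll(child);
    }
}
```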