Before asking my question I would like to briefly describe the background of my problem: I'm developing an MS Word COM add-in in C# and I need to handle the user's text selections. I'm currently able to catch the selection event - it looks like this:
Microsoft.Office.Interop.Word._Application app;
app = (Word._Application)Application; // Application object comes on add-in's connection
app.Application.WindowSelectionChange +=
    new Word.ApplicationEvents4_WindowSelectionChangeEventHandler(selChange);

void selChange(Word.Selection selection)
{
    MessageBox.Show(selection.Text); // this is my problem, Text property is not available
}
The Text property doesn't exist, but the documentation says it does. I suspect this property is not available for MS Word 2007 - the documentation only mentions the 2003 and 2010 versions. But how can I do something like selection.getSelectedText()? I tried playing with selection.Rows, selection.Rows[0], selection.Words, and selection.Words[0] - no success.
According to the documentation, the Selection.Text property should be available for Word 2007 as well. I made a small sample implementation of your case to test it, and I cannot make it fail on Word 2010 and 2013 at least:
var wordApplication = new Application() { Visible = true };
wordApplication.Documents.Add();
wordApplication.WindowSelectionChange += delegate(Selection mySelection) { Console.WriteLine(mySelection.Text); };
So, I suggest you check that you have included the right namespaces and that the Selection interface you are using is actually the one from the Microsoft.Office.Interop.Word namespace.
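For reference, here is a minimal sketch of that wiring with the namespaces spelled out. It assumes the project references the Microsoft.Office.Interop.Word PIA and that app is the Application object you get when the add-in connects:

using System.Windows.Forms;
using Word = Microsoft.Office.Interop.Word;

void AttachSelectionHandler(Word.Application app)
{
    app.WindowSelectionChange +=
        new Word.ApplicationEvents4_WindowSelectionChangeEventHandler(OnSelectionChange);
}

void OnSelectionChange(Word.Selection sel)
{
    // When the selection is just an insertion point, Text may be a single character
    // (often "\r"), so check sel.Type if you only care about real selections.
    MessageBox.Show(sel.Text);
}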
The issue:
We have an application written in C# that uses UIAutomation to get the current text (either selected or the word behind the caret) in other applications (Word, OpenOffice, Notepad, etc.).
Everything works great on Windows 10, even up to 21H2 (last update check done today).
But we had several clients informing us that the application is closing abruptly on Windows 11.
After some debugging I've seen some System.AccessViolationException thrown when trying to use the TextPatternRange.GetText() method:
System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'
What we've tried so far:
Setting uiaccess=true in the manifest and signing the app, as mentioned here https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/350ceab8-436b-4ef1-8512-3fee4b470c0a/problem-with-manifest-and-uiaccess-set-to-true?forum=windowsgeneraldevelopmentissues => no changes (the app is in C:\Program Files\)
In addition to the above, I did try setting the level to "requireAdministrator" in the manifest; no change either
As I've seen that it may come from a bug in Windows 11 (https://forum.emclient.com/t/emclient-9-0-1317-0-up-to-9-0-1361-0-password-correction-crashes-the-app/79904), I tried to install the 22H2 Preview release, still no changes.
Reproducible example
In order to isolate the issue (and check it was not something else in our app causing the exception), I quickly made the following test (based on the accepted answer to: How to get selected text of currently focused window?)
private void btnRefresh_Click(object sender, RoutedEventArgs e)
{
    var p = Process.GetProcessesByName("notepad").FirstOrDefault();
    var root = AutomationElement.FromHandle(p.MainWindowHandle);

    var documentControl = new PropertyCondition(AutomationElement.ControlTypeProperty,
        ControlType.Document);
    var textPatternAvailable = new PropertyCondition(AutomationElement.IsTextPatternAvailableProperty, true);
    var findControl = new AndCondition(documentControl, textPatternAvailable);

    var targetDocument = root.FindFirst(TreeScope.Descendants, findControl);
    var textPattern = targetDocument.GetCurrentPattern(TextPattern.Pattern) as TextPattern;

    string text = "";
    foreach (var selection in textPattern.GetSelection())
    {
        text += selection.GetText(255);
        Console.WriteLine($"Selection: \"{selection.GetText(255)}\"");
    }

    lblFocusedProcess.Content = p.ProcessName;
    lblSelectedText.Content = text;
}
When pressing a button, this method is called and the results displayed in labels.
The method uses UIAutomation to get the notepad process and extract the selected text.
This works well on Windows 10 with the latest update, but crashes immediately on Windows 11 with the AccessViolationException.
On Windows 10 it works even without the uiaccess=true setting in the manifest.
Questions/Next steps
Does anyone know/have a clue about what can cause this?
Is Windows 11 much more restrictive regarding UIAutomation?
On my side I'll probably open an issue with Microsoft.
One track we might follow is getting an EV certificate and signing the app itself and the installer, as that would also improve the installation process by removing the big red warnings. But as this is an app distributed for free, we had not done it since it was working without it.
I'll also continue testing with the reproducible code and will update this question should anything new appear.
I posted the same question on MSDN forums and got this answer:
https://learn.microsoft.com/en-us/answers/questions/915789/uiautomation-throws-accessviolationexception-on-wi.html
Using IUIAutomation instead of System.Windows.Automation works on Windows 11.
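For anyone hitting the same thing, here is a rough sketch of the same repro rewritten against the IUIAutomation COM API. It assumes a COM reference to the UIAutomationClient type library (or the Interop.UIAutomationClient package); the numeric IDs are the standard UIA property/pattern constants:

using System;
using System.Diagnostics;
using System.Linq;
using UIAutomationClient;

static string GetNotepadSelection()
{
    var p = Process.GetProcessesByName("notepad").FirstOrDefault();
    if (p == null) return string.Empty;

    var uia = new CUIAutomation();
    IUIAutomationElement root = uia.ElementFromHandle(p.MainWindowHandle);

    // Find the first descendant that exposes the Text pattern.
    IUIAutomationCondition textPatternAvailable =
        uia.CreatePropertyCondition(30040 /* UIA_IsTextPatternAvailablePropertyId */, true);
    IUIAutomationElement doc = root.FindFirst(TreeScope.TreeScope_Descendants, textPatternAvailable);
    if (doc == null) return string.Empty;

    var textPattern = (IUIAutomationTextPattern)doc.GetCurrentPattern(10014 /* UIA_TextPatternId */);
    IUIAutomationTextRangeArray selection = textPattern.GetSelection();

    string text = "";
    for (int i = 0; i < selection.Length; i++)
        text += selection.GetElement(i).GetText(255);
    return text;
}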
So I'm marking this as solved but if anyone has another idea or knows what happens you're welcome to comment!
I'm performing some text-to-speech and I'd like to specify some special pronunciations in a lexicon file. I have run MSDN's AddLexicon example verbatim; it speaks the sentence, but it does not use the given lexicon - something appears to be broken.
Here's the provided example:
using System;
using Microsoft.Speech.Synthesis;

namespace SampleSynthesis
{
    class Program
    {
        static void Main(string[] args)
        {
            // Initialize a new instance of the SpeechSynthesizer.
            using (SpeechSynthesizer synth = new SpeechSynthesizer())
            {
                // Configure the audio output.
                synth.SetOutputToDefaultAudioDevice();

                PromptBuilder builder = new PromptBuilder();
                builder.AppendText("Gimme the whatchamacallit.");

                // Append the lexicon file.
                synth.AddLexicon(new Uri("c:\\test\\whatchamacallit.pls"), "application/pls+xml");

                // Speak the prompt and play back the output file.
                synth.Speak(builder);
            }

            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }
    }
}
and lexicon file:
<lexicon version="1.0"
      xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
        http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd"
      alphabet="x-microsoft-ups" xml:lang="en-US">
  <lexeme>
    <grapheme> whatchamacallit </grapheme>
    <phoneme> W S1 AX T CH AX M AX K S2 AA L IH T </phoneme>
  </lexeme>
</lexicon>
The console opens, the text is spoken, but the new pronunciation isn't used. I have of course saved the file to c:\test\whatchamacallit.pls as specified.
I've tried variations of the Uri and file location (e.g. @"C:\Temp\whatchamacallit.pls", @"file:///c:\test\whatchamacallit.pls"), absolute and relative paths, copying it into the build folder, etc.
I ran Process Monitor and the file is not accessed. If it were a directory/file permission problem (which it isn't), I would still see access-denied messages; instead I see no reference to the file at all, except the occasional one from my text editor. I do see the file accessed when I try File.OpenRead.
Unfortunately there are no error messages when using a garbage Uri.
On further investigation I realized this example is from Microsoft.Speech.Synthesis, whereas I'm using System.Speech.Synthesis. However, from what I can tell they are identical except for some additional info and examples, and both point to the same specification. Could this still be the problem?
I verified the project is set to use the proper .NET Framework 4.
I compared the example from MSDN to examples from the referenced spec, as well as trying those outright but it hasn't helped. Considering the file doesn't seem to be accessed I'm not surprised.
(I am able to use PromptBuilder.AppendTextWithPronunciation just fine but it's a poor alternative for my use case.)
Is the example on MSDN broken? How do I use a lexicon with SpeechSynthesizer?
After a lot of research and pitfalls I can assure you that your assumption is just plain wrong.
For some reason System.Speech.Synthesis.SpeechSynthesizer.AddLexicon() adds the lexicon to an internal list, but doesn't use it at all.
Seems like nobody tried using it before and this bug went unnoticed.
Microsoft.Speech.Synthesis.SpeechSynthesizer.AddLexicon() (which belongs to the Microsoft Speech SDK) on the other hand works as expected (it passes the lexicon on to the COM object which interprets it as advertised).
Please refer to this guide on how to install the SDK: http://msdn.microsoft.com/en-us/library/hh362873%28v=office.14%29.aspx
Notes:
People reported that the 64-bit version causes COM exceptions (because the library does not get installed correctly); I confirmed this on a 64-bit Windows 7 machine
Using the x86 version circumvents the problem
Be sure to install the runtime before the SDK
Be sure to also install a runtime language (as advised on the linked page), as the SDK does not use the default system speech engine
You can use System.Speech.Synthesis.SpeechSynthesizer.SpeakSsml() instead of a lexicon.
This code changes the pronunciation of "blue" to "yellow" and "dog" to "fish".
SpeechSynthesizer synth = new SpeechSynthesizer();
string text = "This is a blue dog";
Dictionary<string, string> phonemeDictionary = new Dictionary<string, string> { { "blue", "jelow" }, { "dog", "fyʃ" } };
foreach (var element in phonemeDictionary)
{
    text = text.Replace(element.Key, "<phoneme ph=\"" + element.Value + "\">" + element.Key + "</phoneme>");
}
text = "<speak version=\"1.0\" xmlns=\"http://www.w3.org/2001/10/synthesis\" xml:lang=\"en-US\">" + text + "</speak>";
synth.SpeakSsml(text);
I've been looking into this a little recently on Windows 10.
There are two things I've discovered with System.Speech.Synthesis.
Any voice you use must match the language declared in the lexicon file.
Inside the lexicon you have the language:
<lexicon version="1.0"
xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
alphabet="x-microsoft-ups" xml:lang="en-US">
I find that I can name my lexicon "blue.en-US.pls" and make a copy named "blue.en-GB.pls". Inside, the copy will have xml:lang="en-GB".
In the code you'd use:
string langFile = Path.Combine(_appPath, $"blue.{synth.Voice.Culture.IetfLanguageTag}.pls");
synth.AddLexicon(new Uri(langFile), "application/pls+xml");
The other thing I discovered is that it doesn't work with "Microsoft Zira Desktop - English (United States)" at all. I don't know why.
This appears to be the default voice on Windows 10.
Access and change your default voice here:
%windir%\system32\speech\SpeechUX\SAPI.cpl
Otherwise you should be able to set it via code:
var voices = synth.GetInstalledVoices();
// US: David, Zira. UK: Hazel.
var voice = voices.First(v => v.VoiceInfo.Name.Contains("David"));
synth.SelectVoice(voice.VoiceInfo.Name);
I have David (United States) and Hazel (United Kingdom), and it works fine with either of those.
This appears to be directly related to whether the voice token in the registry has the SpLexicon key value. The Microsoft Zira Desktop voice does not have this registry value.
While Microsoft David Desktop voice has the following:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_EN-US_DAVID_11.0\Attributes\SpLexicon = {0655E396-25D0-11D3-9C26-00C04F8EF87C}
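If you want to check for that value from code, here is a minimal sketch using the same HKLM path quoted above; the token name parameter is illustrative (e.g. "TTS_MS_EN-US_DAVID_11.0"):

using Microsoft.Win32;

static bool VoiceTokenHasLexicon(string voiceTokenName)
{
    string path = $@"SOFTWARE\Microsoft\Speech\Voices\Tokens\{voiceTokenName}\Attributes";
    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
    {
        // Per the observation above, lexicons only seem to work when SpLexicon is present.
        // Note: a 32-bit process may be redirected to the Wow6432Node view of this key.
        return key != null && key.GetValue("SpLexicon") != null;
    }
}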
I'd like to create an Excel Add-In, but I want it to have a special behavior.
The problem is that the Excel Add-In I developed stays within Excel, even when I run a normal Excel instance from Windows...
So what I want, to be more precise, is an Excel Add-In that will only appear in Excel's ribbon when Excel is launched by my own C#-made application.
How should I do that?
The behavior would be:
Run my C# application
My C# application starts a new Excel instance
My C# application adds a new tab (and its elements) to the ribbon of this Excel instance
My C# application adds actions bound to the tab's elements
The Excel instance is closed > remove all added components, functions, methods, ...
Close my C# application
Here's a nice tutorial for you: http://www.add-in-express.com/free-addins/net-excel-addin.php
Edit:
Have you considered just disabling the add-in and re-enabling it whenever you launch your app, with a server running in the background that disables the add-in again when Excel is closed?
Here's some unload code I found here:
private void UnloadBadAddins(bool unloadAddin)
{
    const string badAddin = "iManage Excel2000 integration (Ver 1.3)";

    foreach (Office.COMAddIn addin in this.ExcelApp.COMAddIns)
    {
        if (addin.Description.ToUpper().Contains(badAddin.ToUpper()))
        {
            if (addin.Connect == unloadAddin)
            {
                addin.Connect = !unloadAddin;
                return;
            }
        }
    }
}
I have found the following two properties on the Microsoft.Office.Interop.Excel.Application class:
var excel = new Application();
excel.AddIns
excel.AddIns2
Maybe these can help you programmatically add/remove add-ins while your application runs?
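For the COM add-in case, a minimal sketch of the idea could look like the following. The ProgId passed in is a placeholder for your own add-in's ProgId, and the project is assumed to reference both the Excel interop and the Office core interop assemblies; whether the change persists for normally launched Excel instances still depends on the add-in's LoadBehavior registry setting, so treat this as a starting point:

using System;
using Excel = Microsoft.Office.Interop.Excel;
using Office = Microsoft.Office.Core;

static Excel.Application StartExcelWithAddIn(string progId)
{
    // Start a fresh, visible Excel instance from the C# application.
    var excel = new Excel.Application { Visible = true };

    // COMAddIns lets you connect/disconnect a COM add-in for this session.
    foreach (Office.COMAddIn addin in excel.COMAddIns)
    {
        if (string.Equals(addin.ProgId, progId, StringComparison.OrdinalIgnoreCase))
            addin.Connect = true; // load the add-in (and its ribbon tab)
    }
    return excel;
}

You would call it right after your application starts, e.g. StartExcelWithAddIn("MyCompany.MyExcelAddIn"), and set Connect = false again when the Excel instance is being closed.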
I am trying to develop a util (using a system-wide hook) that works like an expander (the user selects some text, presses a hotkey, and it expands). It should work with Visual Studio.
I want to implement this using the Windows API because I want to develop an app that works globally with any application (whether you're using VS or WordPad, you should get the same functionality).
I've been able to do this successfully with Notepad, WordPad, etc. using the EM_GETSEL and EM_REPLACESEL messages. But these APIs do not work with Visual Studio or MS Word.
What APIs should I use to be able to
1. Detect what text is selected.
2. Send input to the editor.
I am programming in C#. If you must know what I am trying to do: I am trying to make a universal port of ZenCoding that works in any editor. All help will be appreciated.
For part 2 you could try Windows Input Simulator, an open source project I've just released to CodePlex that wraps Win32 SendInput. Unlike SendKeys, which just simulates text input, you can actually simulate real key strokes and complex chords to the active window.
In your case, if the user can perform the task with the Keyboard, this project will help you, otherwise you'd need to find another solution.
Hope this helps.
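For example, a minimal sketch with the instance-based API of the current InputSimulator package (the namespaces and method names below come from that package, not from the question):

using WindowsInput;
using WindowsInput.Native;

var sim = new InputSimulator();

// Send a real Ctrl+C chord to the active window (e.g. to grab the current selection).
sim.Keyboard.ModifiedKeyStroke(VirtualKeyCode.CONTROL, VirtualKeyCode.VK_C);

// Type the expansion as actual key strokes rather than SendKeys-style text.
sim.Keyboard.TextEntry("expanded snippet goes here");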
Why don't you use the System.Windows.Forms.SendKeys class to simulate keyboard input from the user?
You can use:
SendKeys.SendWait("^C"); //CTRL+C
var selectedText = Clipboard.GetText();
var newText = Replace(selectedText);
SendKEys.SendWait("^V"); //CTRL+V
You can use WPF's Automation functionality, encapsulated in these two namespaces:
System.Windows.Automation
System.Windows.Automation.Provider
As an example, this is a method for finding an automation target element (e.g. a typical win control):
public static AutomationElement FindElement(AutomationElement context, PropertyCondition[] conditions)
{
    // if no conditions, there's no search to do: just return the context, will be used as target
    if (conditions == null)
    {
        return (context);
    }

    // create the condition to find
    System.Windows.Automation.Condition condition = null;
    if (conditions.Length <= 0)
    {
        throw (new ArgumentException("No conditions specified"));
    }
    else if (conditions.Length == 1)
    {
        condition = conditions[0];
    }
    else
    {
        AndCondition ac = new AndCondition(conditions);
        condition = ac;
    }

    // find the element
    CacheRequest creq = new CacheRequest();
    creq.TreeFilter = Automation.ControlViewCondition;
    using (creq.Activate())
    {
        AutomationElement e = AutomationContext(context);
        AutomationElement target = e.FindFirst(TreeScope.Subtree, condition);
        return (target);
    }
}
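For instance, a hedged usage sketch: rootElement is assumed to be an AutomationElement you obtained elsewhere (e.g. via AutomationElement.FromHandle), and FindElement is the method above.

// Look for a Document control that exposes the Text pattern, then read its selection.
var conditions = new[]
{
    new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Document),
    new PropertyCondition(AutomationElement.IsTextPatternAvailableProperty, true)
};

AutomationElement target = FindElement(rootElement, conditions);
if (target != null)
{
    var textPattern = (TextPattern)target.GetCurrentPattern(TextPattern.Pattern);
    foreach (TextPatternRange range in textPattern.GetSelection())
        Console.WriteLine(range.GetText(-1)); // -1 = no length limit
}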
Whatever you try, be absolutely sure to try it, ASAP, with Visual Studio 2010 beta 2. The editor has largely been rewritten, and hacks that work with an earlier version should be tested again.
I know that I can display a PDF file in my C# executable (not a web app) with:
private AxAcroPDFLib.AxAcroPDF axAcroPDF1;

axAcroPDF1.LoadFile(@"somefile.pdf");
axAcroPDF1.Show();
But that is the regular PDF viewer, like in the browser. I don't want that. I want full Adobe Standard or Professional functionality in my C# application using the Adobe controls. For example, if I use the code above, it loads in the C# app and I can see the Adobe toolbar (print, save, etc.), but it is useless to me because I need things like Save, which cannot be done with the ActiveX viewer above. Specifically, you cannot save, just as you cannot within the browser.
So, I referenced the acrobat.dll and am trying to use:
Acrobat.AcroAVDocClass _acroDoc = new Acrobat.AcroAVDocClass();
Acrobat.AcroApp _myAdobe = new Acrobat.AcroApp();
Acrobat.AcroPDDoc _pdDoc = null;

_acroDoc.Open(myPath, "test");
_pdDoc = (Acrobat.AcroPDDoc)(_acroDoc.GetPDDoc());
_acroDoc.SetViewMode(2);
_myAdobe.Show();
It opens Adobe Acrobat, but it opens it outside of my C# application. I need it to open inside my C# application like the ActiveX library does. Can that be done with these libraries?
If I cannot open it in my C# application, I would like to be able to "hold" my C# app tied to it so the C# app knows when I close the Adobe app. At least that way I'd have some measure of control. This means I would hit Open and the Adobe app opens; when I close the Adobe app, my C# app is aware of this and loads the newly changed doc with the ActiveX library (because I don't need editing ability anymore, just display).
I have the full version of Adobe Acrobat installed on my computer. It is not the Reader.
Thank you for any help.
Edit:
There is an example in VB in the Adobe Acrobat SDK. I believe it is called ActiveView.
You can check out ABCpdf. I don't know if it has this capability, but we have used it for several of our apps.
Using a WebBrowser control would be an option to display the content.
iText# may help you out.
You can create PDFs, and I believe you can use it to read and modify them.
As for displaying in the app, I am not sure how to display them with iText or whether it is possible (I have not tried this yet), sorry. iText does let you convert to RTF, which may be one approach.
The best option is to write a listener that tells your calling code when the Acrobat process is no longer running. Something like the following (with tweaks for your uses) should work:
public void Open(string myPath)
{
    Acrobat.AcroAVDocClass _acroDoc = new Acrobat.AcroAVDocClass();
    Acrobat.AcroApp _myAdobe = new Acrobat.AcroApp();
    Acrobat.AcroPDDoc _pdDoc = null;

    _acroDoc.Open(myPath, "test");
    _pdDoc = (Acrobat.AcroPDDoc)(_acroDoc.GetPDDoc());
    _acroDoc.SetViewMode(2);
    _myAdobe.Show();

    NotifyAdobeClosed += new EventHandler(Monitor_NotifyAdobeClosed);
    MonitorAdobe();
}

private void Monitor_NotifyAdobeClosed(object sender, EventArgs e)
{
    NotifyAdobeClosed -= Monitor_NotifyAdobeClosed;
    // Do whatever it is you want to do when Adobe is closed.
}

private void MonitorAdobe()
{
    while (true)
    {
        var adcount = (from p in Process.GetProcesses()
                       where p.ProcessName.ToLower() == "acrobat"
                       select p).Count();
        if (adcount == 0)
        {
            OnNotifyAdobeClosed();
            break;
        }
    }
}

public event EventHandler NotifyAdobeClosed;

public void OnNotifyAdobeClosed()
{
    if (NotifyAdobeClosed != null)
        NotifyAdobeClosed(this, null);
}
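If you would rather not spin in a tight loop, a lighter-weight variant (just a sketch; it assumes Acrobat's process name is "Acrobat", matching the check above) is to subscribe to the process's Exited event:

using System;
using System.Diagnostics;
using System.Linq;

static void WatchAcrobat(Action onClosed)
{
    var acrobat = Process.GetProcessesByName("Acrobat").FirstOrDefault();
    if (acrobat == null)
    {
        onClosed(); // nothing to watch, treat as already closed
        return;
    }

    acrobat.EnableRaisingEvents = true;      // required for Exited to be raised
    acrobat.Exited += (s, e) => onClosed();  // fires once when the process ends
}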