I am trying to write a XAML control for a piano keyboard in WPF which responds to NoteOn and NoteOff MIDI events from an external MIDI keyboard. I am using Tom Lokovic's midi-dot-net to raise the NoteOn and NoteOff events triggered by the hardware, but I need a way to get these events to raise the NoteOn and NoteOff events of my XAML Key class (derived from the WPF Button). The colour of a key should change while it is on, and the events should be subscribable so that a user of the PianoKeyboard control can play a sound when a key is pressed.
I could do this by passing every Midi.InputDevice to every single key on the keyboard, so that each key subscribes to the NoteOn and NoteOff events of every InputDevice and, in turn, raises its own NoteOn and NoteOff events. The problem with this is that the PianoKeyboard control (an ItemsControl which holds Keys) and its nested Key controls all become tightly coupled to the implementation of midi-dot-net. If I have to do this I will, but it seems like there should be a better way of doing this in WPF that moves the dependency on midi-dot-net higher up the call stack.
I have too much code to paste in its entirety here and still be readable, so here's one of the DataTemplates I'm using as my PianoKeyboard's ItemTemplate.
<DataTemplate x:Key="naturalKeyTemplate">
<!--NOTE: Background and Foreground Color assignment and changing is accounted for in the style.-->
<local:Key Grid.Column="{Binding Converter={StaticResource keyToColumnNumberConverter}}"
Grid.ColumnSpan="{Binding Converter={StaticResource keyToColumnSpanConverter}}"
Grid.RowSpan="2"
Style="{StaticResource naturalKeyStyle}"
Note="{Binding Note}"
IsHighlighted="{Binding IsHighlighted}">
<!--TODO: Find a way of raising the Key's NoteOn and NoteOff events from here.-->
</local:Key>
</DataTemplate>
Essentially what I'm asking is: given an input device that isn't supported by a WPF button's built-in behaviour (the way a mouse click is), how does one get it to trigger the button without coupling that device to a class derived from the button?
You could raise the Click event programmatically:
button1.RaiseEvent(new RoutedEventArgs(ButtonBase.ClickEvent));
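In the same way, since your Key class derives from Button, it could declare its own NoteOn and NoteOff routed events and have them raised from whatever code owns the InputDevice, keeping midi-dot-net out of the control. A minimal sketch, assuming nothing about your existing Key class beyond the parts relevant here:
using System.Windows;
using System.Windows.Controls;

public class Key : Button
{
    // Routed events so users of the PianoKeyboard control can subscribe in XAML or code.
    public static readonly RoutedEvent NoteOnEvent = EventManager.RegisterRoutedEvent(
        "NoteOn", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(Key));

    public static readonly RoutedEvent NoteOffEvent = EventManager.RegisterRoutedEvent(
        "NoteOff", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(Key));

    public event RoutedEventHandler NoteOn
    {
        add { AddHandler(NoteOnEvent, value); }
        remove { RemoveHandler(NoteOnEvent, value); }
    }

    public event RoutedEventHandler NoteOff
    {
        add { AddHandler(NoteOffEvent, value); }
        remove { RemoveHandler(NoteOffEvent, value); }
    }
}
The code that listens to the MIDI hardware (wherever you decide to put it, higher up the call stack) can then raise these just like the Click example, remembering that MIDI callbacks usually arrive on a non-UI thread:
key.Dispatcher.Invoke(() => key.RaiseEvent(new RoutedEventArgs(Key.NoteOnEvent)));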
I think this can help you call a specific method based on the event that was raised.
<local:Key>
<i:Interaction.Triggers>
<i:EventTrigger EventName="Mouse.PreviewMouseDown">
<ei:CallMethodAction MethodName="NoteOn"
TargetObject="{Binding}" />
</i:EventTrigger>
<i:EventTrigger EventName="Mouse.PreviewMouseUp">
<ei:CallMethodAction MethodName="NoteOff"
TargetObject="{Binding}" />
</i:EventTrigger>
</i:Interaction.Triggers>
</local:Key>
You will need to add these xmlns namespace declarations:
xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity"
xmlns:ei="http://schemas.microsoft.com/expression/2010/interactions"
I would have created a MIDI listener service singleton that listens to all MIDI events and updates a keyboard state repository, which in turn exposes a view model bound to your keyboard control... but I can't really see your program's structure, so it's not easy to say.
Binding the MIDI events in your XAML to the midi-dot-net wrapper doesn't seem like the LOB way to do it, because you end up linking the GUI to midi-dot-net...
By the way, are On/Off enough, or do you also need a "pressed" state while the key is held down?
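To illustrate the service idea: a rough sketch of a singleton that hides midi-dot-net from everything else (the type and member names here are made up, and the exact shape of midi-dot-net's note messages is an assumption; only InputDevice and its NoteOn/NoteOff events come from your description):
using System;

// Hypothetical singleton; the only place in the app that references midi-dot-net.
public sealed class MidiListenerService
{
    public static MidiListenerService Instance { get; } = new MidiListenerService();

    // Consumers (a keyboard-state repository, a view model, ...) subscribe to these
    // and never see midi-dot-net types.
    public event Action<int> NoteOn;   // MIDI note number
    public event Action<int> NoteOff;

    private MidiListenerService() { }

    // The device is assumed to already be opened and receiving.
    public void Attach(Midi.InputDevice device)
    {
        // Assumption: the NoteOn/NoteOff message exposes its pitch in a way
        // that can be converted to an int note number.
        device.NoteOn += msg => NoteOn?.Invoke((int)msg.Pitch);
        device.NoteOff += msg => NoteOff?.Invoke((int)msg.Pitch);
    }
}
The keyboard-state repository would subscribe to these events and flip the IsHighlighted flag of the matching key's view model, and the PianoKeyboard control would keep binding to IsHighlighted exactly as it already does.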
Related
I am starting to use MVVM, but I'm finding it difficult to replicate simple things that I used to do with events: I have a canvas and I want to get the position of the mouse click, so I created a command, and the XAML is this:
<Canvas x:Name="cnvLeft">
<i:Interaction.Triggers>
<i:EventTrigger EventName="PreviewMouseDown">
<cmd:EventToCommand Command="{Binding CanvasClick}"
PassEventArgsToCommand="True"/>
</i:EventTrigger>
</i:Interaction.Triggers>
</Canvas>
However, it passes only the mouse event arguments, which is not enough because I also need the sender. How can I fix this?
As recommended already: register a common event handler for the mouse click event.
MVVM is not concerned with code-behind. It's absolutely fine and even necessary to use code-behind.
Code-behind files are a compiler feature, i.e. a language feature (partial classes). They have nothing to do with application architecture. MVVM does not care about compilers - no design pattern does.
MVVM is also not concerned with commands (or data binding or any framework concept in general). Commanding is part of the framework's infrastructure and MVVM does not care about frameworks - no design pattern does.
MVVM does not mean to use commands. Events are usually just as good. So don't force commands. Instead of using interaction behaviors to convert an input event to a command, simply handle the event directly (of course in the view).
Controls must always be handled in the View of an MVVM application. The code-behind file of a control is a partial class. It's part of the control and therefore part of the View.
Implement the user input event handler in the hosting control's code-behind. This is where you implement the Canvas-related logic (UI logic).
If you want to encapsulate the logic, you can move it along with the Canvas to a new custom Control (or UserControl).
MainWindow.xaml
<Window>
<Canvas PreviewMouseDown="OnCanvasPreviewMouseDown" />
</Window>
MainWindow.xaml.cs
private void OnCanvasPreviewMouseDown(object sender, MouseButtonEventArgs e)
{
var canvas = sender as Canvas;
Point canvasClickPosition = e.GetPosition(canvas);
}
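If the view model does need the click position, the code-behind can then forward just the data, keeping the Canvas and the event args out of the view model. A sketch, where MainViewModel and its OnCanvasClicked method are hypothetical names:
private void OnCanvasPreviewMouseDown(object sender, MouseButtonEventArgs e)
{
    var canvas = (Canvas)sender;
    Point canvasClickPosition = e.GetPosition(canvas);

    // Forward plain data to the view model; it never sees the Canvas or the event args.
    if (DataContext is MainViewModel viewModel)
    {
        viewModel.OnCanvasClicked(canvasClickPosition.X, canvasClickPosition.Y);
    }
}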
I want to catch shortcuts at the window level with KeyBindings and then raise an event that all UserControls can somehow subscribe to in order to get notified when a shortcut has been issued.
I tried to do this on the window:
<Window.InputBindings>
<KeyBinding Key="M"
Command="{x:Static someNamespace:RoutedCommands.ShortcutSingleKeyM}" />
</Window.InputBindings>
And then add CommandBindings in the usercontrol to "catch" the command:
<UserControl.CommandBindings>
<CommandBinding Command="{x:Static someNamespace:RoutedCommands.ShortcutSingleKeyM}" Executed="OnShortcutSingleKeyM"></CommandBinding>
</UserControl.CommandBindings>
The OnShortcutSingleKeyM method in the UserControl is not getting hit. After some reading, I now understand that RoutedCommands bubble up the tree, and that might be the reason this approach didn't work.
I need the UserControl to be able to listen to "OnShortcut" events coming from the window. I'm currently implementing it this way:
Add an attached property to each UserControl that wants to listen to such events, and have the container pass in a higher-level delegate of some kind to notify the UserControl.
Does this make sense? I get the feeling that I'm overthinking this; it should be simpler to achieve.
Propagating from the Window down the tree to the UserControls isn't going to work (at least not without major plumbing). I implemented it this way (see the sketch after the list):
View listens to input (via both KeyBindings and KeyDown events).
View owns an object (ShortcutMcShortcutFace) that allows UserControls to subscribe to an OnShortcutEvent kind of thing.
UserControls expose DependencyProperties of type ShortcutMcShortcutFace. The view passes its ShortcutMcShortcutFace instance to them. They subscribe to OnShortcutEvent.
UserControls handle the OnShortcutEvent (args include a shortcut identifier) whichever way they want.
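A rough sketch of what that looks like in code. ShortcutMcShortcutFace, OnShortcutEvent and the DependencyProperty come from the list above; MyUserControl, ShortcutSource and the "M" identifier are just placeholder names:
using System;
using System.Windows;
using System.Windows.Controls;

// Owned by the view; handed to each interested UserControl.
public class ShortcutMcShortcutFace
{
    public event EventHandler<ShortcutEventArgs> OnShortcutEvent;

    // Called by the view whenever a KeyBinding or KeyDown is recognised as a shortcut.
    public void Raise(string shortcutId) =>
        OnShortcutEvent?.Invoke(this, new ShortcutEventArgs(shortcutId));
}

public class ShortcutEventArgs : EventArgs
{
    public ShortcutEventArgs(string shortcutId) => ShortcutId = shortcutId;
    public string ShortcutId { get; }
}

// In the UserControl: a DependencyProperty the view binds its instance to.
public partial class MyUserControl : UserControl
{
    public static readonly DependencyProperty ShortcutSourceProperty =
        DependencyProperty.Register(
            nameof(ShortcutSource), typeof(ShortcutMcShortcutFace), typeof(MyUserControl),
            new PropertyMetadata(null, OnShortcutSourceChanged));

    public ShortcutMcShortcutFace ShortcutSource
    {
        get => (ShortcutMcShortcutFace)GetValue(ShortcutSourceProperty);
        set => SetValue(ShortcutSourceProperty, value);
    }

    private static void OnShortcutSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var control = (MyUserControl)d;
        if (e.OldValue is ShortcutMcShortcutFace oldSource)
            oldSource.OnShortcutEvent -= control.HandleShortcut;
        if (e.NewValue is ShortcutMcShortcutFace newSource)
            newSource.OnShortcutEvent += control.HandleShortcut;
    }

    private void HandleShortcut(object sender, ShortcutEventArgs e)
    {
        if (e.ShortcutId == "M")
        {
            // React to the shortcut in whichever way this control wants.
        }
    }
}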
I have an editable ComboBox and a TextBox.
<TextBox x:Name="textBox" HorizontalAlignment="Left" Height="23" Margin="86,149,0,0" TextWrapping="Wrap" Text="TextBox" VerticalAlignment="Top" Width="120"/>
<ComboBox x:Name="comboBox" HorizontalAlignment="Left" VerticalAlignment="Top" Width="120" Margin="282,150,0,0" IsEditable="True" PreviewMouseDown="ComboBox_PreviewMouseDown"/>
I don't understand why ComboBox_PreviewMouseDown does not fire when the focus is on the TextBox and I click on the ComboBox. The click just highlights the text in the ComboBox and sets the focus. Clicking in the ComboBox when it already has the focus does fire PreviewMouseDown.
Is the ComboBox itself swallowing that first click? Why is PreviewMouseDown on an unfocused ComboBox not working?
When ComboBox.IsEditable is set to True, the ComboBox internally sets the focus (and keyboard focus) to the edit TextBox to make it instantly available for text input. This makes total sense as the intention when clicking the edit TextBox is always to enter or edit some text. Otherwise, the user would have to click the TextBox twice to make it receive focus for text input (keyboard focus).
So, to prevent focus stealing, the author marked the MouseDown event as handled, i.e. RoutedEventArgs.Handled is set to true. (This is the reason why most non-preview events are marked handled by most controls.)
The author also wanted to prevent the caret from moving when the edit TextBox is clicked for the first time (to give it focus): the PreviewMouseDown event's RoutedEventArgs.Handled is only set to true if the edit TextBox has no keyboard focus and the drop-down panel is closed. (That's why the second click into the TextBox passes through and can be handled by an added event handler.)
To achieve the behavior you expect, you have to handle the UIElement.PreviewGotKeyboardFocus event or the attached Keyboard.PreviewGotKeyboardFocus event on the ComboBox.
Alternatively register the event handler using the UIElement.AddHandler method and set the handledEventsToo parameter to true:
this.MyComboBox.AddHandler(
UIElement.PreviewMouseDownEvent,
new RoutedEventHandler(MyComboBox_PreviewMouseDown),
true);
I ran into this same issue myself. A simple and effective workaround is to wrap your ComboBox in a lightweight ContentPresenter, then attach your PreviewMouseDown handler to that, like so:
<ContentPresenter x:Name="MyComboBoxWrapper"
PreviewMouseDown="MyComboBoxWrapper_PreviewMouseDown">
<ContentPresenter.Content>
<ComboBox x:Name="MyComboBox" />
</ContentPresenter.Content>
</ContentPresenter>
Additionally, since this control gets the PreviewMouseDown event before the ComboBox does, you can not only use it to pre-process events before the ComboBox even sees them, but also cut the ComboBox off entirely by setting the event args' Handled property to true.
Works like a charm! No subclassing or other tricks needed and it only requires a lightweight control in the tree!
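For reference, the wrapper's handler is a completely ordinary PreviewMouseDown handler; something like this (the body is just an illustration):
private void MyComboBoxWrapper_PreviewMouseDown(object sender, MouseButtonEventArgs e)
{
    // Runs before the ComboBox sees the event, even when the ComboBox is not focused.
    System.Diagnostics.Debug.WriteLine($"Mouse down at {e.GetPosition(MyComboBox)}");

    // Uncommenting the next line would stop the ComboBox from ever seeing the click.
    // e.Handled = true;
}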
Notes
As some may have considered, you could technically attach the PreviewMouseDown handler to any ancestor of your ComboBox, but then you may have to add logic in that handler to determine whether you're actually clicking on the ComboBox or on something else.
By using an explicit ContentPresenter (an incredibly lightweight element with no rendering logic of its own; it simply hosts other elements), you now have a dedicated PreviewMouseDown handler just for this control. Plus, it makes things more portable should you need to move the ComboBox around, since the two elements can travel together.
I'm currently trying to implement an ImageViewer that can be moved and zoomed with touch input.
I already implemented these features in the code-behind in a recent project, but I'm struggling to do so in a view model with MVVM.
The problem is that, for my code to work, I have to know how many touch inputs are recognized at the same time.
In my Code-Behind I used:
canvas.TouchesCaptured.Count()
The ViewModel shouldn't know about any controls of the View, so passing the Canvas as a command parameter is not the way to go.
Besides the canvas, I need the TouchEventArgs of the triggered touch event to determine the position of the touch on the canvas.
Using Prism I was able to get the TouchEventArgs into the ViewModel.
<i:Interaction.Triggers>
<i:EventTrigger EventName="TouchDown">
<prism:InvokeCommandAction Command="{Binding TouchDownCommand}" />
</i:EventTrigger>
</i:Interaction.Triggers>
For clarification: prism:InvokeCommandAction automatically passes the EventArgs as the CommandParameter.
To determine the position of the TouchEvent on the Canvas I need the canvas and the TouchEvent.
In my Code-Behind it looked like that:
startingPoint = e.GetTouchPoint(canvas);
Does anyone have an idea how I can solve this problem without violating the MVVM pattern?
You could try writing a Blend behavior that encapsulates the Canvas event handling and exposes commands (for the ManipulationDelta event in particular). You could even add a property to the behavior that exposes the TouchesCaptured count (updated during the ManipulationDelta event).
e.g.
<Canvas>
<i:Interaction.Behaviors>
<bhv:CanvasBehavior ManipulationDeltaCommand="{Binding MyViewModelCommand}" TouchPointCount="{Binding MyViewModelTouchPointCount}" />
</i:Interaction.Behaviors>
</Canvas>
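A trimmed-down sketch of such a behavior, assuming the System.Windows.Interactivity Behavior<T> base class and the property names used in the XAML above:
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Interactivity;

public class CanvasBehavior : Behavior<Canvas>
{
    public static readonly DependencyProperty ManipulationDeltaCommandProperty =
        DependencyProperty.Register(nameof(ManipulationDeltaCommand), typeof(ICommand),
            typeof(CanvasBehavior));

    public static readonly DependencyProperty TouchPointCountProperty =
        DependencyProperty.Register(nameof(TouchPointCount), typeof(int),
            typeof(CanvasBehavior), new FrameworkPropertyMetadata(0,
                FrameworkPropertyMetadataOptions.BindsTwoWayByDefault));

    public ICommand ManipulationDeltaCommand
    {
        get => (ICommand)GetValue(ManipulationDeltaCommandProperty);
        set => SetValue(ManipulationDeltaCommandProperty, value);
    }

    public int TouchPointCount
    {
        get => (int)GetValue(TouchPointCountProperty);
        set => SetValue(TouchPointCountProperty, value);
    }

    protected override void OnAttached()
    {
        base.OnAttached();
        AssociatedObject.IsManipulationEnabled = true;
        AssociatedObject.ManipulationDelta += OnManipulationDelta;
    }

    protected override void OnDetaching()
    {
        AssociatedObject.ManipulationDelta -= OnManipulationDelta;
        base.OnDetaching();
    }

    private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // Publish how many touch devices the canvas currently has captured,
        // then hand the event args to the view model's command.
        TouchPointCount = AssociatedObject.TouchesCaptured.Count();
        if (ManipulationDeltaCommand?.CanExecute(e) == true)
            ManipulationDeltaCommand.Execute(e);
    }
}
The view model then only works with plain data (the count and the delta values) and never references the Canvas.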
Mouse gestures can be bound to commands using the MouseBinding InputBinding,
for example:
<Grid.InputBindings>
<MouseBinding Command="{Binding MyCommand}" Gesture="LeftClick"/>
</Grid.InputBindings>
In that example, the LeftClick gesture is used. What is the full list of gesture strings? I'm looking for a left mouse button down gesture, if it exists.
That is a MouseAction value. The possible values are None, LeftClick, RightClick, MiddleClick, WheelClick, LeftDoubleClick, RightDoubleClick and MiddleDoubleClick. Mouse down is not a built-in gesture; only the various clicks and double-clicks are in the enumeration.
It is possible to make your own input bindings by creating classes that extend InputBinding and InputGesture. You can reference the implementation of MouseBinding for an example. Alternatively, you can find a different way to accomplish whatever it is you are trying to do.
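For example, a gesture that matches the raw left-button-down could look roughly like this; it is a custom class, not something built into WPF, so treat it as a sketch:
using System.Windows.Input;

public class LeftButtonDownGesture : MouseGesture
{
    public override bool Matches(object targetElement, InputEventArgs inputEventArgs)
    {
        // Match the bubbling MouseDown event for the left button rather than a completed click.
        return inputEventArgs is MouseButtonEventArgs args
            && args.RoutedEvent == Mouse.MouseDownEvent
            && args.ChangedButton == MouseButton.Left
            && args.ButtonState == MouseButtonState.Pressed;
    }
}
It can then be plugged into the binding with property element syntax:
<Grid.InputBindings>
    <MouseBinding Command="{Binding MyCommand}">
        <MouseBinding.Gesture>
            <local:LeftButtonDownGesture />
        </MouseBinding.Gesture>
    </MouseBinding>
</Grid.InputBindings>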
I'm looking for a left mouse button down gesture, if it exists.
That would be the LeftClick mouse action that you are currently using.
If you want to invoke a command when the MouseLeftButtonDown event occurs, you could do this using an interaction trigger:
<i:Interaction.Triggers>
<i:EventTrigger EventName="MouseLeftButtonDown" >
<i:InvokeCommandAction Command="{Binding MyCommand}"/>
</i:EventTrigger>
</i:Interaction.Triggers>
Please refer to the following blog post for more information about this.
Handling events in an MVVM WPF application: https://blog.magnusmontin.net/2013/06/30/handling-events-in-an-mvvm-wpf-application/
The EventTrigger class is included in the Expression Blend SDK, which you can download from here: http://www.microsoft.com/en-us/download/details.aspx?id=10801.