How can I fix a race condition in this "reset after time" observable? - C#

Input is an observable that produces a value each time a problem occurs.
As output I want an observable that produces a value only if problems keep occurring for a longer time. In other words, I want to "reset" the output observable (produce no value) if the last problem is outdated.
My solution:
// first get an observable producing statusOk values (true = ok, false = not ok)
var okStatusObservable = input.Select(_ => true).Throttle(longerTime)
    .Merge(input.Select(_ => false));
// we only want an event if statusOk == false for a longer time
var outputObservable = okStatusObservable
    .DistinctUntilChanged()   // only changes
    .Throttle(evenLongerTime) // wait for stable status
    .Where(_ => _ == false);  // only interested in bad status
I think the okStatusObservable might contain a race condition: if input receives events at intervals of exactly longerTime, and the second merge part (Select / false) produces its boolean before the first part (Select + Throttle / true), then okStatus would be true 99.9% of the time when the opposite would be correct.
(PS: to have a status value from the beginning, we could add .StartWith(true), but that doesn't matter regarding the race condition.)

A cleaner way to do the first observable is as follows:
var okStatusObservable2 = input
    .Select(_ => Observable.Return(true).Delay(longerTime).StartWith(false))
    .Switch();
Explanation: for each input message, produce an observable that starts with a false and, after longerTime, produces a true. The Switch means that whenever a new inner observable arrives, we just switch to it, which cancels the previous one's pending all-clear true.
For your second observable: unless the throttle time differs between the two observables (i.e. longerTime vs. evenLongerTime), every first false in the first observable will result in a false in the second. Is that your intention?
Also, your Where is messed up (it should be .Where(b => !b) or .Where(b => b == false)); .Where(_ => false) will always evaluate to false, returning nothing.
Other than that, I think your solution is sound.
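Putting the two pieces together, a minimal sketch of the full pipeline (assuming input, longerTime and evenLongerTime are defined as in the question):
// sketch only: input, longerTime and evenLongerTime come from the question
var okStatusObservable2 = input
    .Select(_ => Observable.Return(true).Delay(longerTime).StartWith(false))
    .Switch()
    .StartWith(true);         // optional: treat the status as "ok" until the first problem arrives
var outputObservable = okStatusObservable2
    .DistinctUntilChanged()   // only react to actual status changes
    .Throttle(evenLongerTime) // wait until the status has been stable for a while
    .Where(ok => !ok);        // only surface a sustained "not ok" status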

DefaultIfEmpty() + select new MyStronglyTypeObj()

My aim is to return results via a left join in LINQ. io.IsDefault can be null, but in that case I still want to return a MyStronglyTypeObj with the rest of the data.
context.Image.Where(i => i.IsActive == true) returns 3 rows. One of them has IsDefault null, because its ImageId (io => io.ImageId == i.ImageId) doesn't exist in ImageObject.
var test2 = (from i in context.Image.Where(i => i.IsActive == true)
             from io in ImageObject.Where(io => io.ImageId == i.ImageId).DefaultIfEmpty()
             select new MyStronglyTypeObj()
             {
                 Alt = i.Alt, Caption = i.Caption, DisplayName = i.DisplayName, Extension = i.Extension,
                 IsDefault = io.IsDefault, Height = i.Height, Width = i.Width, Name = i.Name
             });
// returns 2 imgs - the 3rd one without IsDefault (IsDefault = null) wasn't added to the collection.
var test = (from i in context.Image.Where(i => i.IsActive == true)
            from io in ImageObject.Where(io => io.ImageId == i.ImageId).DefaultIfEmpty()
            select i); // returns 3 imgs
Is there something obvious that I don't see? Perhaps I totally misunderstood the .DefaultIfEmpty() function.
Please help.
DefaultIfEmpty() only affects empty collections, and causes such a collection to return a single element with the value default(T) (where T is the element type of the collection).
For example, using strings (note default(string) == null):
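Something along these lines:
var empty = new List<string>();
var nonEmpty = new List<string> { "a", "b" };
var fromEmpty = empty.DefaultIfEmpty().ToList();       // one element: null, i.e. default(string)
var fromNonEmpty = nonEmpty.DefaultIfEmpty().ToList(); // unchanged: "a", "b"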
So based on the code you provided:
DefaultIfEmpty() is not a factor.
The only other difference is the select statement, which doesn't really make sense as the cause of the discrepancy.
I'm guessing i is type MyStronglyTypeObj (based on properties matching)? I suspect there's another factor when you're running this code that you're not taking into account.
Try putting a breakpoint on that line, and viewing the results in the debugger.
Also, because LINQ uses deferred execution, this query doesn't actually "run" until it gets consumed, and depending on when that happens, the source data may have changed (which easily causes timing bugs if you're modifying the source data somewhere else). Even more frustratingly, the bug can disappear when you view the results in a debugger, because that causes the query to execute sooner. You can avoid this by adding .ToList() at the end of the line, which forces the results to be evaluated immediately.
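For instance, a sketch of the same first query, just materialized on the spot (only a couple of the properties are shown here):
var test2 = (from i in context.Image.Where(i => i.IsActive == true)
             from io in ImageObject.Where(io => io.ImageId == i.ImageId).DefaultIfEmpty()
             select new MyStronglyTypeObj() { Alt = i.Alt, IsDefault = io.IsDefault })
            .ToList(); // the query runs here; later changes to the source no longer affect the results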

Rx sequential groupBy (partition stream)

I have a stream of events:
event.EventTime: 1s-----2s----3s----4s----5s----6s---
stream: A-B-C--D-----------------E-F---G-H--
An event looks like this:
public class Event
{
    public DateTime EventTime { get; set; }
    public int Value { get; set; }
}
EventTime should correspond to a time at which the event arrives, but there can be a small delay. The events are not supposed to arrive out-of-order, though.
Now, when I specify a grouping interval, say 1 second, I expect the stream to be grouped like this
1s-------2s----3s----4s----5s-----6s---
[A-B-C]--[D]---[ ]---[ ]---[E-F]--[G-H]
(notice the empty intervals)
I have tried using Buffer, but sadly I need to partition by EventTime, not System.DateTime.Now. Even with boundaries, I'd need some kind of look-ahead, since when I use Buffer(2,1) as a boundary and compare [0] and [1], even though [1] successfully breaks the buffer, it still gets inserted into the old buffer instead of the new one. I also tried GroupBy, but that yielded groups only after the input stream finished, which should never happen. Then I tried this:
var intervalStart = GetIntervalStartLocal(DateTime.Now) + intervalLength;
var intervals = Observable.Timer(intervalStart, intervalLength);
var eventsAsObservables = intervals.GroupJoin<long, Event, long, Event, (DateTime, IObservable<Event>)>(
    data,
    _ => Observable.Never<long>(),
    _ => Observable.Never<Event>(),
    (intervalNumber, events) => {
        var currentIntervalStart = intervalStart + intervalNumber*intervalLength;
        var eventsInInterval = events
            .SkipWhile(e => GetIntervalStartLocal(e.EventTime) < currentIntervalStart)
            .TakeWhile(e => GetIntervalStartLocal(e.EventTime) == currentIntervalStart);
        return (currentIntervalStart, eventsInInterval);
    });
var eventsForIntervalsAsObservables = eventsAsObservables.SelectMany(g => {
    var lists = g.Item2.Aggregate(new List<Event>(), (es, e) => { es.Add(e); return es; });
    return lists.Select(l => (intervalStart: g.Item1, events: l));
});
var task = eventsForIntervalsAsObservables.ForEachAsync(es => System.Console.WriteLine(
    $"=[{es.intervalStart.TimeOfDay}]= " + string.Join("; ", es.events.Select(e => e.EventTime.TimeOfDay))));
await task;
I was thinking that I'd use GroupJoin, which joins based on values. So first, I'll emit interval timestamps. Then, inside GroupJoin's resultSelector, I'll compute the matching interval for each Event, using the GetIntervalStartLocal function (which truncates the date to the interval length). After that, I'll skip all potential leftovers from the previous interval (SkipWhile the expected interval is later than the one actually computed from the Event). Finally, I'll TakeWhile the event's computed interval matches the expected one.
However, there must be a problem before I even get to SkipWhile and TakeWhile, because resultSelector does not actually operate on everything in data; it ignores some of it, e.g. like this:
event.EventTime: 1s-----2s----3s----4s----5s----6s---
stream: A---C--D-------------------F-----H--
and then constructs (from what it operates on, correctly):
1s-----2s----3s----4s----5s---6s---
[A-C]--[D]---[ ]---[ ]---[F]--[H]--
I think I must be doing something terribly wrong here, because it shouldn't be that hard to do partitioning on a stream based on a stream event value.
You need to clarify what you want. Given this:
time : 1s-------2s----3s----4s----5s-----6s---
stream: A-B-C----D-----------------E-F----G-H-- (actual)
group : [A-B-C]--[D]---[ ]---[ ]---[E-F]--[G-H] (desired result)
It's not clear whether 'time' here is your event time-stamp, or actual time. If it's actual time, then that is of course impossible: You can't pass a list of ABC before C has arrived. If you're referring to your event time-stamp, then Buffer or perhaps Window will have to know when to stop, which isn't that easy to do.
GroupBy does work for me as follows:
var sampleSource = Observable.Interval(TimeSpan.FromMilliseconds(400))
.Timestamp()
.Select(t => new Event { EventTime = t.Timestamp.DateTime, Value = (int)t.Value });
sampleSource
.GroupBy(e => e.EventTime.Ticks / 10000000) //10M ticks per second
.Dump(); //LinqPad
The only problem with this is that each group doesn't have a close criteria, so it's a giant memory leak. So you can add a timer to close the groups:
sampleSource
.GroupBy(e => e.EventTime.Ticks / 10000000) //10M ticks per second
.Select(g => g.TakeUntil(Observable.Timer(TimeSpan.FromSeconds(2)))) //group closes 2 seconds after opening
.Dump(); //LinqPad
This closing also allows us to return lists with .ToList(), rather than Observables:
sampleSource
.GroupBy(e => e.EventTime.Ticks / 10000000) //10M ticks per second
.SelectMany(g => g.TakeUntil(Observable.Timer(TimeSpan.FromSeconds(2))).ToList())
.Dump(); //LinqPad

Why is this output variable in my LINQ expression NOT problematic?

Given the following code:
var strings = Enumerable.Range(0, 100).Select(i => i.ToString());
int outValue = 0;
var someEnumerable = strings.Where(s => int.TryParse(s, out outValue))
.Select(s => outValue);
outValue = 3;
//enumerating over someEnumerable here shows ints from 0 to 99
I am able to see a "snapshot" of the out parameter for each iteration. Why does this work correctly instead of me seeing 100 3's (deferred execution) or 100 99's (access to modified closure)?
First you define a query, strings, that knows how to generate a sequence of strings when queried. Each time a value is asked for, it will generate a new number and convert it to a string.
Then you declare a variable, outValue, and assign 0 to it.
Then you define a new query, someEnumerable, that knows how to, when asked for a value, get the next value from the query strings, try to parse the value and, if the value can be parsed, yield the value of outValue. Once again, we have defined a query that can do this; we have not actually done any of it.
You then set outValue to 3.
Then you ask someEnumerable for its first value; you are asking the implementation of Select for its value. To compute that value it will ask the Where for its first value. The Where will ask strings. (We'll skip a few steps now.) The Where will get a 0. It will call the predicate on 0, specifically calling int.TryParse. A side effect of this is that outValue will be set to 0. TryParse returns true, so the item is yielded. Select then maps that value (the string "0") into a new value using its selector. The selector ignores the value and yields the value of outValue at that point in time, which is 0. Our foreach loop now does whatever with 0.
Now we ask someEnumerable for its second value, on the next iteration of the loop. It asks Select for a value, Select asks Where, Where asks strings, strings yields "1", Where calls the predicate, setting outValue to 1 as a side effect, and Select yields the current value of outValue, which is 1. The foreach loop now does whatever with 1.
So the key point here is that due to the way in which Where and Select defer execution, performing their work only immediately when the values are needed, the side effect of the Where predicate ends up running immediately before each projection in the Select. If you didn't defer execution, and instead performed all of the TryParse calls before any of the projections in Select, then you would see 99 for each value. We can actually simulate this easily enough: we can materialize the results of the Where into a collection, and then see the results of the Select be 99 repeated over and over:
var someEnumerable = strings.Where(s => int.TryParse(s, out outValue))
.ToList()//eagerly evaluate the query up to this point
.Select(s => outValue);
Having said all of that, the query that you have is not particularly good design. Whenever possible you should avoid queries that have side effects (such as your Where). The fact that the query both causes side effects, and observes the side effects that it creates, makes following all of this rather hard. The preferable design would be to rely on purely functional methods that aren't causing side effects. In this context the simplest way to do that is to create a method that tries to parse a string and returns an int?:
public static int? TryParse(string rawValue)
{
    int output;
    if (int.TryParse(rawValue, out output))
        return output;
    else
        return null;
}
This allows us to write:
var someEnumerable = from s in strings
let n = TryParse(s)
where n != null
select n.Value;
Here there are no observable side effects in the query, nor is the query observing any external side effects. It makes the whole query far easier to reason about.
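For example, enumerating it is now completely self-contained, with no shared variable being mutated along the way:
var strings = Enumerable.Range(0, 100).Select(i => i.ToString());
var parsed = from s in strings
             let n = TryParse(s)
             where n != null
             select n.Value;
foreach (var value in parsed)
    Console.WriteLine(value); // 0, 1, 2, ... 99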
Because when you enumerate, the values are produced one at a time, and each one changes the value of the variable on the fly. Due to the nature of LINQ, the Select for the first iteration is executed before the Where for the second iteration. Basically this variable turns into a kind of foreach loop variable.
This is what deferred execution buys us. Previous methods do not have to execute fully before the next method in the chain starts. One value moves through all the methods before the second goes in. This is very useful with methods like First or Take which stop the iteration early. Exceptions to the rule are methods that need to aggregate or sort like OrderBy (they need to look at all elements before finding out which is first). If you add an OrderBy before the Select the behavior will probably break.
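For example, a sketch of what happens if you insert an OrderBy: it has to consume its whole source before yielding anything, so every TryParse has already run by the time the Select projects:
var broken = strings.Where(s => int.TryParse(s, out outValue))
                    .OrderBy(s => s.Length)
                    .Select(s => outValue); // every element now comes out as 99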
Of course I wouldn't depend on this behavior in production code.
I don't understand what seems odd to you. If you write a loop over this enumerable like this:
foreach (var i in someEnumerable)
{
    Console.WriteLine(outValue);
}
you will see the numbers 0 to 99, because LINQ enumerates each Where and Select lazily and yields each value as it goes. But if you add ToArray:
var someEnumerable = strings.Where(s => int.TryParse(s, out outValue))
.Select(s => outValue).ToArray();
then in the loop you will see 99s.
Edit
The code below will print 99s:
var strings = Enumerable.Range(0, 100).Select(i => i.ToString());
int outValue = 0;
var someEnumerable = strings.Where(s => int.TryParse(s, out outValue))
                            .Select(s => outValue).ToArray();
//outValue = 3;
foreach (var i in someEnumerable)
{
    Console.WriteLine(outValue);
}

What is the functional way to properly set a dependent predicate for Observable sequence without side effect?

I have three observables: oGotFocusOrDocumentSaved, oGotFocus and oLostFocus. I would like oGotFocusOrDocumentSaved to produce values only when _active is true. My implementation below works as needed, but it introduces a side effect on _active. Is there any way to remove the side effect but still get the same functionality?
class TestClass
{
    private bool _active = true;
    public TestClass(..)
    {
        ...
        var oLostFocus = Observable
            .FromEventPattern<EventArgs>(_view, "LostFocus")
            .Throttle(TimeSpan.FromMilliseconds(500));
        var oGotFocus = Observable
            .FromEventPattern<EventArgs>(_view, "GotFocus")
            .Throttle(TimeSpan.FromMilliseconds(500));
        var oGotFocusOrDocumentSaved = oDocumentSaved // some other observable
            .Merge<CustomEvtArgs>(oGotFocus)
            .Where(_ => _active)
            .Publish();
        var lostFocusDisposable = oLostFocus.Subscribe(_ => _active = false);
        var gotFocusDisposable = oGotFocus.Subscribe(_ => _active = true);
        // use case
        oGotFocusOrDocumentSaved.Subscribe(x => DoSomethingWith(x));
        ...
    }
    ...
}
It does sound like you really want an oDocumentSavedWhenHasFocus rather than an oGotFocusOrDocumentSaved observable.
So try using the .Switch() operator, like this:
var oDocumentSavedWhenHasFocus =
    oGotFocus
        .Select(x => oDocumentSaved.TakeUntil(oLostFocus))
        .Switch();
This should be fairly obvious as to how it works, once you know how .Switch() works.
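For example, you might consume it like this (DoSomethingWith is just the handler from the question):
// each GotFocus starts a new inner sequence of saves; LostFocus ends it via TakeUntil,
// and the next GotFocus replaces it via Switch, so no _active flag is needed
var subscription = oDocumentSavedWhenHasFocus.Subscribe(x => DoSomethingWith(x));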
A combination of SelectMany and TakeUntil should get you where you need to be.
from g in oGotFocus
from d in oDocumentSaved
    .Merge<CustomEvtArgs>(oGotFocus)
    .TakeUntil(oLostFocus)
select d
It seems that you want to be notified when the document is saved, but only if the document currently has focus. Correct? (And you also want to be notified when the document gets focus, but that can easily be merged in later.)
Think in terms of windows instead of point events; i.e., join by coincidence.
Your requirement can be represented as a Join query whereby document saves are joined to focus windows, thus yielding notifications only when both overlap; i.e., when both are "active".
var oGotFocusOrDocumentSaved =
    (from saved in oDocumentSaved
     join focused in oGotFocus
        on Observable.Empty<CustomEventArgs>() // oDocumentSaved has no duration
        equals oLostFocus                      // oGotFocus duration lasts until oLostFocus
     select saved)
    .Merge(oGotFocus);
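Usage is then the same as in the question (DoSomethingWith assumed from there):
oGotFocusOrDocumentSaved.Subscribe(x => DoSomethingWith(x));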

Reactive extensions: matching complex key press sequence

What I'm trying to achieve is to handle a fairly complex key press and release sequence with Rx. I have a little experience with Rx, but it's clearly not enough for my current undertaking, so I'm here for some help.
My WinForms app is running in the background, visible only in the system tray. On a given key sequence I want to activate one of its forms. Btw, to hook into global key presses I'm using a nice library, http://globalmousekeyhook.codeplex.com/. I'm able to receive every KeyDown and KeyUp event, and while a key is held down, multiple KeyDown events are produced (at the standard keyboard repeat rate).
One example key sequence I want to capture is a quick double Ctrl + Insert press (i.e. holding the Ctrl key and pressing Insert twice within a given period of time). Here is what I currently have in my code:
var keyDownSeq = Observable.FromEventPattern<KeyEventArgs>(m_KeyboardHookManager, "KeyDown");
var keyUpSeq = Observable.FromEventPattern<KeyEventArgs>(m_KeyboardHookManager, "KeyUp");
var ctrlDown = keyDownSeq.Where(ev => ev.EventArgs.KeyCode == Keys.LControlKey).Select(_ => true);
var ctrlUp = keyUpSeq.Where(ev => ev.EventArgs.KeyCode == Keys.LControlKey).Select(_ => false);
But then I'm stuck. My idea is that I somehow need to keep track of whether the Ctrl key is down. One way is to create a global variable for that and update it in some Merge listener:
Observable.Merge(ctrlDown, ctrlUp)
.Do(b => globabl_bool = b)
.Subscribe();
But I think it ruins the whole Rx approach. Any ideas on how to achieve that while staying in the Rx paradigm?
Then while the Ctrl is down I need to capture two Insert presses within a given time. I was thinking about using the Buffer:
var insertUp = keyUpSeq.Where(ev => ev.EventArgs.KeyCode == Keys.Insert);
insertUp.Buffer(TimeSpan.FromSeconds(1), 2)
.Do((buffer) => { if (buffer.Count == 2) Debug.WriteLine("happened"); })
.Subscribe();
However I'm not sure this is the most efficient way, because Buffer will produce an event every second, even if no key was pressed at all. Is there a better way? And I also need to combine that with Ctrl being down somehow.
So once again, I need to detect a double Insert press while Ctrl is down. Am I going in the right direction?
P.S. Another possible approach is to subscribe to the Insert observable only while Ctrl is down. Not sure how to achieve that though. Maybe some ideas on this as well?
EDIT: Another problem I've found is that Buffer doesn't suit my needs exactly. The problem comes from the fact that Buffer produces samples at fixed intervals, and if my first press falls into one buffer and the second into the next, then nothing happens. How can I overcome that?
Firstly, welcome to the brain-bending magic of the Reactive Framework! :)
Try this out, it should get you started on what you're after - comments inline to describe what's going on:
using (var hook = new KeyboardHookListener(new GlobalHooker()))
{
    hook.Enabled = true;
    var keyDownSeq = Observable.FromEventPattern<KeyEventArgs>(hook, "KeyDown");
    var keyUpSeq = Observable.FromEventPattern<KeyEventArgs>(hook, "KeyUp");
    var ctrlPlus =
        // Start with a key press...
        from keyDown in keyDownSeq
        // and that key is the lctrl key...
        where keyDown.EventArgs.KeyCode == Keys.LControlKey
        from otherKeyDown in keyDownSeq
            // sample until we get a keyup of lctrl...
            .TakeUntil(keyUpSeq
                .Where(e => e.EventArgs.KeyCode == Keys.LControlKey))
            // but ignore the fact we're pressing lctrl down
            .Where(e => e.EventArgs.KeyCode != Keys.LControlKey)
        select otherKeyDown;
    using (var sub = ctrlPlus
        .Subscribe(e => Console.WriteLine("CTRL+" + e.EventArgs.KeyCode)))
    {
        Console.ReadLine();
    }
}
Now this doesn't do exactly what you specified, but with a little tweaking it could be easily adapted. The key bit is the implicit SelectMany calls in the sequential from clauses of the combined LINQ query - as a result, a query like:
var alphamabits =
from keyA in keyDown.Where(e => e.EventArgs.KeyCode == Keys.A)
from keyB in keyDown.Where(e => e.EventArgs.KeyCode == Keys.B)
from keyC in keyDown.Where(e => e.EventArgs.KeyCode == Keys.C)
from keyD in keyDown.Where(e => e.EventArgs.KeyCode == Keys.D)
from keyE in keyDown.Where(e => e.EventArgs.KeyCode == Keys.E)
from keyF in keyDown.Where(e => e.EventArgs.KeyCode == Keys.F)
select new {keyA,keyB,keyC,keyD,keyE,keyF};
translates (very) roughly into:
if A, then B, then C, then..., then F -> return one {a,b,c,d,e,f}
Make sense?
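For reference, a rough sketch of the shape the compiler expands the earlier two-step query into (not the exact generated code):
var ctrlPlus = keyDownSeq
    .Where(keyDown => keyDown.EventArgs.KeyCode == Keys.LControlKey)
    .SelectMany(
        keyDown => keyDownSeq
            .TakeUntil(keyUpSeq.Where(e => e.EventArgs.KeyCode == Keys.LControlKey))
            .Where(e => e.EventArgs.KeyCode != Keys.LControlKey),
        (keyDown, otherKeyDown) => otherKeyDown);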
(ok, since you've read this far...)
var ctrlinsins =
    from keyDown in keyDownSeq
    where keyDown.EventArgs.KeyCode == Keys.LControlKey
    from firstIns in keyDownSeq
        // optional; abort sequence if you leggo of left ctrl
        .TakeUntil(keyUpSeq.Where(e => e.EventArgs.KeyCode == Keys.LControlKey))
        .Where(e => e.EventArgs.KeyCode == Keys.Insert)
    from secondIns in keyDownSeq
        // optional; abort sequence if you leggo of left ctrl
        .TakeUntil(keyUpSeq.Where(e => e.EventArgs.KeyCode == Keys.LControlKey))
        .Where(e => e.EventArgs.KeyCode == Keys.Insert)
    select "Dude, it happened!";
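A sketch of consuming it, mirroring the subscription pattern from the first snippet:
using (var sub = ctrlinsins.Subscribe(msg => Console.WriteLine(msg)))
{
    Console.ReadLine();
}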
All right, I've come up with some solution. It works, but has some limits which I'll explain further. I'll not accept the answer for some time, maybe somebody else will offer a better and more generic way to solve this problem. Anyway, here's the current solution:
private IDisposable SetupKeySequenceListener(Keys modifierKey, Keys doubleClickKey, TimeSpan doubleClickDelay, Action<Unit> actionHandler)
{
    var keyDownSeq = Observable.FromEventPattern<KeyEventArgs>(m_KeyboardHookManager, "KeyDown");
    var keyUpSeq = Observable.FromEventPattern<KeyEventArgs>(m_KeyboardHookManager, "KeyUp");
    var modifierIsPressed = Observable
        .Merge(keyDownSeq.Where(ev => (ev.EventArgs.KeyCode | modifierKey) == modifierKey).Select(_ => true),
               keyUpSeq.Where(ev => (ev.EventArgs.KeyCode | modifierKey) == modifierKey).Select(_ => false))
        .DistinctUntilChanged()
        .Do(b => Debug.WriteLine("Ctrl is pressed: " + b.ToString()));
    var mainKeyDoublePressed = Observable
        .TimeInterval(keyDownSeq.Where(ev => (ev.EventArgs.KeyCode | doubleClickKey) == doubleClickKey))
        .Select((val) => val.Interval)
        .Scan((ti1, ti2) => ti2)
        .Do(ti => Debug.WriteLine(ti.ToString()))
        .Select(ti => ti < doubleClickDelay)
        .Merge(keyUpSeq.Where(ev => (ev.EventArgs.KeyCode | doubleClickKey) == doubleClickKey).Select(_ => false))
        .Do(b => Debug.WriteLine("Insert double pressed: " + b.ToString()));
    return Observable.CombineLatest(modifierIsPressed, mainKeyDoublePressed)
        .ObserveOn(WindowsFormsSynchronizationContext.Current)
        .Where((list) => list.All(elem => elem))
        .Select(_ => Unit.Default)
        .Do(actionHandler)
        .Subscribe();
}
Usage:
var subscriptionHandler = SetupKeySequenceListener(
    Keys.LControlKey | Keys.RControlKey,
    Keys.Insert | Keys.C,
    TimeSpan.FromSeconds(0.5),
    _ => { WindowState = FormWindowState.Normal; Show(); Debug.WriteLine("IT HAPPENED"); });
Let me explain what's going on here; maybe it will be useful for someone. I'm essentially setting up three observables: one for the modifier key (modifierIsPressed), another for the key which needs to be double-pressed while the modifier is held in order to activate the sequence (mainKeyDoublePressed), and a last one that combines the first two.
The first one is pretty straightforward: just convert key presses and releases to bool (using Select). DistinctUntilChanged is needed because if the user presses and holds a key, multiple events are generated. What I get from this observable is a sequence of booleans saying whether the modifier key is down.
Then the most tricky one, where the main key is handled. Let's go step by step:
1. I'm using TimeInterval to replace the key down events (it's important that they're key down events) with timespans.
2. Then I'm getting the actual timespans out with the Select function (to prepare for the next step).
3. Then comes the trickiest part, the Scan. What it does is take each two consecutive elements from the previous sequence (timespans in our case) and pass them into a function as two parameters. The output of that function (which has to be of the same type as the parameters, a timespan) is passed on. The function in my case does a very simple thing: it just returns the second parameter.
Why? It's time to remember my actual task here: to catch two presses of some key which are close enough to each other in time (like within half a second in my example). My input is a sequence of timespans saying how much time has passed since the previous event. That's why I need to wait for two events: the first one will usually be long, because it tells how long ago the user last pressed the key, which could be minutes or more. But if the user presses the key twice quickly, the second timespan will be small, since it measures the gap between those two quick presses.
Sounds complicated, right? Then think about it in a simple way: Scan always combines the two latest events. That's why it fits my needs here: I need to listen for a double press. If I needed to wait for three consecutive presses, I'd be at a loss. That's why I call this approach limited, and am still waiting for somebody to offer a better, more generic solution that can handle potentially any key combination.
Anyway, let's continue the explanation:
4. Select(ti => ti < doubleClickDelay): here I just convert the sequence from timespans to booleans, passing true for quick enough consecutive events and false for not quick enough ones.
5. Here's another trick: I'm merging the boolean sequence from step 4 with a new one where I listen to key up events. Remember that the original sequence was built from key down events, right? So here I'm essentially taking the same approach as with observable number one: passing true for key down and false for key up.
Then it becomes super easy to use the CombineLatest function, which takes the latest event from each sequence and passes them further, as a List, to the Where function, which checks that all of them are true. That's how I achieve my goal: now I know when the main key was pressed twice while the modifier key was held down. Merging in the main key's key up event ensures that I clear the state, so the next presses of the modifier key will not trigger the sequence.
So here we go, that's pretty much it. I'll post this, but will not accept it, as I said before. I hope somebody will chime in and enlighten me. :)
Thanks in advance!
