Detecting rare incidents from multivariate time series intervals - c#

Given a time series of sensor state intervals, how do I implement a classifier which learns from supervised training data to detect an incident based on a sequence of state intervals? To simplify the problem, sensor states are reduced to either true or false.
Update: I've found this paper (PDF) on Mining Sequences of Temporal Intervals which addresses a similar problem. Another paper (Google Docs) on Mining Hierarchical Temporal Patterns in Multivariate Time Series takes a novel approach, but deals with hierarchical data.
Example Training Data
The following data is a training example for an incident, represented as a graph over time, where /¯¯¯\ represents a true state interval and \___/ a false state interval for a sensor.
Sensor | Sensor State over time
| 0....5....10...15...20...25... // timestamp
---------|--------------------------------
A | ¯¯¯¯¯¯¯¯¯¯¯¯\________/¯¯¯¯¯¯¯¯
B | ¯¯¯¯¯\___________________/¯¯¯¯
C | ______________________________ // no state change
D | /¯\_/¯\_/¯\_/¯\_/¯\_/¯\_/¯\_/¯
E | _________________/¯¯¯¯¯¯¯¯\___
Incident Detection vs Sequence Labeling vs Classification
I initially generalised my problem as a two-category sequence labeling problem, but my categories really represented "normal operation" and a rare "alarm event" so I have rephrased my question as incident detection. Training data is available for "normal operation" and "alarm incident".
To reduce problem complexity, I have discretized sensor events to boolean values, but this need not be the case.
Possible Algorithms
A hidden Markov model seems to be a possible solution, but would it be able to use the state intervals? If a sequence labeler is not the best approach for this problem, alternative suggestions would be appreciated.
Bayesian Probabilistic Approach
Sensor activity will vary significantly by time of day (busy in mornings, quiet at night). My initial approach would have been to measure normal sensor state over a few days and calculate state probability by time of day (hour). The combined probability of sensor states at an unlikely hour surpassing an "unlikelihood threshold" would indicate an incident. But this seemed like it would raise a false alarm if the sensors were noisy. I have not yet implemented this, but I believe that approach has merit.
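For what it's worth, a minimal C# sketch of that idea (all names are mine, and the Laplace smoothing is an assumption to keep empty hours from producing zero probabilities):

using System;
using System.Collections.Generic;

class HourlyStateModel
{
    // probTrue[sensorID][hour] == estimated P(state == true) in that hour of day
    private readonly Dictionary<int, double[]> probTrue = new Dictionary<int, double[]>();

    public void Train(IEnumerable<(int SensorId, DateTime Time, bool State)> samples)
    {
        var trues = new Dictionary<int, double[]>();
        var totals = new Dictionary<int, double[]>();
        foreach (var (id, time, state) in samples)
        {
            if (!totals.ContainsKey(id)) { totals[id] = new double[24]; trues[id] = new double[24]; }
            totals[id][time.Hour]++;
            if (state) trues[id][time.Hour]++;
        }
        foreach (var id in totals.Keys)
        {
            var p = new double[24];
            for (int h = 0; h < 24; h++)
                p[h] = (trues[id][h] + 1) / (totals[id][h] + 2); // Laplace smoothing
            probTrue[id] = p;
        }
    }

    // Sum of log-probabilities of the observed states; very negative => unlikely.
    public double LogLikelihood(IEnumerable<(int SensorId, DateTime Time, bool State)> observed)
    {
        double logP = 0.0;
        foreach (var (id, time, state) in observed)
        {
            double p = probTrue[id][time.Hour];
            logP += Math.Log(state ? p : 1.0 - p);
        }
        return logP; // flag an incident when this drops below an "unlikelihood threshold"
    }
}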
Feature Extraction
Vector states could be represented as state interval changes occurring at a specific time and lasting a specific duration.
struct StateInterval
{
    public int sensorID;        // which sensor produced this interval
    public bool state;          // true = /¯¯¯\, false = \___/
    public DateTime timeStamp;  // when the interval started
    public TimeSpan duration;   // how long the sensor held this state
}
e.g. some state intervals from the process table:
[ {D, true, 0, 3} ]; [ {D, false, 4, 1} ]; ...
[ {A, true, 0, 12} ]; [ {B, true, 0, 6} ]; [ {D, true, 0, 3} ]; etc.
A good classifier would take into account state-value intervals and recent state changes to determine if a combination of state changes closely matches training data for a category.
Edit: Some ideas after sleeping on how to extract features from multiple sensors' alarm data and how to compare it to previous data...
Start by calculating the following data for each sensor for each hour of the day (a sketch follows the lists below):
Average state interval length (for true and false states)
Average time between state changes
Number of state changes over time
Each sensor could then be compared to every other sensor in a matrix with data like the following:
Average time taken for sensor B to change to a true state after sensor A did. If an average value is 60 seconds, then a 1-second wait would be more interesting than a 120-second wait.
Average number of state changes sensor B underwent while sensor A was in one state
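As a rough C# sketch of the per-sensor part of these features (the pairwise matrix would be built similarly, one cell per sensor pair; the names and aggregation choices here are assumptions, not a fixed design):

using System;
using System.Collections.Generic;
using System.Linq;

struct SensorHourFeatures
{
    public double AvgTrueIntervalSeconds;   // average /¯¯¯\ length
    public double AvgFalseIntervalSeconds;  // average \___/ length
    public double AvgSecondsBetweenChanges;
    public int StateChangeCount;
}

static class FeatureExtractor
{
    // intervals: all StateIntervals of one sensor that start in one hour bucket.
    public static SensorHourFeatures Extract(IList<StateInterval> intervals)
    {
        var t = intervals.Where(i => i.state).ToList();
        var f = intervals.Where(i => !i.state).ToList();
        return new SensorHourFeatures
        {
            AvgTrueIntervalSeconds = t.Count == 0 ? 0 : t.Average(i => i.duration.TotalSeconds),
            AvgFalseIntervalSeconds = f.Count == 0 ? 0 : f.Average(i => i.duration.TotalSeconds),
            // for a boolean sensor, the gap between changes is the interval length itself
            AvgSecondsBetweenChanges = intervals.Count == 0 ? 0 : intervals.Average(i => i.duration.TotalSeconds),
            // each interval after the first implies one state change
            StateChangeCount = Math.Max(0, intervals.Count - 1),
        };
    }
}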
Given two sets of training data, the classifier should be able to determine from these feature sets which is the most likely category for classification.
Is this a sensible approach and what would be a good algorithm to compare these features?
Edit: the direction of a state change (false->true vs true->false) is significant, so any features should take that into account.

A simple solution would be to collapse the time aspect of your data and treat each timestamp as one instance. In this case, the sensor values are your feature vector, and each time step is labeled with a class value of category A or B (at least for the labeled training data):
sensors | class
A B C D E |
-------------------------
1 1 1 0 0 | catA
1 0 0 0 0 | catB
1 1 0 1 0 | catB
1 1 0 0 0 | catA
..
This input data is fed to the usual classification algorithms (ANN, SVM, ...), and the goal is to predict the class of unlabeled time series:
sensors | class
A B C D E |
-------------------------
0 1 1 1 1 | ?
1 1 0 0 0 | ?
..
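For illustration, a minimal sketch of this flattening step, assuming the StateInterval struct from the question (the helper name is hypothetical):

using System;
using System.Collections.Generic;

static class Flattening
{
    // Builds one instance (feature vector) for a sampled timestamp t.
    public static bool[] SensorVectorAt(IEnumerable<StateInterval> intervals, int sensorCount, DateTime t)
    {
        var vector = new bool[sensorCount];
        foreach (var iv in intervals)
        {
            // a sensor contributes its state when t falls inside one of its intervals
            if (iv.timeStamp <= t && t < iv.timeStamp + iv.duration)
                vector[iv.sensorID] = iv.state;
        }
        return vector;
    }
}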
An intermediary step of dimensionality reduction / feature extraction could improve the results.
Obviously this may not be as good as modeling the time dynamics of the sequences, especially since techniques such as Hidden Markov Models (HMM) take into account the transitions between the various states.
EDIT
Based on your comment below, it seems that the best way to get less transitory predictions of the target class is to apply a post-processing rule at the end of the prediction phase, treating the classification output as a sequence of consecutive predictions.
The way this works: compute the class posterior probabilities (i.e. the probability distribution over class labels for an instance; in the case of a binary SVM these are easily derived from the decision function). Given a specified threshold, check whether the probability of the predicted class is above that threshold: if it is, use that class as the prediction for the current timestamp; if not, keep the previous prediction. The same goes for future instances. This has the effect of adding a certain inertia to the current prediction.
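A sketch of that rule in C# (the threshold value and the starting class are arbitrary choices here):

static class PredictionSmoother
{
    // posteriors[t] = P(class == "incident") at timestamp t,
    // e.g. from Platt scaling of the SVM decision values.
    public static bool[] Smooth(double[] posteriors, double threshold = 0.8)
    {
        var labels = new bool[posteriors.Length];
        bool current = false; // start in the "normal" class
        for (int t = 0; t < posteriors.Length; t++)
        {
            double p = posteriors[t];
            double confidence = p >= 0.5 ? p : 1.0 - p;
            if (confidence >= threshold)      // confident enough to (possibly) switch
                current = p >= 0.5;
            labels[t] = current;              // otherwise keep the previous prediction
        }
        return labels;
    }
}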

This doesn't sound like a classification problem. Classifiers aren't really meant to take into account "a combination of state changes." It sounds like a sequence labeling problem. Look into using a Hidden Markov Model or a Conditional Random Field. You can find an efficient implementation of the latter at http://leon.bottou.org/projects/sgd.
Edit:
I've read through your question in a little more detail, and I don't think an HMM is the best model given what you want to do with features. It's going to blow up your state space and could make inference intractable. You need a more expressive model. You could look at Dynamic Bayesian Networks. They generalize HMMs by allowing the state space to be represented in factored form. Kevin Murphy's dissertation is the most thorough resource on them I've come across.
I still like CRFs, though. Just as an easy place to start, define one with the time of day and each of the sensor readings as the features for each observation, and use bigram feature functions. You can see how it performs and increase the complexity of your features from there. I would start simple, though. I think you're underestimating how difficult some of your ideas will be to implement.

Why reinvent the wheel? Check out TClass
If that doesn't cut it for you, you can also find a number of pointers there. I hope this helps.

Related

Identify Phrases/Words in a Sentence

I'm looking for an API or library that can identify certain phrases/words in a sentence or short phrase.
The application of this is to find items which are not permitted. For instance, we will have a definitive list of phrases/words that are not permitted e.g., knife, battery, oil, paint, nail polish, glass etc.
We will then have a list of short phrases that need to be checked against this list. Ideally the API should handle pluralisation, misspellings and number substitutions e.g., 0 as o
UPDATE 15 Oct 2021:
Over the past few months, I've been experimenting with Fuse (JavaScript) and FuzzySharp. However, I’m still struggling to find a solution that can accurately identify words/phrases.
Using FuzzySharp, some examples of false positives are:
“used hot water bottle” – this matched “water” with a score of 90
“waterproof trousers” – this matched “water” with a score of 90
“oil” – this matched “toiletries” with a score of 90
“toilet” – this matched “toiletries” with a score of 90
I understand why these have a high score; however, I'm unsure what technology I should be using to improve accuracy. At the moment we only have 298 phrases that we want to search against.
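One direction that might help (a sketch, not a drop-in fix): compare banned words against whole tokens rather than arbitrary substrings, with a small edit-distance allowance for misspellings and plurals. All names here are hypothetical:

using System;
using System.Linq;

static class BannedWordMatcher
{
    // Assumes bannedWords are already lower-case.
    public static bool ContainsBanned(string phrase, string[] bannedWords, int maxEdits = 1)
    {
        var tokens = phrase.ToLowerInvariant()
                           .Split(new[] { ' ', ',', '.', '-' }, StringSplitOptions.RemoveEmptyEntries);
        // "waterproof" is a single token, so it no longer matches "water";
        // "toilet" vs "toiletries" is 4 edits apart, so it no longer matches either
        return tokens.Any(t => bannedWords.Any(b => Levenshtein(t, b) <= maxEdits));
    }

    static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                   d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return d[a.Length, b.Length];
    }
}

A length-relative threshold (allowing more edits for longer words) would likely handle misspellings better than the fixed allowance shown here.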

Complicated Logic for Variation of same rhythm C#

I am working on checking the variation of a stream of integers and filtering out outliers. Say we start receiving integers such as 700, 768, 820, 320, 790, 260, etc. I want to check whether each integer fits the current rhythm and harmony of the series, so 320 would be removed or ignored.
Let us say the variation must not be less than 75% of the lower number and must not be higher than 75% above it.
Actually the problem is not checking whether a new entry is higher or lower; the problem is when the rhythm or harmony of the entries suddenly shifts higher. Let's take an example:
768, 799, 890, 320, 380, 799, 820, 1230, 1300, 1340, 1342, 1400, 680, 1340, 1280, 1490
In that case we started in a range of 700 ~ 890, so 320 would be filtered out; then the range suddenly became 1230 ~ 1400, so 680 would be filtered out. We cannot predict in advance what the range will be.
So how do I build logic that can filter entries out and maintain the upper and lower limits?
No need for any code; I just need a logical explanation.
Regards...

Calculate Difference between 2 Time Spans DSP

This might be a broad question, but I would like to see answers and discuss this thread with SO users.
So far I gather that a WAV audio file has a sample rate, which could be 44000 or 48000 (I've seen these two most often), and from that we can determine that one second of the file (second 00:00:01) contains exactly 44000 integer values, which means we have an int[]; so if an audio file's duration is 5 seconds, it has 5 * 44000 integers (samples).
So my question is: how can we calculate the difference (or similarity) of content between two time spans, like Audio1.wav and Audio2.wav at 00:00:01 with the same sample rate?
There are a couple of assumptions in your reasoning:
1. The file is the raw uncompressed (PCM encoded) data.
2. There is only one channel (mono).
It's better to start from reading some format descriptions and sample implementations, then search for some audio comparison algorithms (1, 2, 3).
Linked Q: Compare two spectogram to find the offset where they match algorithm
One way to do this would be to resample the signal from 44100 Hz to 48000 Hz, so both signals have the same samplerate, and perform a cross-correlation. The shape of the cross-correlation could be a measure of similarity. You could look at the height of the peak, or the ratio of energy in the peak to the total energy.
Note however that when the signal repeats itself, you will get multiple cross-correlation peaks.
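A naive O(n*m) C# sketch of the cross-correlation itself (real code would resample both signals to a common rate first, as described above, and would typically use an FFT for speed):

static class Correlation
{
    // result[k] is the correlation at lag k - (y.Length - 1)
    public static double[] CrossCorrelate(double[] x, double[] y)
    {
        var result = new double[x.Length + y.Length - 1];
        for (int lag = -(y.Length - 1); lag < x.Length; lag++)
        {
            double sum = 0;
            for (int j = 0; j < y.Length; j++)
            {
                int i = lag + j;
                if (i >= 0 && i < x.Length) sum += x[i] * y[j];
            }
            result[lag + y.Length - 1] = sum;
        }
        return result; // peak height, or peak energy / total energy, measures similarity
    }
}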

Create a summary description of a schedule given a list of shifts

Assuming I have a list of shifts for an event (in the format start date/time, end date/time) - is there some sort of algorithm I could use to create a generalized summary of the schedule? It is quite common for most of the shifts to fall into some sort of common recurrence pattern (i.e. Mondays from 9:00 am to 1:00 pm, Tuesdays from 10:00 am to 3:00 pm, etc.). However, there can (and will) be exceptions to this rule (e.g. one of the shifts fell on a holiday and was rescheduled for the next day). It would be fine to exclude those from my "summary", as I'm looking to provide a more general answer of when this event usually occurs.
I guess I'm looking for some sort of statistical method to determine the day and time occurrences and create a description based on the most frequent occurrences found in the list. Is there some sort of general algorithm for something like this? Has anyone created something similar?
Ideally I'm looking for a solution in C# or VB.NET, but don't mind porting from any other language.
Thanks in advance!
You may use Cluster Analysis.
Clustering is a way to segregate a set of data into similar components (subsets). The "similarity" concept involves some definition of "distance" between points. Many standard formulas for the distance exist, among others the usual Euclidean distance.
Practical Case
Before pointing you to the quirks of the trade, let's show a practical case for your problem, so you may get involved in the algorithms and packages, or discard them upfront.
For simplicity, I modelled the problem in Mathematica, because Cluster Analysis is included in the software and is very straightforward to set up.
First, generate the data. The format is { DAY, START TIME, END TIME }.
The start and end times have a random variable added (+half hour, zero, -half hour) to show the capability of the algorithm to cope with "noise".
There are three days, three shifts per day and one extra (the last one) "anomalous" shift, which starts at 7 AM and ends at 9 AM (poor guys!).
There are 150 events in each "normal" shift and only two in the exceptional one.
As you can see, some shifts are not very far apart from each other.
I include the code in Mathematica, in case you have access to the software. I'm trying to avoid using the functional syntax, to make the code easier to read for "foreigners".
Here is the data generation code:
Rn[] := 0.5 * RandomInteger[{-1, 1}];
monshft1 = Table[{ 1 , 10 + Rn[] , 15 + Rn[] }, {150}]; (* 1 *)
monshft2 = Table[{ 1 , 12 + Rn[] , 17 + Rn[] }, {150}]; (* 2 *)
wedshft1 = Table[{ 3 , 10 + Rn[] , 15 + Rn[] }, {150}]; (* 3 *)
wedshft2 = Table[{ 3 , 14 + Rn[] , 17 + Rn[] }, {150}]; (* 4 *)
frishft1 = Table[{ 5 , 10 + Rn[] , 15 + Rn[] }, {150}]; (* 5 *)
frishft2 = Table[{ 5 , 11 + Rn[] , 15 + Rn[] }, {150}]; (* 6 *)
monexcp = Table[{ 1 , 7 + Rn[] , 9 + Rn[] }, {2}]; (* 7 *)
Now we join the data, obtaining one big dataset:
data = Join[monshft1, monshft2, wedshft1, wedshft2, frishft1, frishft2, monexcp];
Let's run a cluster analysis for the data:
clusters = FindClusters[data, 7, Method->{"Agglomerate","Linkage"->"Complete"}]
"Agglomerate" and "Linkage" -> "Complete" are two fine tuning options of the clustering methods implemented in Mathematica. They just specify we are trying to find very compact clusters.
I specified to try to detect 7 clusters. If the right number of shifts is unknown, you can try several reasonable values and see the results, or let the algorithm select the more proper value.
We can get a chart with the results, each cluster in a different color (don't mind the code)
ListPointPlot3D[ clusters,
PlotStyle->{{PointSize[Large], Pink}, {PointSize[Large], Green},
{PointSize[Large], Yellow}, {PointSize[Large], Red},
{PointSize[Large], Black}, {PointSize[Large], Blue},
{PointSize[Large], Purple}, {PointSize[Large], Brown}},
AxesLabel -> {"DAY", "START TIME", "END TIME"}]
And the result is a 3D point plot where you can see our seven clusters clearly apart.
That solves part of your problem: identifying the data. Now you also want to be able to label it.
So, we'll get each cluster and take means (rounded):
Table[Round[Mean[clusters[[i]]]], {i, 7}]
The result is:
Day Start End
{"1", "10", "15"},
{"1", "12", "17"},
{"3", "10", "15"},
{"3", "14", "17"},
{"5", "10", "15"},
{"5", "11", "15"},
{"1", "7", "9"}
And with that you get again your seven classes.
Now, perhaps you want to classify the shifts regardless of the day. If the same people do the same task at the same time every day, it's not useful to call it "Monday shift from 10 to 15", because it also happens on Wednesdays and Fridays (as in our example).
Let's analyze the data disregarding the first column:
clusters=
FindClusters[Take[data, All, -2],Method->{"Agglomerate","Linkage"->"Complete"}];
In this case, we are not selecting the number of clusters to retrieve, leaving the decision to the package.
The resulting plot shows that five clusters have been identified.
Let's try to "label" them as before:
Grid[Table[Round[Mean[clusters[[i]]]], {i, 5}]]
The result is:
START END
{"10", "15"},
{"12", "17"},
{"14", "17"},
{"11", "15"},
{ "7", "9"}
Which is exactly what we "suspected": there are repeated events each day at the same time that could be grouped together.
Edit: Overnight Shifts and Normalization
If you have (or plan to have) shifts that start one day and end on the following, it's better to model
{Start-Day Start-Hour Length} // Correct!
than
{Start-Day Start-Hour End-Day End-Hour} // Incorrect!
That's because, as with any statistical method, the correlation between the variables must be made explicit, or the method fails miserably. The principle could run something like "keep your candidate data normalized". Both concepts are almost the same (the attributes should be independent).
--- Edit end ---
By now I guess you understand pretty well what kind of things you can do with this kind of analysis.
Some references
Of course, Wikipedia, its "references" and "further reading" are good guide.
A nice video here shows the capabilities of Statsoft, and you can get many ideas there about other things you can do with the algorithm.
Here is a basic explanation of the algorithms involved
Here you can find the impressive functionality of R for Cluster Analysis (R is a VERY good option)
Finally, here you can find a long list of free and commercial software for statistics in general, including clustering.
HTH!
I don't think any ready-made algorithm exists, so unfortunately you need to come up with something yourself. Because the problem is not really well defined (from a mathematical perspective), it will require testing on some "real" data that is reasonably representative, and a fair bit of tweaking.
I would start from dividing your shifts into weekdays (because if I understand correctly you are after a weekly view) - so for each weekday we have shifts that happen to be on that day. Then for each day I would group the shifts that happen at the same time (or "roughly" at the same time - here you need to come up with some heuristic, i.e. both start and end times do not deviate from average in the group by more than 15min or 30 mins). Now we need another heuristic to decide if this group is relevant, i.e. if a shift 1pm-3pm on a Monday happened only once it is probably not relevant, but if it happened on at least 70% of Mondays covered by the data then it is relevant. And now your relevant groups for each day of the week will form the schedule you are after.
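A sketch of that heuristic in C# (the rounding granularity and the 70% cut-off are the tunable parts; all names are hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;

class Shift
{
    public DateTime Start;
    public DateTime End;
}

static class ScheduleSummarizer
{
    // totalWeeks: how many weeks of history the shift list covers.
    public static List<string> Summarize(IEnumerable<Shift> shifts, int totalWeeks,
                                         double minFraction = 0.7)
    {
        var summary = new List<string>();
        foreach (var byDay in shifts.GroupBy(s => s.Start.DayOfWeek))
        {
            // treat times as "roughly equal" by rounding to the nearest 30 minutes
            var groups = byDay.GroupBy(s => (Start: RoundTo30(s.Start.TimeOfDay),
                                             End: RoundTo30(s.End.TimeOfDay)));
            foreach (var g in groups)
                if (g.Count() >= minFraction * totalWeeks)  // relevance: e.g. >= 70% of weeks
                    summary.Add($"{byDay.Key}s from {g.Key.Start} to {g.Key.End}");
        }
        return summary;
    }

    static TimeSpan RoundTo30(TimeSpan t) =>
        TimeSpan.FromMinutes(Math.Round(t.TotalMinutes / 30.0) * 30);
}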
Could we see an example data set? If it is really "clean" data then you could simply find the mode of the start and end times.
One option would be to label all the start times as +1 and the end times as -1, then create a three-column table of times (both starts and ends), labels (+1 or -1), and number of staff at that time (starting at zero and adding or subtracting staff using the label), and sort the whole thing in time order.
This time series now is a summary descriptor of your staff levels and the labels are also a series as well. Now you can apply time series statistics to both to look for daily, weekly or monthly patterns.
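For instance, a sketch of building that table in C# (names are mine):

using System;
using System.Collections.Generic;
using System.Linq;

static class StaffLevels
{
    // Returns (time, label, staff) rows sorted in time order.
    public static List<(DateTime Time, int Label, int Staff)> Build(
        IEnumerable<(DateTime Start, DateTime End)> shifts)
    {
        var events = shifts.SelectMany(s => new[] { (Time: s.Start, Label: +1),
                                                    (Time: s.End, Label: -1) })
                           .OrderBy(e => e.Time);
        var rows = new List<(DateTime, int, int)>();
        int staff = 0;
        foreach (var (time, label) in events)
        {
            staff += label;                 // +1 at a shift start, -1 at a shift end
            rows.Add((time, label, staff));
        }
        return rows;
    }
}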

What is the best way to represent "Recurring Events" in database?

I am trying to develop a scheduler- and calendar-dependent event application in C#, for which a crucial requirement is to represent recurring events in the database.
What is the best way to represent recurring events in a database?
More Details:
While creating the event I am also sending invites to certain users; invitees should be allowed to log in to the meeting only during the specified window (the meeting duration), or the login may be declined if an invitee attempts to log in, say, 5 minutes before the scheduled start of the meeting.
The sysjobs, sysjobschedules and sysschedules tables in SQL Server do a pretty good job of this. I wouldn't reinvent the wheel; I'd just copy their design.
Here are some of the important fields from sysschedules
freq_type
How frequently a job runs for this schedule.
1 = One time only
4 = Daily
8 = Weekly
16 = Monthly
32 = Monthly, relative to freq_interval
64 = Runs when the SQL Server Agent service starts
128 = Runs when the computer is idle
freq_interval
Days that the job is executed. Depends on the value of freq_type. The default value is 0, which indicates that freq_interval is unused.
Value of freq_type Effect on freq_interval
1 (once) freq_interval is unused (0)
4 (daily) Every freq_interval days
8 (weekly) freq_interval is one or more of the following: 1 = Sunday 2 = Monday 4 = Tuesday 8 = Wednesday 16 = Thursday 32 = Friday 64 = Saturday
16 (monthly) On the freq_interval day of the month
32 (monthly, relative) freq_interval is one of the following: 1 = Sunday 2 = Monday 3 = Tuesday 4 = Wednesday 5 = Thursday 6 = Friday 7 = Saturday 8 = Day 9 = Weekday 10 = Weekend day
64 (starts when SQL Server Agent service starts) freq_interval is unused (0)
128 (runs when computer is idle) freq_interval is unused (0)
freq_subday_type
Units for the freq_subday_interval. Can be one of the following values:
Value Description (unit)
1 At the specified time
2 Seconds
4 Minutes
8 Hours
freq_subday_interval
Number of freq_subday_type periods to occur between each execution of the job.
freq_relative_interval
When freq_interval occurs in each month, if freq_interval is 32 (monthly relative). Can be one of the following values:
0 = freq_relative_interval is unused
1 = First
2 = Second
4 = Third
8 = Fourth
16 = Last
freq_recurrence_factor
Number of weeks or months between the scheduled execution of a job. freq_recurrence_factor is used only if freq_type is 8, 16, or 32. If this column contains 0, freq_recurrence_factor is unused.
Well, to store the recurrence rule itself, you could use a cut down version of RFC 5545 (and I really suggest you cut it down heavily). Aside from anything else, that will make it easy to export into other applications should you wish to.
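For illustration (my example, not the answer's), even a heavily cut-down rule set can stay string-compatible with RFC 5545:

// An RFC 5545 recurrence rule stored verbatim; this one reads
// "every 4 weeks on Saturday and Sunday".
const string rule = "FREQ=WEEKLY;INTERVAL=4;BYDAY=SA,SU";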
After you've made that decision, for the database side you need to work out whether you want to store each occurrence of the event, or just one record for the repeated event, expanding it as and when you need to. Obviously it's considerably easier to query the database when you've already got everything expanded - but it makes it trickier to maintain.
Unless you fancy writing some pretty complex SQL which may be hard to test (and you'll want a lot of unit tests for all kinds of corner cases) I would suggest that you make the database itself relatively "dumb" and write most of the business logic in a language like Java or C# - either of which may be embeddable within stored procedures depending on your database, of course.
Another thing you need to ask yourself is whether you need to cope with exceptions to events - one event in a series changing time/location etc.
I have some experience with calendaring (I've spent most of the last year working on the calendar bit of Google Sync via ActiveSync) and I should warn you that things get complicated really quickly. Anything you can deem "out of scope" is a blessing. In particular, do you need to work in multiple time zones?
Oh, and finally - be very, very careful when you're doing actual arithmetic with calendar operations. If you're going to use Java, please use Joda Time rather than the built-in Calendar/Date classes. They'll help you a lot.
The accepted answer here is too convoluted. For example, if an event occurs every 5 days, the 5 is stored in freq_interval, but if it occurs every 5 weeks, the 5 is stored in freq_recurrence_factor. The biggest problem is that freq_interval means three different things depending on the value of freq_type (number of days between occurrences for daily recurrence, day of the month for monthly recurrence, or days of the week for weekly or monthly-relative recurrence).
Also, the 1, 2, 4, 8... type sequence is used where it is unnecessary and less than helpful. For example, freq_relative_interval can only be one of the possible values. This lines up with a drop-down box or radio-button type input, not a checkbox type input where multiple choices can be selected. For coding, and for human readability, this sequence gets in the way; just using 1, 2, 3, 4... is simpler, more efficient, and more appropriate. Finally, most calendar applications don't need subday intervals (events occurring multiple times in a day - every so many seconds, minutes, or hours).
But, having said this, that answer did help me refine my thoughts on how I am doing this. After mixing and matching it with other articles, the Outlook calendar interface, and a few other sources, I came up with this:
recurs
0=no recurrence
1=daily
2=weekly
3=monthly
recurs_interval
This is how many periods pass between recurrences. If the event recurs every 5 days, this will hold 5 and recurs will hold 1. If the event recurs every 2 weeks, this will hold 2 and recurs will hold 2.
recurs_day
If the user selected a monthly-type recurrence on a given day of the month (ex: the 10th or the 14th), this holds that day. The value is 0 if the user did not select monthly/specific-day recurrence, and 1 to 31 otherwise.
recurs_ordinal
If the user selected a monthly-type recurrence but an ordinal type of day (ex: first Monday, second Thursday, last Friday), this holds that ordinal number. The value is 0 if the user did not select this type of recurrence.
1=first
2=second
3=third
4=fourth
5=last
recurs_weekdays
For weekly and monthly-ordinal recurrence, this stores the weekdays on which the recurrence happens.
1=Sunday
2=Monday
4=Tuesday
8=Wednesday
16=Thursday
32=Friday
64=Saturday
So, examples:
Every 4 weeks on Saturday and Sunday would be:
recurs = 2 ==> weekly recurrence
recurs_interval = 4 ==> every 4 weeks
recurs_weekdays = 65 ==> (Saturday=64 + Sunday=1)
recurs_day and recurs_ordinal = 0 ==> not used
Similarly, every 6 months on the first Friday of the month would be:
recurs = 3 ==> monthly recurrence
recurs_interval = 6 ==> every 6 months
recurs_ordinal = 1 ==> on the first occurrence
recurs_weekdays = 32 ==> of Friday
None of this business of having a field that means three entirely different things depending on the value of another field.
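A hedged C# mirror of this scheme, with the weekday bits matching the table above (type and field names are mine):

using System;

[Flags]
enum RecursWeekdays
{
    None = 0,
    Sunday = 1,
    Monday = 2,
    Tuesday = 4,
    Wednesday = 8,
    Thursday = 16,
    Friday = 32,
    Saturday = 64,
}

class Recurrence
{
    public int Recurs;                        // 0=none, 1=daily, 2=weekly, 3=monthly
    public int RecursInterval;                // every N days/weeks/months
    public int RecursDay;                     // day of month (1-31), or 0
    public int RecursOrdinal;                 // 1=first .. 5=last, or 0
    public RecursWeekdays RecursWeekdays;     // for weekly / monthly-ordinal recurrence
}

// "Every 4 weeks on Saturday and Sunday":
//   new Recurrence { Recurs = 2, RecursInterval = 4,
//                    RecursWeekdays = RecursWeekdays.Saturday | RecursWeekdays.Sunday } // == 65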
On the user interface side of things, I let the user specify a date, start time, end time. They can then specify if they want a type of recurrence other than none. If so, the app expands the relevant section of the web-page to give the user the options required for the stuff above, looking a lot like the Outlook options, except there is no "every weekday" under daily recurrence (that is redundant with weekly recurrence on every mon-fri), and there is no yearly recurrence. If there is recurrence then I also require the user to specify an end-date that is within one year of today (the users want it that way, and it simplifies my code) - I don't do unending recurrence or "end after ## occurrences."
I store these fields with the user selections in my event table, and expand that out in a schedule table which has all occurrences. This facilitates collision detection (I am actually doing a facility reservation application) and editing of individual occurrences or refactoring of future occurrences.
My users are all in CST, and I thank the good Lord for that. It is a helpful simplification for now, and if in the future the user base is going to expand beyond that, then I can figure out how to deal with it then, as a well separated task.
UPDATE
Since I first wrote this, I did add daily occurrence with "Every weekday". Our users had a bit of a hard time with thinking that you could use Weekly recurrence for events happening from Thursday one week to Tuesday the next week and only on weekdays. It was more intuitive for them to have this, even if there was already another way that they could do it.
I have been thinking about this too; although I have not implemented it yet, here are my thoughts for a simple solution.
When setting up a recurring event, have the user specify the end date and create individual events for each occurrence (based on the recurrence options). Because it's a recurring event, set a shared "recurring ID" on each of these. This ID marks an event as recurring, and if you change a future event you can prompt the user to apply the change to the rest of the future events by deleting and recreating them with a new "recurring ID", which also differentiates this recurring series from the earlier ones that have changed.
Hope this makes sense; I'd welcome any comments.
I would record recurring events as two separate things in the database. First, in an events table, record each and every occurrence of the event. Second, have a recurrences table in which you record the details asked for to set up the recurring event: start date, periodicity, number of occurrences, etc.
Then you might think of tying it all together by putting the PK of recurrences into each of the event records as an FK. But a better design would be to normalise the event table into two tables, one which is just the barebones of an event, and one which has the details, which could now be referring to multiple events. That way every event record, recurring or not, has an FK to the PK of the eventdetails table. Then in eventdetails, record the PK of recurrences somewhere along with agenda, invitees, etc. The recurrence record does not drive anything. For instance, if you want a list of all recurring events, you look through eventdetails for all events with a non-null FK to recurrences.
You'll need to be careful to synchronise all of these things, so that you insert or delete events when the recurrence data changes.
"Aside from anything else"
Does this include "the very requirements"?
"that will make it easy to export into other applications should you wish to."
Do the stated requirements include "it must be easy to export the calendars to other applications" ? My impression was that the problem consisted solely of building the FIRST application.
That said, my own response:
You need to limit yourself/your user on the types of "recurrency" your system will be able to support. "All of the above" or "No limitations" will not be a valid answer if you/your user want(s) to end up with a usable application.
