SSAS Tabular AMO - how to know when a requested refresh is complete - c#

The Partition class in the Tabular AMO library has a method for refreshing the partition (RequestRefresh). I can use the AMO library to fire this off; however, this method appears to be asynchronous and I cannot find a way of monitoring the request to know when the processing has completed (either refreshed or failed).
The Partition class does have a "State" property, but in practice this always appears to report as ready, even during processing or after a refresh failure that caused no data to be written into the partition.
I need to be able to programmatically refresh my cube partitions, but I have tasks that need to be scheduled after the build has completed. I could watch the refresh time, but that feels like the wrong way to do this, and failed attempts do not appear to change that value (so I would need some form of timeout or other method for detecting failed refreshes).

Add a call to SaveChanges after RequestRefresh. SaveChanges is synchronous, and the refresh operation isn't actually executed until SaveChanges is run:
partition.RequestRefresh(RefreshType.Full);
db.Model.SaveChanges();
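For completeness, a hedged sketch of the full pattern, using the partition and db objects from the question. The exact exception types (and whether errors are also surfaced on a result object) vary between AMO/TOM versions, so treat the error handling here as an assumption:

partition.RequestRefresh(RefreshType.Full);
try
{
    db.Model.SaveChanges();   // blocks until the refresh has succeeded or failed
    // Safe to kick off the post-build tasks here.
}
catch (Exception ex)
{
    // The refresh failed; log ex and schedule a retry rather than the follow-up tasks.
}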


Calling different APIs with different interval times, saved in my database table

I have a list of APIs from different clients saved in my database table, and each API has a different time interval at which it should be called. What should my approach be for calling these APIs? New data may be added to the list of APIs in the table. Should I go for dynamic timers?
I have an application (GUI) which clients use to add new records.
These records represent an API url and the time (Schedule) at which that API should be called.
Your challenge is to write code that is able to call all the client-specified APIs at the specified schedule/time.
To me, API calling and handling the responses (storing into the DB, etc.) should be one component, and scheduling when to call which API should be another component (something like a cron job). This way, when the time is right, the appropriate API call is triggered. This also gives you the flexibility to do multiple tries/retries in a day, etc.
Update after your comment:
You have an application (GUI) which clients use to add new records.
These records represent an API url and the time (Schedule) at which that API should be called.
Your challenge is to write code that is able to call all the client-specified APIs at the specified schedule/time.
If I have got that problem right - my original suggestion stands.
Component 1 - Scheduler
Use Quartz.NET (or create your own using a Timer, etc.) and create a service (say, WCF) or process which will read records from the database and identify all the schedules and the API URLs that need to be called. When the scheduled time arrives, Quartz.NET will trigger your handler method, where you will make a call to Component 2 and pass on the API URL.
Component 2 - API Engine
When it receives a call from Component 1, it will make the API call and fetch the response. Store/process it as required.
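If you go the Quartz.NET route, the wiring could look roughly like this. It is only a sketch against the older synchronous Quartz.NET 2.x API; ApiCallJob, the placeholder URL and the cron expression (which you would read from your table) are assumptions, and CallApi stands in for Component 2:

using Quartz;
using Quartz.Impl;

public class ApiCallJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        string url = context.JobDetail.JobDataMap.GetString("ApiUrl");
        // CallApi(url);  // Component 2: make the call, store/process the response
    }
}

public static class SchedulerBootstrap
{
    public static void Start()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        // For each row in your API table (URL + schedule):
        IJobDetail job = JobBuilder.Create<ApiCallJob>()
            .UsingJobData("ApiUrl", "https://example.com/client-api")   // placeholder URL
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0/15 * * * ?")   // e.g. every 15 minutes, read from the DB row
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}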
There are various schedulers that can be used to do this automatically. For example, you could use Quartz.NET and its AdoJobStore. I haven't used that myself, but it sounds appropriate:
With the use of the included AdoJobStore, all Jobs and Triggers configured as "non-volatile" are stored in a relational database via ADO.NET.
Alternatively, your database may well have timers built into it. However, if this is primarily an academic exercise (as suggested by "your challenge") you may not be able to use these.
I would keep a table of scheduled tasks, with columns specifying:
When the task should next be run
What the task should do
How to work out the next iteration of that task afterwards
If the task has been started, when it was started
If the task completed, when it completed
You can then write code in an infinite loop that just scans the table, e.g. once per minute. It should look for all tasks with a "next time" earlier than now that haven't completed (a rough sketch in code follows this list):
If the task hasn't been started, update the row to show that it has been started (now), and start executing the task
If the task was started recently, ignore it
If the task was started "a long time ago" (i.e. longer than it would take to run successfully), either mark it as "broken" somehow, or restart
When a task completes successfully, update the row to indicate that it's finished, and add another row for the next time it should be started.
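Here is that loop as a rough sketch, assuming a hypothetical ScheduledTasks table with Id, TaskType, NextRunUtc, StartedUtc and CompletedUtc columns; the "broken task" detection, retry policy and multi-runner coordination discussed below are deliberately left out:

using System;
using System.Data.SqlClient;
using System.Threading;

public class ScheduledTaskRunner
{
    private readonly string connectionString;

    public ScheduledTaskRunner(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Run()
    {
        while (true)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Atomically claim one due, not-yet-started task and mark it as started (now).
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) ScheduledTasks
                      SET StartedUtc = SYSUTCDATETIME()
                      OUTPUT inserted.Id, inserted.TaskType
                      WHERE NextRunUtc <= SYSUTCDATETIME()
                        AND StartedUtc IS NULL", conn);

                int? taskId = null;
                string taskType = null;
                using (var reader = claim.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        taskId = reader.GetInt32(0);
                        taskType = reader.GetString(1);
                    }
                }

                if (taskId.HasValue)
                {
                    ExecuteTask(taskType);   // "what the task should do"

                    // Mark the row finished; a real implementation would also insert
                    // the row for the next occurrence here.
                    var complete = new SqlCommand(
                        "UPDATE ScheduledTasks SET CompletedUtc = SYSUTCDATETIME() WHERE Id = @id", conn);
                    complete.Parameters.AddWithValue("@id", taskId.Value);
                    complete.ExecuteNonQuery();
                }
            }

            Thread.Sleep(TimeSpan.FromMinutes(1));   // scan roughly once per minute
        }
    }

    private void ExecuteTask(string taskType)
    {
        // Hypothetical dispatch point: look up the handler for taskType and run it.
    }
}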
You'll need to work out exactly what your error strategy is:
How long should the gap be between a task starting and you deciding it's failed?
Do you always want to restart the task, or should some failures be permanent?
Do you need to record how often a task failed, and give up after a certain number of tries?
What do you do if you explicitly notice that the task has failed while you're executing it? (Rather than just by the fact that it was started a long time ago.)
For extra reliability, you'd need to think of other aspects too:
Do you need multiple task runners?
How can you spot when a task runner has failed, and restart that?
How do you deal with multiple task runners trying to start the same task at the same time?
You may not need to actually implement everything here, but these points are worth considering.

Handling service bus Message.Complete() exceptions

Consider the scenario: an Azure Service Bus with message deduplication enabled, a single topic, a single subscription, and an application that receives messages from that subscription.
How can I ensure that the application receives messages from the queue once and only once ?
Here is the code I'm using in my application to receive messages :
public abstract class ServiceBusListener<T> : IServiceBusListener
{
    private SubscriptionClient subscriptionClient;

    // ..... snip

    private void ReceiveMessages()
    {
        BrokeredMessage message = this.subscriptionClient.Receive(TimeSpan.FromSeconds(5));
        if (message != null)
        {
            T payload = message.GetBody<T>();
            try
            {
                DoWork(payload);
                message.Complete();
            }
            catch (Exception exception)
            {
                // message.Complete failed
            }
        }
    }
}
The problem I foresee is that if message.Complete() fails for whatever reason, then the message that has just been processed will remain on the subscription in Azure. When ReceiveMessages() is called again it will pick up that same message, and the application will do the same work again.
Whilst the best solution would be to have idempotent domain logic (DoWork(payload)), this would be very difficult to write in this instance.
The only method I can see to ensure once and only once delivery to an application is by building another queue to act as an intermediary between the Azure service bus and the application. I believe this is called a 'Durable client-side queue'.
However I can see that this would be a potential issue for a lot of applications that use Azure service bus, so is a durable client-side queue the only solution ?
The default behavior when you dequeue a message is called "Peek-Lock": it locks the message so that no one else can get it while you are processing it, and removes it when you complete it. The lock is released if you fail to complete the message, so it can be picked up again. This is probably what you are experiencing. You can change the behavior to "Receive and Delete", which deletes the message from the queue as soon as you receive it for processing.
https://msdn.microsoft.com/en-us/library/azure/hh780770.aspx
https://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/#how-to-receive-messages-from-a-subscription
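A sketch of switching to Receive and Delete with the older Microsoft.ServiceBus.Messaging client; connectionString, the topic/subscription names, DoWork and MyPayload are placeholders. Note this trades the duplicate problem for the opposite risk: a failure inside DoWork now loses the message instead of redelivering it.

using System;
using Microsoft.ServiceBus.Messaging;

var client = SubscriptionClient.CreateFromConnectionString(
    connectionString, "my-topic", "my-subscription", ReceiveMode.ReceiveAndDelete);

BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
if (message != null)
{
    // The message is already gone from the subscription at this point,
    // so no Complete() call is needed after DoWork.
    DoWork(message.GetBody<MyPayload>());   // MyPayload stands in for T from the question
}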
I have similar challenges in a large-scale Azure platform I am responsible for. I use a logical combination of the concepts embodied by the Compensating Transaction pattern (https://msdn.microsoft.com/en-us/library/dn589804.aspx) and the Event Sourcing pattern (https://msdn.microsoft.com/en-us/library/dn589792.aspx). Exactly how you incorporate these concepts will vary, but ultimately you may need to plan on your own "rollback" logic, or on detecting that a previous run completed 100% successfully except for the removal of the message. If there is something you can check up front, you will know that the message was simply not removed: complete it and move on. How expensive that check is may make this a bad idea. You can even create an artificial final step, like adding a row to a DB, that runs only when DoWork reaches the end. You can then check for that row before processing any other messages.
IMO, the best approach is to make sure that all of the steps in your DoWork() check whether the work has already been performed (if possible). For example, if it's creating a DB table, run an "IF NOT EXISTS (SELECT TABLE_NAME FROM INFORMATION_SCHEMA...)" check first. In that scenario, even in the unlikely event this happens, it's safe to process the message again.
Other approaches I use are to store the MessageIDs (the sequential bigint on each message) of the previous X messages (i.e. 10,000), and then check for their existence (NOT IN) before I proceed with processing a message. Not as expensive as you might think, and very safe. If found, simply Complete() the message and move on. In other situations, I update the message with a "starting" type status (inline in certain queue types, persisted elsewhere in others), then proceed. If you read a message and this is already set to "started", you know something either failed or did not clear appropriately.
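As a concrete illustration of that MessageID bookkeeping, here is a hedged sketch; IProcessedMessageStore, MyPayload and DoWork are placeholders (the store could be a table holding the last 10,000 sequence numbers):

using Microsoft.ServiceBus.Messaging;

public class MyPayload { }   // placeholder for the real message body type

// Hypothetical store of recently processed sequence numbers (table, cache, etc.).
public interface IProcessedMessageStore
{
    bool Contains(long sequenceNumber);
    void Add(long sequenceNumber);
}

public class DeduplicatingHandler
{
    private readonly IProcessedMessageStore store;

    public DeduplicatingHandler(IProcessedMessageStore store)
    {
        this.store = store;
    }

    public void Handle(BrokeredMessage message)
    {
        if (store.Contains(message.SequenceNumber))
        {
            // Processed on a previous attempt where only Complete() failed:
            // just complete it again and move on.
            message.Complete();
            return;
        }

        DoWork(message.GetBody<MyPayload>());
        store.Add(message.SequenceNumber);
        message.Complete();
    }

    private void DoWork(MyPayload payload) { /* business logic from the question */ }
}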
Sorry this is not a clear cut answer, but there are a lot of considerations.
Kindest regards...
You can continue to use a single subscription if you include, in your message handling, logic to detect whether the message has already been successfully processed or which stage it had reached.
For example, I use service bus messages to insert payments from an external payment system into a CRM system. The message handling logic first checks to see if the payment already exists in CRM (using unique ids associated with the payment) before inserting. This was required because very occasionally the payment would be successfully added to CRM but not reported back as such (timeout or connectivity). Using Receive/Delete when picking up a message would mean that payments would potentially be lost, not checking that the payment already existed could result in duplicate payments.
If this is not possible then another solution I have applied is updating table storage to record the progress of handling a message. When picking up a message the table is checked to see if any stages have already been completed. This allows a message to continue from the stage it had reached previously.
The most likely cause of the scenario you outline is that the time taken to DoWork exceeds the lock on the message. The message lock timeout can be adjusted to a value that safely exceeds the expected DoWork period.
It is also possible to call RenewLock on a message within the handler if you are able to track the time taken to process against the message lock expiry.
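For example, a hedged sketch of renewing the lock from a timer while DoWork runs; the 30-second interval is an assumption and should stay comfortably below the subscription's lock duration, and MyPayload/DoWork are placeholders from the question:

using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;

private void ProcessWithLockRenewal(BrokeredMessage message)
{
    // Renew the peek-lock periodically so the message is not redelivered mid-processing.
    using (var renewTimer = new Timer(_ => message.RenewLock(), null,
        TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30)))
    {
        DoWork(message.GetBody<MyPayload>());
        message.Complete();
    }
}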
Maybe I misunderstand the design principle of a second queue but it seems as if this would be just as vulnerable to the original scenario you outlined.
Hard to give a definitive answer without knowing what your DoWork() involves but I would consider one or combination of the above as a better solution.

How do I cancel and roll back part of a workflow

I have a very long-running workflow that moves video files around between video processing devices and then reports the files' state to a database which is used to drive a UI.
At times the users press a button on the UI to "Accept" a file into a video storage server. This involves copying a file from one server to another.
They have asked if this activity can be cancelled.
I've looked at the WF4 documentation and I can't see a way to roll back part of a workflow.
Is this possible, and what technique should I use?
There are two basic built-in activities for reverting work:
The TransactionScope activity for ACID transactions
The CompensableActivity for long-running work
With the CompensableActivity you add activities to the compensation handler to undo work previously done. The Compensate activity can be used to trigger compensation. If there is no compensation, the confirmation handler runs either automatically at the end of the workflow or when you use the Confirm activity.
See A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 by Matt Milner for more details.
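A minimal code-only sketch of that pattern; the WriteLine activities stand in for the real copy and clean-up work, and in practice the Compensate would be reached only when the user asks to cancel the "Accept":

using System.Activities;
using System.Activities.Statements;

var token = new Variable<CompensationToken>("copyToken");

Activity workflow = new Sequence
{
    Variables = { token },
    Activities =
    {
        new CompensableActivity
        {
            Result = new OutArgument<CompensationToken>(token),
            Body = new WriteLine { Text = "Copying the file to the storage server..." },
            // Runs only if compensation is triggered later:
            CompensationHandler = new WriteLine { Text = "Deleting the copied file..." }
        },
        // When the user cancels the "Accept", trigger the undo work:
        new Compensate { Target = new InArgument<CompensationToken>(token) }
    }
};

WorkflowInvoker.Invoke(workflow);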
Okay, so let's first say that the processing of "rolling back" what was already uploaded will have to be done by hand, so wherever you're storing those chunks you'll need to clean up by hand when they cancel.
Now, on to the workflow itself. In my opinion you could set up your FlowChart like this:
Alright, so let's break down this workflow. The entire service should be correlated on some client key, so that you can start the service with Start once per client to keep the startup costs down.
Next, when said client wants to start a transfer you'll call BeginTransfer, which will move into the transfer loop. The transfer loop is set up so that you can cancel between chunks if necessary by calling CancelTransfer.
That same branch, in this model, is used to finish the transfer as well because it gets out of the loop, so when you're done transferring chunks just call CancelTransfer (if you don't like that, just set up a different branch that looks exactly the same).
Finally, when you're in the process loop, you can SoftExit the entire workflow and shut it down, so that you can kill it softly if maintenance is necessary, or when the client is finished with its connection it needs to call SoftExit to dispose of it.
Not sure if I totally understand your scenario, but I think you would need to run your transfer process on an asynchronous thread that, from time to time, checks a "cancel" variable to perform a rollback. This variable can be modified on the main thread by your UI.
Of course, this will allow you to cancel between transfers, not in the middle of one single transfer.
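A small sketch of that idea using the framework's CancellationTokenSource instead of a hand-rolled "cancel" variable; GetChunks, TransferChunk and RollBackUploadedChunks are placeholders:

using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();

// Background transfer loop: checks for cancellation between chunks.
var transferTask = Task.Run(() =>
{
    foreach (var chunk in GetChunks())
    {
        if (cts.Token.IsCancellationRequested)
        {
            RollBackUploadedChunks();   // undo whatever was already copied
            return;
        }
        TransferChunk(chunk);
    }
});

// On the UI thread, when the user presses Cancel:
cts.Cancel();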

Queuing ObservableCollection Updates

I am programming a TAPI application which uses the state pattern for dealing with the different states a TK can be in. Incoming and outgoing calls are recorded via an ObservableCollection in a ListView (call journal). The call data gets compared with contacts stored in a SQL-Server database to determine possible matches. That information is then used to update the call journal. All this in real time of course and all governed by/in the different states of the FSM (finite state machine).
To distinguish calls, I do use a call ID (which is provided by TAPI). When the phone rings or I start calling out, a new record including its call ID are added to the call journal and the customer database is searched for the number and certain data in the journal is updated accordingly. When proceeding through the different call states the application dynamically updates the journal (i.e. changing an icon that visually shows the state of the specific call, etc).
Exactly those updates to the ObservableCollection are giving me headaches, as they need to happen in a certain order. For example, when receiving a call, the associated state will create a new entry in the ObservableCollection. When the state changes, the new state might try to update the collection even though it is not clear whether the entry to be changed has been added already. The states happen to switch really fast, apparently faster than the collection can be updated.
Would some kind of message queue be a possible/good solution? If so, how could such a message queue be implemented - in the context of either a state machine or an ObservableCollection. I am not looking for complete solutions, but any information which I cannot easily find via google or stackoverflow would be appreciated.
Edit: greatly rephrased the question.
Edit: I added my own solution for the problem, but will wait and see if there is possibly someone with a better idea.
Have you checked whether the result of FirstOrDefault is null? This can happen if no element with given id exists in the collection.
For example:
var element = this.FirstOrDefault(p => p.ID == id);
if (element != null) {
    // Do something with element.Number.
}
Or you could just call First and see if you get InvalidOperationException.
--- EDIT ---
I see from your comment that you seem to be accessing the same ObservableCollection from multiple threads concurrently. If that is the case, you need to protect the shared data structure through locking. It is entirely possible that one thread begins inserting a new element just at the moment the other one is searching for it, leading to all sorts of undefined behavior. According to the MSDN documentation for ObservableCollection:
"Any instance members are not guaranteed to be thread safe."
As for debugging, you can "freeze" other threads and so you can concentrate only on the thread of interest without excessive "jumping". See the Threads panel, right-click menu, Freeze and Thaw options.
Updating the ObservableCollection is a long-running process, at least compared to receiving and handling the TAPI events. This can lead to race conditions, where a call state that needs to edit a call entry cannot find it, because it acquired the write/update lock on the collection before the call state that should actually have added the call. Also, not handling the TAPI events in the proper order would break the state machine.
I decided to implement a simplified Command pattern. The TAPI events, which used to trigger the performance-heavy state transitions directly, get added to a thread-safe, non-blocking and observable command queue. When a command gets enqueued, the queue class starts executing (and dequeuing) the commands in a new thread, that is, it triggers the proper call states in the finite state machine until there are no commands left in the queue. If a dequeuing thread is already running, no new thread is created (multi-threading would lead to race conditions again), and the queue class blocks re-entrancy to make sure that only one command is ever executed at any one time.
So basically: all TAPI-events (the invoker) are added to a queue (the client) in the order they are happening, as fast as possible. The queue then relays the TAPI information to the receiver, the finite state machine performing the business logic, taking its time but making sure the information gets updated in the proper order.
Edit: Starting from .NET 4.0 you can use the ConcurrentQueue(T) Class to achieve the same result.
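For reference, a condensed sketch of such a queue built on ConcurrentQueue(T); the Interlocked flag plays the role of the re-entrancy blocking described above, and the Action<T> handler stands in for the call into the finite state machine:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class CommandQueue<T>
{
    private readonly ConcurrentQueue<T> queue = new ConcurrentQueue<T>();
    private readonly Action<T> handler;   // e.g. the FSM's TAPI event handler
    private int draining;                 // 0 = idle, 1 = a drain loop is running

    public CommandQueue(Action<T> handler)
    {
        this.handler = handler;
    }

    public void Enqueue(T command)
    {
        queue.Enqueue(command);
        TryStartDrain();
    }

    private void TryStartDrain()
    {
        // Only one drain loop may run at any one time.
        if (Interlocked.CompareExchange(ref draining, 1, 0) != 0) return;

        Task.Factory.StartNew(() =>
        {
            try
            {
                T command;
                while (queue.TryDequeue(out command))
                {
                    handler(command);   // strictly one command at a time, in arrival order
                }
            }
            finally
            {
                Interlocked.Exchange(ref draining, 0);
            }

            // A command may have been enqueued just as the loop was finishing; re-check.
            if (!queue.IsEmpty) TryStartDrain();
        });
    }
}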

Add a realtime progress update to a slow page in asp.net

I'm trying to add a realtime progress report to my c#/asp.net 4.0 application for a slow-loading page. I've looked at the UpdatePanel and UpdateProgress Ajax controls, but I don't think they're suitable.
Basically, when the user clicks a button the page executes a number of tasks, and I'd like the user to see an update as each one completes, instead of a report only after they have all completed and the page load finishes.
The order in which things would happen would be:
1. user clicks button to start
2. call method 1
3. when method 1 completes, user sees "Method 1 done"
4. call method 2
5. etc.
Can anyone help with this?
This sort of asynchronous execution can be difficult to implement. A few solutions off the top of my head:
Totally asynchronous without AJAX:
1. User hits button, submits page.
2. Server generates a GUID for the task, and creates a record in your database. This might include:
   Guid (ID)
   Status flag/enum
   Start time
3. Server spawns a thread to handle the task and passes in the Guid.
4. Server returns the GUID, along with a "Working..." message.
5. After n seconds/milliseconds/insert-time-span-here, browser posts the page again, including a "GetStatus" command and the GUID.
6. Server checks the status flag in the database based on the GUID.
7. Server returns a status message based on the DB record ("Step 2...", "Still working", or whatever is appropriate).
8. Loop to step (5) until the status returned from the server indicates that the process is complete.
In the thread created in step (3):
1. Thread start.
2. Read current status from the DB record.
3. Execute the next step based on that status.
4. Update the DB status to indicate that it's ready to do the next step, or set an error flag.
5. Sleep for a few milliseconds to keep from blocking the app (might be unnecessary - I'm not sure how threads interact under IIS).
6. Loop to (2) until everything's done.
7. Thread exits.
Here's an example of easily creating a thread with a lambda.
(new Thread(() =>
{
    DoLongRunningWork();
})
{
    Name = "Long Running Work Thread",
    Priority = ThreadPriority.BelowNormal
}).Start();
Synchronous
Easier, but might cause some performance problems:
User submits "Start" form.
Server writes "Starting..." to the response stream and flushes the stream. This should get the text back to the client, I think, but I haven't tried it in years.
Server executes first step.
Server writes status to the response stream and flushes.
Loop to step (3) until complete.
Effectively, the page keeps the connection open until the task is complete, and flushing the output periodically keeps the client from timing out. This may have problems with timeouts, etc, and your server configuration (output buffering, etc) might be an issue.
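A bare-bones Web Forms sketch of that synchronous flow; RunMethod1/RunMethod2 are placeholders, and whether the flushed text actually reaches the browser immediately depends on buffering and compression settings along the way:

using System;

public partial class SlowPage : System.Web.UI.Page
{
    protected void StartButton_Click(object sender, EventArgs e)
    {
        Response.Write("Starting...<br/>");
        Response.Flush();                     // push the text to the client now

        RunMethod1();                         // first slow task
        Response.Write("Method 1 done<br/>");
        Response.Flush();

        RunMethod2();                         // second slow task
        Response.Write("Method 2 done<br/>");
        Response.Flush();

        Response.Write("All done.");
    }

    private void RunMethod1() { /* ... */ }
    private void RunMethod2() { /* ... */ }
}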
Background Task
Similar to the first asynchronous approach:
User clicks "start"
Server adds a row to the DB that identifies the task to be executed, retrieves the ID, and returns it to the client.
Create a scheduled task (script, windows service, etc) that polls the table, executes the desired tasks and updates the status as it progresses.
Client re-posts the form with the DB ID periodically. Server checks the ID against the DB and returns a message about the status (may include info about previous steps such as exec time, ETA, etc)
Client Loops to (4) until the task is complete or errors out.
The difference between this and the first approach is that the thread lives in a separate process instead of IIS.
Each approach has its issues, of course, and there may be a simpler way to do this.
