Service Fabric reliable service vs actor to solve a GA - C#

At the moment we have a Genetic Algorithm (GA) that runs for quite a while, and I thought we could distribute it using Service Fabric because, in theory, it fits nicely as a microservice. This is my first try at Service Fabric.
How should we do it? Should we have a stateful service that runs and aggregates other actors' tasks? It's kind of similar to this project: https://github.com/Azure-Samples/service-fabric-dotnet-data-streaming-websockets
I'm not really sure how to proceed, and there is not much documented on this subject. This GA is quite extensive, and our goal is to distribute its calculations.

I implemented a basic genetic algorithm app with Service Fabric as an app building exercise. Not sure if my approach is the best way to do things for your scenario, but I can describe what I did.
My app consisted of only actors, both stateful and stateless. I had a Processor stateful actor which provided all the management tasks and drove the algorithm. Because it was stateful, it maintained the history of all the genetic state across each of the generations that were produced.
I also had a FitnessEvalTask stateless actor. This actor was simply responsible for evaluating the fitness of an entity: its input was the gene representation and its output was the fitness value. The idea was that you'd be spinning up instances of this actor at a high rate and they'd be distributed appropriately.
The Processor actor, being responsible for driving the algorithm, would create the necessary instances of the FitnessEvalTask actors, provide their input, have them report back with their fitness values, and do the necessary processing afterwards. My client process, just a simple console app, would communicate with the Processor actor to initiate the algorithm and perform any necessary management tasks.
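A rough sketch of what those two actors might look like is below. This is only illustrative: the interface names, the fabric:/GeneticApp application name, and the gene representation are assumptions, and the proxy calls follow the Reliable Actors SDK (Microsoft.ServiceFabric.Actors).

    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Actors;
    using Microsoft.ServiceFabric.Actors.Client;

    // Stateless worker: evaluates one candidate's fitness.
    public interface IFitnessEvalTask : IActor
    {
        Task<double> EvaluateAsync(double[] genes);
    }

    // Stateful driver: owns the population and the generation loop.
    public interface IProcessor : IActor
    {
        Task RunGenerationAsync(double[][] population);
    }

    // Inside the Processor actor implementation: fan the population out to the
    // task actors and wait for all fitness values before selection/crossover.
    public async Task RunGenerationAsync(double[][] population)
    {
        var evaluations = population.Select(genes =>
            ActorProxy.Create<IFitnessEvalTask>(ActorId.CreateRandom(), "fabric:/GeneticApp")
                      .EvaluateAsync(genes));

        double[] fitness = await Task.WhenAll(evaluations);
        // ...selection, crossover, mutation, then persist the generation history...
    }

The console client would do much the same thing: create a proxy to the single Processor actor and call a start method on it.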

In general I think Service Fabric can accommodate a long-running, distributed genetic algorithm like you describe, and would be a reasonable solution.
You would likely use SF actors to represent candidate solutions in your population, and also (as you describe) a SF reliable service to perform data aggregation, manage the population and generations, etc.
The choice of whether to use stateful vs. stateless actors/services largely depends on whether you want (or need) to manage state yourself (say, if you're integrating with a custom datastore) or if you're okay with SF managing state on your behalf. A "stateless" SF service can still have durable state... you are simply responsible for managing it yourself.
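For instance, if you let SF manage state on your behalf, a stateful actor hands its durable data to the actor state manager and SF replicates it across the cluster. A minimal sketch, assuming the StateManager-based API of the 2.x-era Reliable Actors SDK; the IPopulationManager name and the state keys are invented for the example.

    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Actors;
    using Microsoft.ServiceFabric.Actors.Runtime;

    public interface IPopulationManager : IActor
    {
        Task RecordGenerationAsync(int generation, double bestFitness);
        Task<double> GetBestFitnessAsync();
    }

    [StatePersistence(StatePersistence.Persisted)]
    internal class PopulationManager : Actor, IPopulationManager
    {
        public PopulationManager(ActorService actorService, ActorId actorId)
            : base(actorService, actorId) { }

        public async Task RecordGenerationAsync(int generation, double bestFitness)
        {
            // SF persists and replicates this state on our behalf.
            await this.StateManager.SetStateAsync($"gen-{generation}", bestFitness);

            double best = await this.StateManager.GetOrAddStateAsync("best", double.MinValue);
            if (bestFitness > best)
                await this.StateManager.SetStateAsync("best", bestFitness);
        }

        public Task<double> GetBestFitnessAsync()
            => this.StateManager.GetOrAddStateAsync("best", double.MinValue);
    }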
The nice thing about using SF is that it formally separates the logic + state of your solution from the low-level resource management needed to execute it. You define your application in code and separately configure a SF cluster with whatever resources you wish, and SF takes care of distributing the work efficiently and reliably across the cluster. Certainly you can do that yourself, but it's challenging to do correctly.
Sounds like a fun problem... best of luck!

Related

Message-based machine control pattern similar to ROS

If I have a single application running on a single computer, but want to have multiple asynchronous threads running and communicating with each other in order to control the complex behavior of machinery or robots, what software design pattern would that be?
I'm specifically looking for something similar to the Robot Operating System (ROS), but more in the context of a single library for C# that handles the messages or the "message bus". There seems to be a lot of overlapping terminology for these things.
I'm essentially looking for a software implementation of a local, distributed node architecture, where nodes communicate with each other much in the same way that nodes on the CAN bus of a car do, to perform complex behavior in a distributed way.
Thanks
Your question has a lot of ambiguity. If you have a single application (read: a single process), then that is different from a distributed node architecture.
For Single Application with multiple asynchronous threads
ROS is not the best tool to accomplish this. ROS facilitates communication across nodes using either TCP or shared memory, neither of which is required for communication within a single process.
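For the in-process case you don't need ROS-style transports at all; something as small as a channel per topic will do. A minimal sketch using System.Threading.Channels - the MessageBus name and the usage are invented for illustration:

    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    // A minimal single-process "message bus": one unbounded channel per topic.
    // Producers write messages; a consumer loop dispatches them to a handler.
    class MessageBus<T>
    {
        private readonly Channel<T> channel = Channel.CreateUnbounded<T>();

        public void Publish(T message) => channel.Writer.TryWrite(message);

        public async Task RunSubscriberAsync(Action<T> handler)
        {
            await foreach (T message in channel.Reader.ReadAllAsync())
                handler(message);
        }
    }

    // Usage: one "node" publishes sensor readings, another consumes them.
    // var bus = new MessageBus<double>();
    // var consumer = bus.RunSubscriberAsync(v => Console.WriteLine($"got {v}"));
    // bus.Publish(42.0);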
For Distributed Nodes
ROS can be a great tool for this, but you need to understand its limitations. First, ROS does not guarantee any real-time capabilities. Performance can of course be improved by using nodelets (shared memory), but again, no timing guarantees. Second, ROS is not really distributed: it still needs a ROS Master, which acts as the central registry.
I suggest you look into ROS2, which uses DDS underneath. ROS2 has a distributed architecture, and you have the freedom to define your own QoS parameters.

Azure Service Fabric - Distributed computation code sample Monte Carlo simulation - performance issues

Having listened to recent Azure podcasts (particularly the one on building low-latency financial systems on Azure) and read all the hype about Service Fabric, I decided to try to adapt the 'Distributed computation code sample Monte Carlo simulation' pattern to my needs.
My scenario is:
One request, with a given starting state, to run 10k full sports-match simulations using a simplistic (computationally speaking) Monte Carlo-based model.
My first attempt was:
1 * Stateful 'Processor' actor that receives the start state of the match and forwards it to the 10k+ Task actors, along with the relevant Aggregator ActorId
10k+ * Stateless 'Task' actors that each ran 1 simulation and passed the result to their Aggregator actor. Simulation time was small (~2ms)
100 * Stateful 'Aggregator' actors that aggregated received simulations and passed them to a Finaliser actor
1 * 'Finaliser' actor that calculated the final result
Running the above on my dev box simply using Tasks takes < 100ms, but the above setup (running on the dev machine as a local cluster) took 50 seconds and more!
After debugging, one potential cause I found was the amount of time it takes for the Processor actor to send the initial tasks, so I was wondering what sort of overhead there is in calling Service Fabric (I guess all sorts of Naming Service calls happen when I call an actor's methods) and whether the slowness was likely to be due to this and my number of tasks.
To eliminate other possibilities i did the following and noticed only very small differences in total time:
Made all actors stateless to ensure that state management wasn't adding overheads.
Created all ActorProxies in the Processor and stored their references for future calls to ensure Actor Activations weren't causing issues.
Does anybody have any suggestions about where to go from here, or has anybody tried to implement something similar?
Thanks,
Alex
I would have posted this as a comment, but I do not yet have enough reputation for that! If you reference this page in Service Fabric's documentation, take a look at the comments below the article, particularly the comment trail started by "tom" around June 2015. He was experiencing poor performance (~20 operations per second) with stateful actors, which seemed to be acknowledged as an area for future improvement. They stressed the use of readonly attributes on non-mutating methods to significantly improve performance. Abhishek Ram also included some notes and a link to information on relevant performance counters that may help with troubleshooting.
You noted that you tried using stateless actors with little impact on performance. I would point further down the comment trail, where another user reports achieving 2k+ operations per second on a single actor using readonly methods, which I would expect to perform similarly to stateless actor methods. Perhaps the information from the performance counters can be compared with this to see how closely your performance matches their somewhat trivial example in the comments.
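For reference, the readonly marking being discussed looked roughly like this in the preview-era Actors SDK those comments refer to: an attribute on a non-mutating interface method telling the runtime it can skip state replication for that call. Later SDK versions dropped the attribute, and MatchState/MatchResult are placeholder types, so treat this purely as a historical sketch.

    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Actors;

    public interface ISimulationTask : IActor
    {
        // Non-mutating: the runtime need not replicate actor state after this call.
        [Readonly]
        Task<MatchResult> RunSimulationAsync(MatchState start);
    }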

C# Threading in real-world apps

Learning about threading is fascinating, no doubt, and there are some really good resources for doing so. But my question is whether threading is applied explicitly, either as part of design or development, in real-world applications.
I have worked on some extensively used and well-architected .NET apps in C#, but found no trace of explicit usage. Is there no real need because this is managed by the CLR, or is there some specific reason?
Also, any examples of threading coded in widely used .NET apps on CodePlex or Google Code are welcome.
The simplest place to use threading is performing a long operation in a GUI while keeping the UI responsive.
If you perform the operation on the UI thread, the entire GUI will freeze until it finishes. (Because it won't run a message loop)
By executing it on a background thread, the UI will remain responsive.
The BackgroundWorker class is very useful here.
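A minimal BackgroundWorker sketch - PerformLongOperation and resultLabel are placeholders for whatever your app actually does:

    using System.ComponentModel;

    // Typically wired up in a button-click handler on a WinForms/WPF form.
    var worker = new BackgroundWorker();

    worker.DoWork += (s, e) =>
    {
        // Runs on a thread-pool thread; do NOT touch UI controls here.
        e.Result = PerformLongOperation();   // placeholder for the long-running work
    };

    worker.RunWorkerCompleted += (s, e) =>
    {
        // Back on the UI thread: safe to update controls.
        resultLabel.Text = e.Result.ToString();   // resultLabel is an assumed control
    };

    worker.RunWorkerAsync();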
Is threading applied explicitly, either as part of design or development, in real-world applications?
In order to take full advantage of modern, multi-core systems, threading must be part of the design from the start. While it's fairly easy (especially in .NET 4) to find small portions of code to thread, to get real scalability, you need to design your algorithms to handle being threaded, preferably at a "high level" in your code. The earlier this is done in the design phases, the easier it is to properly build threading into an application.
Is there no real need because this is managed by the CLR, or is there some specific reason?
There is definitely a need. Threading doesn't come for free - it must be added in by the developer. The main reason this isn't found very often, especially in open source code, is really more a matter of difficulty. Even using .NET 4, properly designing algorithms to thread in a scalable, safe manner is difficult.
That entirely depends on the application.
For a client app that ever needs to do any significant work (or perform other potentially long-running tasks, such as making web service calls) I'd expect background threads to be used. This could be achieved via BackgroundWorker, explicit use of the thread pool, explicit use of Parallel Extensions, or creating new threads explicitly.
Web services and web applications are somewhat less likely to create their own threads, in my experience. You're more likely to effectively treat each request as having a separate thread (even if ASP.NET moves it around internally) and perform everything synchronously. Of course there are web applications which either execute asynchronously or start threads for other reasons - but I'd say this comes up less often than in client apps.
Definitely a +1 on the Parallel Extensions to .NET. Microsoft has done some great work here to improve the ThreadPool. You used to have one global queue which handled all tasks, even if they were spawned from a worker thread. Now they have a lock-free global queue and local queues for each worker thread. That's a very nice improvement.
I'm not as big a fan of things like Parallel.For, Parallel.ForEach, and Parallel.Invoke (regions), as I believe they should be pure language extensions rather than class libraries. Obviously, I understand why we have this intermediate step, but it's inevitable that C# will gain language improvements for concurrency, and it's equally inevitable that we'll have to go back and change our code to take advantage of it :-)
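To make that concrete, here is what the data-parallel and "region" styles look like today; the work inside the delegates is just a stand-in:

    using System;
    using System.Threading.Tasks;

    double[] results = new double[1000];

    // Data parallelism: the loop body is spread across worker threads.
    Parallel.For(0, results.Length, i =>
    {
        results[i] = Math.Sqrt(i);   // stand-in for real per-item work
    });

    // Task parallelism: independent "regions" executed concurrently.
    Parallel.Invoke(
        () => Console.WriteLine("region A"),
        () => Console.WriteLine("region B"));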
Overall, if you're looking at building concurrent apps in .NET, you owe it to yourself to research the heck out of the Parallel Extensions. I also think, given that this is a pretty nascent effort from Microsoft, you should be very vocal about what works for you and what doesn't, independent of what you perceive your own skill level to be with concurrency. Microsoft is definitely listening, but I don't think there are that many people yet using the Parallel Extensions. I was at VSLive Redmond yesterday and watched a session on this topic and continue to be impressed with the team working on this.
Disclosure: I used to be the Marketing Director for Visual Studio and am now at a startup called Corensic where we're building tools to detect bugs in concurrent apps.
Most real-world usages of threading I've seen are simply to avoid blocking - UI, network, database calls, etc.
You might see it in use as BeginXXX and EndXXX method pairs, delegate.BeginInvoke calls, Control.Invoke calls.
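For instance, a typical non-blocking read followed by a UI update might look like this; stream, buffer and statusLabel are assumed to exist in the surrounding code:

    // APM pattern: kick off the read, then finish it in the callback.
    stream.BeginRead(buffer, 0, buffer.Length, asyncResult =>
    {
        int bytesRead = stream.EndRead(asyncResult);

        // The callback runs on a worker thread; marshal back to the UI thread.
        statusLabel.BeginInvoke((Action)(() =>
            statusLabel.Text = "Read " + bytesRead + " bytes"));
    }, null);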
Some systems I've seen, where threading would be a boon, actually use the isolation principle to achieve multiple "threads", in other words, split the work down into completely unrelated chunks and process them all independently of each other - "multi-threading" (or many-core utilisation) is automagically achieved by simply running all the processes at once.
I think it's fair to say that a lot of stock-in-trade applications (data presentation) largely do not require massive parallelisation, nor are they always able to be architected to be suitable for it. The examples I've seen are all very specific problems. This may contribute to why you've not seen any noticeable implementations of it.
The question of whether to make use of an explicit threading implementation is normally a design consideration as others have mentioned here. Trying to implement concurrency as an afterthought usually requires a lot of radical and wholesale changes.
Keep in mind that simply throwing threads into an application doesn't inherently increase performance or speed, given that there is a cost in managing each thread, and also perhaps some memory overhead (not to mention, debugging it can be fun).
From my experience, the most common place to implement a threading design has been in Windows Services (background applications) and on applications which have had use case scenarios where a volume of work could be easily split up into smaller parcels of work (and handed off to threads to complete asynchronously).
As for examples, you could check out Microsoft Robotics Studio (as far as I know there's a free version now) - it comes with a redistributable (I can't find it as a standalone download) of the Concurrency and Coordination Runtime; there's some coverage of it on Microsoft's Channel 9.
As mentioned by others the Parallel Extensions team (blog is here) have done some great work with thread safety and parallel execution and you can find some samples/examples on the MSDN Code site.
Threading is used in all sorts of scenarios. Anything network-based depends on threading, whether explicit (sockets stuff) or implicit (web services). Threading keeps UIs responsive. And Windows services often have multiple parallel workers doing the same kind of processing, working through queues of data that need to be handled.
Those are just the most common ones I've seen.
Most answers reference long-running tasks in a GUI application. Another very common usage scenario in my experience is producer/consumer queues. We have many utility applications that have to perform web requests etc., often to a large number of endpoints. We use the producer/consumer threading pattern (usually by integrating a custom thread pool) to allow high parallelization of these tasks.
In fact, at this very moment I am checking up on an application that uploads a 200MB file to 200 different FTP locations. We use SmartThreadPool and run up to around 50 uploads in parallel, which allows the whole batch to complete in under one hour (as opposed to over 50 hours if the uploads were to happen consecutively - so in our usage we see almost linear improvements in time).
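The same idea can be expressed with the built-in primitives instead of SmartThreadPool. A rough sketch, where UploadAsync and ftpTargets stand in for the real FTP code:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    // Cap the number of concurrent uploads with a SemaphoreSlim (~50 in flight at once).
    async Task UploadToAllAsync(IEnumerable<string> ftpTargets)
    {
        using var throttle = new SemaphoreSlim(50);

        var uploads = ftpTargets.Select(async target =>
        {
            await throttle.WaitAsync();
            try { await UploadAsync(target); }    // hypothetical upload routine
            finally { throttle.Release(); }
        });

        await Task.WhenAll(uploads);
    }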
As modern-day programmers we love abstractions, so we use threads by calling async methods or BeginInvoke, and by using things like BackgroundWorker or PFX in .NET 4.
Yet sometimes there is a need to do the threading yourself. For example, in a web app I built, I have a mail queue that I add to from within the app, and there is a background thread that sends the emails. If the thread notices that the queue is filling up faster than it is sending, it creates another thread; if it then sees that that thread is idle, it kills it. This could be done with a higher-level abstraction, I guess, but I did it manually.
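A stripped-down version of that mail queue, using a BlockingCollection and a single background thread; SendEmail and EmailMessage are placeholders, and the dynamic "spawn another worker when the queue backs up, kill it when idle" logic described above is omitted for brevity:

    using System.Collections.Concurrent;
    using System.Threading;

    var mailQueue = new BlockingCollection<EmailMessage>();

    var sender = new Thread(() =>
    {
        // Blocks only this worker thread, never the web requests.
        foreach (var message in mailQueue.GetConsumingEnumerable())
            SendEmail(message);
    })
    { IsBackground = true };

    sender.Start();

    // Anywhere in the app: enqueue and return immediately.
    // mailQueue.Add(new EmailMessage(...));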
I can't resist the edge case - in some applications, where either a high degree of operational certainty must be achieved or a high degree of operational uncertainty must be tolerated, threads and processes are considered from initial architecture design all the way through to final delivery.
Case 1 - for systems that must achieve extremely high levels of operational reliability, three completely separate subsystems using three different mechanisms may be used in a voting architecture - spawn 3 threads/processes across the voters, wait for them to conclude/die/be killed, and proceed iff they all say the same thing - example: complex avionics subsystems.
Case 2 - for systems that must deal with a high degree of operational uncertainty - do the same thing, but once something/anything gets back to you, kill off the stragglers and go forth with the best answer you got - example: complex intraday trading algorithms endeavoring to destroy the businesses that employ them :-)

Scalability of .NET web services

Can anyone help me with a question about web services and scalability? I have written a web service as a facade into our document management system and need to think about scalability issues. What areas should I be looking at to ensure performance and availability?
Thanks in advance
Performance is separate from scalability. Scalability means that you can add more servers to linearly increase system throughput (i.e. more client connections). The best way to start is by having stateless web services. That way any client can call any of the n web service instances on n different machines. If there is a shared database at the end for persistence, that will ultimately be your bottleneck. There are ways to reduce that with data partitioning and sharding, but only when you get to that point.
First of all, decide what acceptable behaviour for your web service is. What should it be able to cope with - 1000 connections per second? What response time should each connection have?
Then you need to automate the usage of your web service so you can stress test the system.
What happens when you have 100 requests per second? 1000? 10000?
Then you can make a decision about if performance is ok, if the acceptable behaviour is too strict, or if you need to do heavy performance tuning based on actual profiling data.
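A crude way to get those numbers without a full load-testing tool is to fire N concurrent requests at one endpoint and time the batch. The sketch below uses HttpClient purely for illustration, and the URL is a placeholder:

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Fire a burst of concurrent requests and report the total elapsed time.
    async Task MeasureAsync(int concurrentRequests)
    {
        using var client = new HttpClient();
        var stopwatch = Stopwatch.StartNew();

        var requests = Enumerable.Range(0, concurrentRequests)
            .Select(_ => client.GetAsync("http://localhost/DocumentService.svc/ping"));

        await Task.WhenAll(requests);
        Console.WriteLine($"{concurrentRequests} requests took {stopwatch.Elapsed}");
    }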
You should be looking to host your WCF service in IIS. IIS has a lot of performance, scalability, security etc. mechanisms built in and is the best starting point to save you reinventing the wheel.
Some of the performance is certainly due to your own code, but let's assume that it's already optimized. At that point, the additional performance scaling issues involve the service host (e.g. IIS), the machines that host it, and their network (inter/intranet) connection speeds. You'll need to do some speed tests to be sure of things.
Well it really depends on what you're doing in your web service, but the only way you're going to find out is by simulating lots of users and measuring it.
Take a look at my answer to this question: Measuring performance
When we tested our code in this manner (where the web services were hosted in Windows service(s)), we found that the bottleneck was authenticating each user in the facade service. In particular, the Windows component LSASS was using most of the CPU.
Luckily we were able to create new machines, each with a facade service, which then called through to our main set of web services. This enabled us to scale up to a large number of users (in the region of 100,000 users using our software normally).

Web application design

I have a project that I recently started working on seriously, but I had a bit of a design discussion with a friend and I think he raised some interesting points.
The project is designed to be highly scalable and to make the business objects easy to maintain completely independently. Ease of scalability has forced some design decisions that impede the project's initial efficiency.
The basic design is as follows.
There is a "core" that is written in ASP.NET MVC and manages all interactions (JSON API and HTML web). However, it doesn't create or manage "business objects" like Posts, Contributors, etc. Those are all handled in their own separate WCF web services.
The idea of the core is to be really simple leveraging individual controls that use management objects to retrieve the business data/objects from the web services. This in turn means that the core could be multithreaded and could call the controls on the page simultaneously.
Each web service will manage the relevant business object and their data in the DB. Any business specific processing will also be in here such as mapping data in the tables to useful data structures for use in the controls. The whole object will be passed to the core, and the core should only be either retrieving or setting a business object once per transaction. If multi-affecting operations are necessary in the future then I will need to make that functionality available.
Also the web services can perform their own independent caching and depending on the request and their own knowledge of their specific area (e.g. Users) could return a newly created object or a pre-created one.
After the talk with my friend I have the following questions.
I appreciate that WCF isn't as fast as DLL calls or something similar. But how much overhead will there be given that the whole system is based on them?
Creating a thread can be expensive. Will it cost more to do this than just calling all the controls one after another?
Are there any other inherent pitfalls that you can see with this design?
Do you have any other clients for the web service beyond your web site? If not, then I think the web service isn't really needed. A service interface is reasonable, but that doesn't mean it needs to be a web service. Using a web service, you'll incur the extra overhead of serialization and one more network transfer of the data. You gain, perhaps, some automatic caching capabilities for your service, but it sounds like you are planning to implement this on your own in any case. It's hard to quantify the amount of overhead because we don't know how complex your objects are or how much data you intend to transfer, but I would wager that it's not insignificant.
If it were me, I would simplify the design: go single-threaded and use an embedded service interface. Then, if performance were an issue, I'd look to see where I could address the existing performance problems via caching, multiprocessing, etc. This lets the actual application drive the design, though you'd still apply good patterns and practices when a performance issue crops up. In the event that performance doesn't become an issue, then you haven't built a lot of complicated infrastructure -- YAGNI! You aren't gonna need it!
It depends on the granularity of your service calls. One principle in SOA is to make your interfaces less chatty, i.e. have one call perform a whole bunch of actions. If you designed your service interface as if it were a regular business object, then it is very likely to be too chatty.
It depends on your usage pattern. Also regarding threads, granularity is a key factor.
It looks very much like you're overdesigning the system. Changing a service interface is much more cumbersome than changing a simple method signature. If all your business objects are exposed as services, you are in for a debugging nightmare.
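To illustrate the chattiness point from the first answer above, compare a per-property contract with one that returns the whole business object in a single call; the types and names here are invented for the example:

    using System.ServiceModel;

    // Chatty: one network round trip per property.
    [ServiceContract]
    public interface IPostServiceChatty
    {
        [OperationContract] string GetPostTitle(int postId);
        [OperationContract] string GetPostBody(int postId);
        [OperationContract] string GetPostAuthor(int postId);
    }

    // Chunky: one call returns the whole business object.
    [ServiceContract]
    public interface IPostServiceChunky
    {
        [OperationContract]
        PostDto GetPost(int postId);   // PostDto is a data contract carrying everything
    }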
1.
Web-service-oriented design is reasonable if you have one or more non-native clients (clients that cannot access your logic directly), for example AJAX, Flash, or another web application from a different domain. But using WCF for your application when you can call your logic directly is a very bad idea.
If you need web services later, you can easily wrap your domain model with a service layer.
2.
Use a thread pool to minimize thread-creation calls where necessary. The answer to this question depends on what you need to achieve; that is not clear from your explanation.
3.
The main pitfall is that you are trying to use too many things. Overdesigning is probably a good term for it.
If you are worried about the overhead of calling a WCF service, you can use the null transport. This avoids the serialization and deserialization that would otherwise be necessary if the client and server were on separate machines.
It doesn't sound like something that will be highly scalable; at least, not to lots of users per second. Slapping WCF in all over the place will slow things down by creating far more threads than you need. If the WCF calls don't do much work, then the thread overhead will hurt you hard. Although your code will be multithreaded, multiple calls to ASPX pages are already multithreaded. You might speed up your system when just one person is using it, but hit performance hard if lots of users are. E.g. if one user requests the page, then ten separate WCF calls may gain from multithreading. However, if you have 100 page requests per second, that's 1000 WCF calls per second. That's a lot of overhead.
