I would like to ask for advice or experiences with architectures or technologies for building a real-time system. I have some prior experience developing a queuing management system, which I implemented by having a TcpServer send messages to every operator's TcpClient whenever an operator changed the queue number. But I found this strategy complicated and error-prone.
Could anyone suggest some ideas or frameworks?
First up: hardcore real-time peeps will take issue with the use of ".NET" and "real-time" in the same sentence, due to .NET's non-deterministic nature ;)
Having said that, if you're just implementing a supervisory or visualisation layer over an existing real-time system (say, implementing a SCADA-type system), then .NET should be fine. Then your network architecture can boil down to two scenarios:
Clients poll from a server: you create a centralised server which contains much of your process logic, and clients poll from this server periodically.
Server supports a publish/subscribe mechanism: clients subscribe to the server's information, and the server sends out updates when they occur.
There's no one "right" way to do the above comms; it depends a lot on size and frequency of updates, network traffic, etc.
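The publish/subscribe scenario above can be sketched in-process to show the core idea before any networking is involved. This is a minimal illustration (the class and method names are made up for the example, not part of any framework): clients register callbacks, and the server pushes updates out only when something changes, rather than clients polling.

```csharp
using System;
using System.Collections.Generic;

// Minimal in-process sketch of the publish/subscribe idea: clients register
// callbacks, and the server pushes updates out only when state changes.
class UpdateServer
{
    private readonly List<Action<string>> _subscribers = new List<Action<string>>();

    public void Subscribe(Action<string> onUpdate) => _subscribers.Add(onUpdate);

    public void PublishUpdate(string update)
    {
        foreach (var subscriber in _subscribers)
            subscriber(update);
    }
}

class Program
{
    static void Main()
    {
        var server = new UpdateServer();
        server.Subscribe(u => Console.WriteLine("client A got: " + u));
        server.Subscribe(u => Console.WriteLine("client B got: " + u));
        server.PublishUpdate("queue number changed to 42");
    }
}
```

In a real system each `Action<string>` would be replaced by a network send (WCF callback channel, socket write, etc.), but the shape of the design stays the same.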
I haven't worked on anything real-time, but I would assume that looking into real-time Linux would be a good start to understanding the problems, and the solutions people have come up with, for dealing with real-time applications.
I'd recommend looking at QNX.
Related
I have a problem and not much experience in C#, so I did a lot of research and I'm stuck.
I have to make two C# applications: the first is a Windows Forms application, and the second runs in the background. The first application will be a point of sale (POS) that needs to communicate with the background application to request information (products, customers, etc.) and send data. I don't want to use a web service because of problems like timeouts, so can anyone help me with some ideas for performing this task?
It is important to mention that there will be just one background application, while there will be many POS applications communicating with it (n number of apps).
There is a myriad of ways of doing interprocess communication. As the question is so generic, I will point out some more common ways.
The background process can be a windows service which updates the DB and POS systems query the DB to retrieve what they need. Even if the background process reads from the same DB, you can have a separate table which has "finished" information ready for the POS piece to pick up. Now you can use a file instead of a DB to store this finished results too, but most folks prefer DB.
You can use WCF channel to establish communication between the POS piece and the background process.
You can convert your background process to a web-service and let your POS piece communicate using XML. I don't think any time-out issue should be a problem. You will have to explain better what time-out issue causes you to not use this option.
You can convert the whole piece into a web-site, and the POS will simply be a browser.
You can use a bus like Tibco or MQ to pass data.
Or you can go the old fashioned way of TCP sockets.
The most preferred way is usually the web-service or web-site way, depending on your constraints.
Typically you'll use a message queue for something like this. They help ensure clean separation of concerns and reduce cross-application coupling: the queue receives messages from some publisher (thus freeing the publisher of any further responsibility) and pushes messages to some subscriber.
RabbitMQ is a popular framework: https://www.rabbitmq.com/
(Note that RabbitMQ (and other ready-built frameworks) can sometimes be daunting for new application programmers, as they handle a great many use cases. However, the underlying concept of writing to a queue from one application and reading from the queue in the other application is really the key here. Feel free to implement a small utility of your own as a learning experience, but I do recommend a pre-existing framework if you're comfortable using one.)
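The producer/consumer concept behind any message queue can be shown in a few lines. This is an in-process sketch only: `BlockingCollection` stands in for a real broker such as RabbitMQ, which would let the two sides live in separate processes.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// In-process sketch of the producer/consumer concept behind message queues.
// A real deployment would replace BlockingCollection with a broker such as
// RabbitMQ so the POS apps and the background process run separately.
class Program
{
    static void Main()
    {
        var queue = new BlockingCollection<string>(boundedCapacity: 100);

        // Producer: the POS side publishes a request and moves on.
        var producer = Task.Run(() =>
        {
            queue.Add("lookup-product:12345");
            queue.CompleteAdding();
        });

        // Consumer: the background process drains the queue as messages arrive.
        foreach (var message in queue.GetConsumingEnumerable())
            Console.WriteLine("processing " + message);

        producer.Wait();
    }
}
```

The key property carries over to the real broker: the producer does not wait for the consumer to finish, it just enqueues and returns.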
One method is to use named pipes for such communications between different programs.
How to: Use Named Pipes for Network Interprocess Communication
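A minimal named-pipe round trip with `System.IO.Pipes` looks like this. For brevity both ends run in one process here (in practice the server and client would be the two separate applications), and the pipe name and message are invented for the example.

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

// Minimal named-pipe round trip: the background process listens,
// the POS app connects and sends a request.
class Program
{
    static void Main()
    {
        var server = Task.Run(() =>
        {
            using var pipe = new NamedPipeServerStream("pos-pipe");
            pipe.WaitForConnection();
            using var reader = new StreamReader(pipe);
            Console.WriteLine("server received: " + reader.ReadLine());
        });

        using (var pipe = new NamedPipeClientStream(".", "pos-pipe"))
        {
            pipe.Connect(timeout: 5000);   // retries until the server pipe exists
            using var writer = new StreamWriter(pipe) { AutoFlush = true };
            writer.WriteLine("get-customer:42");
        }

        server.Wait();
    }
}
```

Note that a single `NamedPipeServerStream` serves one client; to support many POS instances you would create a new server stream per connection.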
If you do not want to use a web service (based on the SOAP protocol),
you could try Web API. That way, you can build REST-based interfaces with JSON (JSON streaming between computers is faster than XML streaming).
I think the following link may be useful to you:
http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/using-web-api-with-aspnet-web-forms
I'm writing a calculation intensive program in C# using the TPL. Some preliminary benchmarking shows good reduction in computation time through using processors with more cores/threads.
However, there is a limit to how many threads are available on a single CPU (I think even the best Xeons money can buy currently have about 16).
I've been reading about how render farms with a 'grid' of multiple inexpensive CPUs in their own machines is a good way to increase the overall core count, but I have no idea how I go about implementing one of these. Is it implemented at the OS level with Microsoft server technology (and if so, how?), or do I also need to modify the C# code itself?
Any help or links to existing information would be greatly appreciated.
If you want to do this at scale (100s of nodes) then developing your own system is hard. You have to handle: nodes becoming unavailable, data replication to each node, tracking job progress... it's a long list. You also need to consider the sort of communication you're going to require between your nodes. Remember that the cost of sending a message (data) from one thread to another is tiny compared to the cost of sending it to another machine across a network (even a fast one). You may have to completely rewrite your multithreaded application to run well on a distributed system, even to the point of using a completely different algorithm.
Hadoop
Microsoft had plans to commercialize Dryad as LINQ to HPC, but this project was sidelined a while back (I worked on it before I left Microsoft). I believe you can still get the final "public preview", but it's unsupported. The SQL team opted to work with the Hadoop/Hortonworks people on getting a Windows/Azure/.NET-friendly Hadoop distribution off the ground. As far as I know, the only thing they have delivered so far is HDInsight, a Hadoop service running in Azure.
There is now a Microsoft .NET SDK For Hadoop which will allow you to manage a cluster and submit jobs etc. It does not seem to allow you to write code that executes on the Hadoop nodes. You can however use the Hadoop streaming API. This is fairly low level but is language agnostic so you can pretty much use it to integrate map reduce code written in any language with Hadoop. More details on this can be found in this blog post.
Hadoop for .NET Developers
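To make the streaming API concrete: a streaming mapper is just a program that reads records on stdin and writes tab-separated key/value pairs on stdout, which is why any language, including C#, can participate. This sketch simulates stdin so it runs standalone; the input lines are invented for the example.

```csharp
using System;
using System.IO;

// Shape of a Hadoop streaming mapper: read records on stdin, write
// tab-separated key/value pairs on stdout. Stdin is simulated here
// so the sketch runs standalone.
class Program
{
    static void Main()
    {
        Console.SetIn(new StringReader("error 500\ninfo ok\nerror 404\n"));

        string line;
        while ((line = Console.In.ReadLine()) != null)
        {
            // Emit the first word as the key and a count of 1 as the value;
            // Hadoop groups by key and feeds the groups to the reducer.
            string key = line.Split(' ')[0];
            Console.WriteLine(key + "\t1");
        }
    }
}
```

A matching reducer would read the grouped pairs back from stdin and sum the counts per key.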
If you want to do this at a smaller scale (10s of nodes) then I would look for something like MPI .NET. It looks like this project has been abandoned, but something similar is probably what you want.
You might look into something like Dryad - http://research.microsoft.com/en-us/projects/dryadlinq/default.aspx
It might, on the other hand, be a bit too much for your situation, but the ideas in Dryad could be simplified to your needs.
You might also look into making your own TaskScheduler, which could handle the distribution of threads to agents running on other boxes, but you would have to implement a simple socket client/server communication to get and push the data.
Another, slightly odd suggestion, which might be okay for investigating things, is to do the following.
Let the master of the calculation cut the problem into the number of available client computers.
Write the parameters that kick off the calculation for each client to a file shared by all on the network.
Let the clients look for files dedicated to them, and kick off the calculation for their piece when the file appears. The output is written back to a result file.
The server will sit and listen for all clients completing their jobs.
The files could be replaced with a database, low-level sockets, REST services, Web Services etc. depending on your needs.
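The file handoff described above can be sketched in one process. The file names (`job-client1.txt`, `result-client1.txt`) and the parameter format are invented for the example; a real client would poll or use a `FileSystemWatcher` instead of checking once.

```csharp
using System;
using System.IO;

// Sketch of the shared-file handoff: the master drops a parameter file for a
// client, the client picks it up, computes, and writes a result file back.
class Program
{
    static void Main()
    {
        string shared = Path.Combine(Path.GetTempPath(), "gridshare");
        Directory.CreateDirectory(shared);

        // Master: cut the problem and write one parameter file per client.
        File.WriteAllText(Path.Combine(shared, "job-client1.txt"), "range:0-999");

        // Client: look for its dedicated file, run its piece, write the result back.
        string jobFile = Path.Combine(shared, "job-client1.txt");
        if (File.Exists(jobFile))
        {
            string parameters = File.ReadAllText(jobFile);
            File.WriteAllText(Path.Combine(shared, "result-client1.txt"),
                              "done:" + parameters);
        }

        // Master: collect the result.
        Console.WriteLine(File.ReadAllText(Path.Combine(shared, "result-client1.txt")));
    }
}
```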
I'm looking at putting together a fairly straight-forward WCF-based service, and I have a question about how best to decouple it from the database.
Background: The service I'm going to be implementing is highly critical, geographically distributed, and needs to be as available as possible through a disaster or database failure. The business logic is pretty simple; it receives events from an external source, maintains a state table, and broadcasts processed updates to connected clients. I'm replacing a service that currently handles 400-600 incoming events per second, and approximately 10-20 concurrently connected clients. There will be multiple instances of the service running in multiple locations across the US. All instances host the same state data and share events. There is one instance of a master (SQL Server 2008) database in one location.
Challenge: I've built a number of applications similar to this in the past, and I have most of the architectural hurdles behind me. But there's one challenge I've come across to which I can't help but imagine there's a better solution: in my design, the database (MSSQL) is used only for persistence; the database is only read when the first instance of the service starts and for offline reporting. During normal operation, the application only ever writes historical data to the DB.
To fully decouple the application from the database, in the past I've used SQL Service Broker: On each server running the service, I install an instance of SQL Server Express that essentially just acts as a queue for Service Broker messages to the core (SSB "target") database. In normal operating conditions, the application executes all its SQL operations against the local instance, which queues/forwards them to the target DB via SSB. This works pretty well, and to be honest I'm fairly happy with it... As long as the local instance of SQL Server Express is up, the application will obviously stay unaware of problems at the target DB, network issues between it and the target DB, etc., and it's highly survivable in the case of a localized disaster. It's easy to monitor, not too horribly ugly to set up, and it's all supported. In short, it works, and I'm content to live with it if I have to.
But it strikes me as a bit of a kludge. It feels like there should be a better way to do that.
Obviously one option is to just queue the database operations in process. I don't like that because if I'm going to decouple things at all, I'd prefer to really decouple and keep my application itself as far away from the DB as possible. I could also write a Data Service that queues these operations... I actually briefly started down that path before thinking to myself, "Wait, isn't this what SSB already does?"
Due to unchangeable external constraints, a more robust/HA SQL Server architecture is not an option. I've been given my one DB cluster and that's that.
So I'm open to just about any thoughts and/or criticisms. Is there something obvious I'm missing? This feels like the kind of thing where there could be something stone-simple I've just somehow overlooked (though not for lack of searching.) Am I making some kind of wider architectural mistake here?
Thanks in advance!
My opinion is obviously biased, but for the record I can point to several fairly big projects that do (or did) it the same way, like High Volume Contiguous Real Time ETL, March Madness on Demand, or MySpace SQL Server Service Broker.
But several things have changed in recent years, and the primary change is the rise of PaaS offerings. Today you can have a highly available, scalable database and messaging platform, e.g. SQL Azure and Azure Queues/Azure Service Bus, or DynamoDB and SQS if you're willing to step outside SQL/ACID. Arguably, the price point of a park of SQL Express instances pushing to a central SQL Server Standard Edition will be lower than a PaaS solution, but it will be hard to beat PaaS in terms of availability, free maintenance and on-demand scaling.
So aside from the PaaS point of view above, I would argue that the solution you have is superior to pretty much anything else the MS stack has. WCF is certainly easy to program against, unless you have the anti-SOAP fever, but it has basically 0 (zero) to offer in terms of availability/reliability. Your process is gone === your data is gone, end of story. WCF over MSMQ is 'WCF' in name only; the programming model of queue channels is miles away from the http/net binding WCF programming model. And MSMQ has little to stand up against Service Broker (aside from ubiquity). But then again, as you probably know, I am really biased in my opinion...
I'm working on a project where I need to transfer data from a C# server to a Java client (running on an Android device).
I need to use the UDP protocol for real-time data and to maintain performance.
Searching the web, I didn't find any similar example, and I really don't know where to start.
Can you please suggest whether this can be done?
Thanks in advance.
Yes, it can be done. That's one of the beautiful things about the Internet protocols: support for standard sockets is so widespread and common that disparate devices running vastly different CPU architectures and software environments can interoperate with nearly no trouble.
Please make sure that UDP is really the best tool for the job. Do you need reliable delivery? Do you need in-order delivery? How much packetloss can you tolerate? How much packet re-ordering can you tolerate? Will your application handle 540 byte packets as gracefully as it will handle 1500 byte packets? Does your application need to protect against man in the middle attacks? How?
TCP is an incredible protocol. Many attempts to use UDP "for speed" wind up re-implementing many of the things that TCP provides for you already -- but most re-implementations are not done nearly as well as the real thing. Please don't be so quick to dismiss TCP.
To get started, just about any network tutorial for Java and C# should include something like a chat or echo server, the network programming equivalent of "Hello World". And that'd be good enough for a simple environment. If you intend for your server to handle dozens of clients simultaneously, it'll be more work, and if you intend for your server to scale into the hundreds or thousands, it'll be a different style of programming altogether.
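On the C# side, the echo-style "Hello World" amounts to a few lines with `UdpClient`. Both ends run over loopback in one process here; the port number and payload are invented, and the receiving side stands in for the Java/Android client (which would use `java.net.DatagramSocket`).

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Loopback UDP round trip with UdpClient: one socket bound to a known port
// receives, another sends a single datagram to it.
class Program
{
    static void Main()
    {
        using var receiver = new UdpClient(9050);        // bind the "client" port
        using var sender = new UdpClient();

        byte[] payload = Encoding.UTF8.GetBytes("telemetry:42");
        sender.Send(payload, payload.Length, "127.0.0.1", 9050);

        var remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] received = receiver.Receive(ref remote);  // blocks until a datagram arrives
        Console.WriteLine(Encoding.UTF8.GetString(received));
    }
}
```

Remember that each `Send` is a single datagram that may be lost, duplicated, or reordered; any reliability you need (as discussed above) is yours to build on top.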
Have you tried reading this:
http://nickstips.wordpress.com/2010/09/03/c-creating-and-sending-udp-packets/
The client is irrelevant, it can be Java, C++, or any other language/platform. Doesn't matter.
The protocol is still the same.
Hope this helps.
Try the Oracle documentation as a starting point with UDP; there you can find an example which is in Java, but as mentioned, the idea of the protocols is to support language-independent communication.
I have a C# windows service which listens to a MSMQ and sends each message out as an email.
Since there's no UI, I'd like to offer an ability to monitor this service to see things such as # messages in queue, # emails sent (by message type perhaps), # of errors, etc.
What is the best/recommended way to accomplish this? Is it WMI or performance counters? Is this data viewed using PerfMon or WMI CIM Studio? Does any approach allow one to monitor the service real-time as well as providing historical analysis?
I can dig into the details myself but would appreciate some broad guidance to help demystify this subject.
I do sometimes implement performance monitoring of my Windows services. What I do is keep internal counters of the things I am interested in, and create a way for an external program to access them. This can be a WCF service hosted inside the Windows service, which is fairly easy to implement and can be accessed over various channels. A client for this WCF service is also quite easy to write.
The other way is to create your own Windows performance counters, which can be read by a monitoring application such as PerfMon.
In either case you need to keep track of what's going on in your service and expose it to the outside. In your case, you would need to keep counts of the MSMQ queue size, emails sent, and errors encountered, and measure the time those samples were collected. Should be easy. Then go for the WCF service or custom performance counters. Use this MSDN article on how to create a custom counter. Hope this helps.
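The internal-counters part of either approach can look like the sketch below (the class and counter names are invented for the example). The service increments the counters as it works, and a WCF endpoint or a custom performance counter would expose `Snapshot()` to the outside.

```csharp
using System;
using System.Threading;

// Sketch of the internal-counters idea: the service increments these as it
// works; an external interface (WCF, performance counters) reads snapshots.
class ServiceStats
{
    private long _emailsSent;
    private long _errors;

    // Interlocked keeps the counters safe when worker threads update them.
    public void EmailSent() => Interlocked.Increment(ref _emailsSent);
    public void ErrorOccurred() => Interlocked.Increment(ref _errors);

    public (long EmailsSent, long Errors) Snapshot() =>
        (Interlocked.Read(ref _emailsSent), Interlocked.Read(ref _errors));
}

class Program
{
    static void Main()
    {
        var stats = new ServiceStats();
        stats.EmailSent();
        stats.EmailSent();
        stats.ErrorOccurred();

        var snapshot = stats.Snapshot();
        Console.WriteLine($"sent={snapshot.EmailsSent} errors={snapshot.Errors}");
    }
}
```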