Given a simple high-level architecture, e.g. a cloud service with a web role and a compute role, under what circumstances would we choose WCF as the communication method between the web role and the compute role, rather than Service Bus?
There is a lot of documentation and many examples regarding Service Bus, but I would like to understand whether there are any platform benefits to using Service Bus rather than WCF.
Given that the calls are synchronous and short, e.g. a typical API call for getting data onto the website, would you choose WCF over queuing messages and replies onto a queue?
Logically, it would appear that for a synchronous call WCF would offer the least overhead and latency.
I don't fully understand whether the platform offers any "clever" tricks to keep Service Bus operating as quickly as a TCP connection over WCF (given the queuing overhead), and I would like to understand this further.
At the moment, if I were to pick an implementation for this type of call I would choose WCF, which may be a little naive.
Just to be clear: the calls always return data; they are not long-running or fire-and-forget.
Thanks!
I think it depends on what specifically you want to do.
Service Bus is typically used for what I would call constant-contact interactions. It should be more performant, but it is more complex to set up. It also has bi-directional communication capabilities, so you get a lot of extra flexibility out of it.
I would swap WCF for the more modern Web API; both solve the same core problem of serving up content. I think of either one as just that, an API, not a platform for message passing and handling. An API and a service bus solve two different core problems.
I would actually solve the likely problem differently and use Azure Websites + WebJobs. It's the same sort of thing: you can bind the WebJob to an Azure queue, table, or blob and put messages on that storage mechanism, which the job picks up and does something with. I do not believe the web role should rely on content coming back from the job. Instead, the job can hit a SignalR hub on the Azure website after completion, which pushes state back down to the affected parties.
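As a hedged sketch of the WebJobs approach described above: a function bound to an Azure Storage queue, using the classic WebJobs SDK attribute style. The queue name "work-items" and the SignalR notification step are illustrative assumptions, not from the original setup.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The WebJobs SDK invokes this automatically whenever a message
    // appears on the bound queue.
    public static void ProcessQueueMessage(
        [QueueTrigger("work-items")] string message,
        TextWriter log)
    {
        log.WriteLine("Processing: " + message);
        // Do the work here, then push completion state to the affected
        // parties, e.g. via a SignalR hub on the web app.
    }
}
```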
Reference Materials:
WebJobs: https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/
SignalR: http://signalr.net/
Azure Web Apps: https://azure.microsoft.com/en-us/services/app-service/web/
Related
I have a .NET Core Console Application that I need to deploy to Azure and schedule to run once a day. The application is creating a TCP Socket to get market data. I need to schedule it to run in the morning, and the application will receive a close message near the end of the day from the market and automatically close. Approximately run time is estimated at 16 hours, 5 days a week.
Here are the options I've researched:
Cloud Service, which might be deprecated (I'm having a hard time validating the comments I've read to this effect)
Service Fabric - but this really looks like it's tailored for stateless applications that can spin up and down for scale. In my case it should always be a single instance (I do like the self-"healing": if my service does go down, it would be great if it were automatically restarted or a new one spun up)
Azure WebJob and Azure Scheduler. It looks like I could set this to "Always On" and add a settings file that has a cron configuration, but it seems like a waste of resources to have it always on. This option also appears to be limited in its deployment options - I can't see a way to set up Git integration and auto-deploy. Still, this does seem like the way to go.
I'm looking for the pros and cons of the options above for my use case, as well as any other options I might have missed.
There's one thing that seems to be overlooked here. This part:
The application is creating a TCP Socket to get market data.
Is that 80/TCP or 443/TCP? Does it talk HTTP over one of those ports?
Because if your application talks a custom protocol over an arbitrary TCP port, you can't use WebJobs: the App Service sandbox does not allow arbitrary port binding. This applies to ingress only. On egress (outbound) there's no restriction; you can make raw TCP requests from the WebJob to any destination and port.
From https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#network-endpoint-listening:
Network endpoint listening
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
There's no need to involve the Azure Scheduler service. WebJobs have a built-in cron implementation which is completely free.
Also, the Always On feature really just means "hit this site with synthetic requests every couple of minutes so it serves a 200 OK", which prevents the application pool from being unloaded from memory due to inactivity.
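To illustrate that built-in cron support: a `settings.job` file deployed alongside the WebJob can carry the schedule. The six-field cron expression below (seconds first) is an assumed example that would fire at 6:00 AM on weekdays:

```json
{
  "schedule": "0 0 6 * * 1-5"
}
```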
I would use the tank. Can't find anything wrong with the tank if you can pick your tank size. There's also very little maintenance with tanks.
Size (id) Cores Ram Net Bandwidth Total disk size
---------------------------------------------------------------
ExtraSmall 1 0.75 GB Low 19 GB
Small 1 1.75 GB Moderate 224 GB
...
ServiceDefinition.csdef if you need to listen on a socket:
<Endpoints>
<InputEndpoint name="aRawTCPEndpoint" protocol="tcp" port="54321" localPort="54321" />
</Endpoints>
Where does your application keep state? Memory/disk/off-the-box database? Cloud Service roles are stateless in nature and if one instance gets sick it's barbecued and a new one is spun up. It's crucial that state be kept off-the-box, in durable storage - Blob/Table storage, Azure SQL, DocumentDB, etc.
Imagine you built your house 6 years ago, and you used this material called ClassicBrick in the structure. It is a good material: strong, waterproof, scratch-resistant. But recently this newer and better material - let's call it Armritis (which, by the way, is designed to be used in BRIDGES, not houses, but I digress) - came out, which everybody tells you is better in every way. Do you tear down the house? Cloud Services are not deprecated, and until I see an official Microsoft roadmap saying otherwise I'm not going to entertain this idea in any way.
On the topic of Service Fabric, it CAN do stateful and it's actually one of its biggest selling points:
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-quick-start/:
Create a stateful service
Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain state reliably within the service itself, co-located with the code that's using it. State is made highly available by Service Fabric without the need to persist state to an external store.
Azure Functions is also worth a good look if you can speak HTTP over standard ports.
First, if you don't want to waste resources, we need to compare the pricing of Cloud Services, Service Fabric, and WebJobs + Scheduler. Here is the pricing calculator. Because your console application runs at specific scheduled times, it's better to save money while the job is not working. So a WebJob (if you have a web app in the meantime) plus the Scheduler would be a good choice to achieve your purpose.
Using a cloud service for such a tiny job is like using a tank to go to work
Service Fabric is mainly for building microservices-style applications, not console apps or jobs that run once a day.
WebJobs require a web app, so the remaining option is the web app that serves as the base for WebJobs.
You can create the schedule to make it run every day at a specific time, or execute it manually on demand.
I would go with either one solution, in this priority:
App Service: Web Apps (Web Job)
Virtual Machine
Cloud Service
App Service: Web Apps (WebJob) provides a Free plan, and it has started to support WebJobs on the Free plan. You will be able to work with files, should you need that. Just as mentioned in other answers, set up a scheduler. If you have doubts and think it is not kosher to do this on a website, then think of it as getting a free website as a bonus (if you use a paid plan). Either way, everything runs on a machine, be it with a web server or without. Maybe you will even revive some long-ago-forgotten web project of yours?
Cloud Service and Virtual Machine are both straightforward and simple. Gotta be honest, I haven't used Cloud Services, yet I think you can connect to one via Remote Desktop just like to an ordinary VM, and you will have complete control. I would choose a Virtual Machine over a Cloud Service, though, simply because it is cheaper.
Solutions that will NOT work:
Azure Scheduler does not fit your case, because it only allows HTTP/S requests and posting messages to Azure Storage queues, Azure Service Bus queues, or Azure Service Bus topics.
Personally, I would go with a WebJob without Always On, and use Azure Scheduler to fire the WebJob at the desired times using HTTPS. The WebJob can then make the calls needed to get the data. This does not need Always On, since the Scheduler call wakes it up anyway.
Azure Functions might also be worth a look, though they aren't meant for long-running tasks.
This would also probably be the cheapest option as the Web App can probably run on Free tier (depending on how long the job takes) and the Scheduler can also be Free tier.
Scenario:
I want to implement an MSMQ queue that users put messages on through the System.Messaging APIs, with a listener that always observes the queue, so that whenever there is a message in the queue I can perform a database update.
My first approach was to implement an MSMQ trigger. I was able to implement a COM interop DLL trigger, but I wasn't able to do the database operation and couldn't figure out what was wrong, though I tried a lot. Then I came to know about the WCF MSMQ bindings. As I am new to WCF, I have some doubts:
Which is the best approach to host WCF in this case: IIS with WAS, or a Windows service?
And for this kind of listener service, is a client necessary, or can we write the database operations directly in the service host operations without client invocation?
Then I came to know about this WCF MSMQ binding. As I am new to this WCF I have some doubts
Well, that's valid. WCF has a fairly steep learning curve, is very config-heavy, and is not everyone's cup of tea.
However, if you're integrating to MSMQ, WCF has very good support and is rock solid in terms of the way it is implemented.
Which is the best approach to host WCF for this case. Is it IIS with WAS or Windows service?
Unless you're hosting exclusively in a web environment, I would choose a Windows service every time. Using something like Topshelf, the deployment and management overhead is tiny, and there are no external dependencies.
Remember to use msmqIntegrationBinding rather than netMsmqBinding if the sending side uses raw System.Messaging: netMsmqBinding relies on WCF at both ends, whereas msmqIntegrationBinding supports System.Messaging on the client. Or, even better, use WCF on the client too; then you can use netMsmqBinding, which supports "typed" messages.
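For illustration, a minimal service-side configuration using msmqIntegrationBinding might look like the following. The queue path, service, and contract names are placeholders, not from the original setup:

```xml
<system.serviceModel>
  <services>
    <service name="MyApp.OrderService">
      <!-- msmqIntegrationBinding lets a plain System.Messaging client feed this endpoint -->
      <endpoint address="msmq.formatname:DIRECT=OS:.\private$\orders"
                binding="msmqIntegrationBinding"
                bindingConfiguration="rawMsmq"
                contract="MyApp.IOrderProcessor" />
    </service>
  </services>
  <bindings>
    <msmqIntegrationBinding>
      <binding name="rawMsmq" exactlyOnce="false">
        <security mode="None" />
      </binding>
    </msmqIntegrationBinding>
  </bindings>
</system.serviceModel>
```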
The wording of the question doesn't necessarily do the issue justice...
I've got a client UI sitting on a local box and a background Windows service to support it while it performs background functions.
The client UI is just the presentation layer and the windows service does all the hard hitting action... so there needs to be communication between the two of them. After spending a while on google and reading best practices, I decided to make the service layer using WCF and named pipes.
The client UI is the WCF client and the windows service acts as the WCF host (hosting locally only) to support the client.
So this works fine, as it should: the client UI can pass data to the WCF host. But my question is, how do I make that data useful? I've got a couple of engines running on the Windows service/WCF host, but the WCF host is completely unaware of the existence of any background engines. I need the client's communication requests to be able to interact with those engines.
Does anybody have any idea of a good design pattern or methodology on how to approach facilitating communication between a WCF host and running threads?
I think your best bet is to have some static properties or methods that can be used to exchange data between the service threads/processes and the WCF service.
Alternatively, the way we approach this is through a database: the client (via the WCF service) queues up requests for the worker service to respond to, and the worker service, when it is available, updates the database with the responses to those requests. The client then polls the database (through WCF) on a regular basis to retrieve the results of any outstanding requests.
For example, if the client needs a report generated, we fire off a request through WCF and WCF creates a report generation request in the database.
The service responsible for generating reports regularly polls this table and, when it finds a new entry, it spins off a new thread/process that generates the report.
When the report has completed (either successfully or in failure), the service updates the database table with the result.
Meanwhile, the client asks the WCF service on a regular basis if any of the submitted reports have completed yet. The WCF service in turn polls the table for any requests that have been completed, but not been delivered to the client yet, gathers the information from them, and returns them to the client.
This mechanism allows us to do a couple of things:
1) We can scale the number of services processing these requests across multiple physical/virtual machines as the workload increases.
2) A given service can support numerous clients.
3) Through the WCF interface, we can extend this support to any client platform that we choose to support (web, win, tablet, phone, etc).
Forgot to mention:
Just because we elected to use a database doesn't mean you have to in order to implement this pattern. You can easily implement the same functionality by creating a static request collection that the WCF service and worker service access in much the same way that we use the database.
You will just need to be very careful about properly obtaining and releasing locks on the static properties to avoid cross-thread collisions or deadlocks.
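A hedged sketch of that static-collection variant: a shared request store that both the WCF service and the worker service access, guarded by a lock as the caveat above suggests. All names here are illustrative, not from the original system.

```csharp
using System;
using System.Collections.Generic;

public static class RequestStore
{
    private static readonly object Sync = new object();
    private static readonly Queue<string> Pending = new Queue<string>();
    private static readonly Dictionary<Guid, string> Results =
        new Dictionary<Guid, string>();

    // Called by the WCF service when a client submits work.
    public static void Submit(string request)
    {
        lock (Sync) { Pending.Enqueue(request); }
    }

    // Called by the worker service's polling loop.
    public static bool TryTakeNext(out string request)
    {
        lock (Sync)
        {
            if (Pending.Count > 0) { request = Pending.Dequeue(); return true; }
            request = null;
            return false;
        }
    }

    // Called by the worker when a request finishes; read by the WCF
    // service when the client polls for results.
    public static void Complete(Guid id, string result)
    {
        lock (Sync) { Results[id] = result; }
    }

    public static bool TryGetResult(Guid id, out string result)
    {
        lock (Sync) { return Results.TryGetValue(id, out result); }
    }
}
```

Keeping every access inside the same lock is what avoids the cross-thread collisions mentioned above; the cost is that the store becomes a contention point under heavy load.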
I'm trying to create a feedback system to which all messages get posted and are then published back to the correct subsystem. We use queues quite heavily, and I want to keep the subscriber code as clean as possible. I want to switch on the message ID that arrives in the feedback system and publish to its specific subscriber. I don't want to make a service for each subscriber to listen for messages. I was thinking I could set up a queue for each subscriber and a trigger to invoke a COM+ component, but I'm looking for a more modern way.
I was looking into NServiceBus, but it seems I'd need to make a service/executable/web service for each listening system (it's a little less work to make a C# DLL and invoke a method), and I'm not sure whether NServiceBus can handle dynamic endpoints based on a preloaded config (loaded from a DB). WCF is also a choice; it can certainly handle dynamic endpoints.
What do you think is the best solution for the least amount of code, and the most scalable for new systems to subscribe?
Thanks
In case you are OK with online solutions, you could take a look at the latest .NET Services SDK for Windows Azure, which includes a queue-based service bus: http://www.microsoft.com/azure/netservices.mspx It relies on WCF messages and supports routing, etc. There are some blog posts about this here: http://vasters.com/clemensv/default.aspx
Another framework you could try is MassTransit http://code.google.com/p/masstransit/
It seems you're looking for a service host, rather than a message broker. If so, Microsoft's recommended way is to host your WCF services in IIS. They can still use MSMQ as transport, but the services themselves will be managed by IIS. IIS has evolved significantly since its early days as HTTP server, now it's closer to an application server, with its choice of transports (TCP, MSMQ, HTTP), pooling, activation, lifetime policies etc.
Although I find WCF+MSMQ+IIS somewhat overcomplicated this is the price you pay to play on the Microsoft field.
For a nice and simple message broker, you can use ActiveMQ instead of MSMQ; it will give you message brokering as well as pub/sub. It's quite easy to work with in .NET - check this link out: http://activemq.apache.org/nms/
I have 50+ kiosk style computers that I want to be able to get a status update, from a single computer, on demand as opposed to an interval. These computers are on a LAN in respect to the computer requesting the status.
I researched WCF; however, it looks like I'll need IIS installed, and I would rather not install IIS on 50+ Windows XP boxes - so I think that eliminates using a web service, unless it's possible to have a WinForms app host a web service?
I also researched using System.Net.Sockets and even got a barely functional prototype going however I feel I'm not skilled enough to make it a solid and reliable system. Given this path, I would need to learn more about socket programming and threading.
These boxes are running .NET 3.5 SP1, so I have complete flexibility in the .NET version however I'd like to stick to C#.
What is the best way to implement this? Should I just bite the bullet and learn Sockets more or does .NET have a better way of handling this?
edit:
I was going to go with a two way communication until I realized that all I needed was a one way communication.
edit 2:
I was avoiding the traditional server/client setup and going with the inverse because I wanted to avoid consuming too much bandwidth and wasn't sure what kind of overhead I was dealing with. I was also hoping to have more control of the individual kiosks. After looking at it, I think I can still have that with WCF and connect by IP (I wasn't aware I could connect by IP; I was thinking I would have to add 50 web services or something).
WCF does not have to be hosted within IIS; it can be hosted within your WinForms app, as a console application, or as a Windows service.
You can have each computer host its service within the WinForms app, and write a program on your own computer that calls each computer's service to get the status information.
Another way of doing it is to host one service on your own computer and have the 50+ computers call that service whenever their status is updated; the service can use a database to persist the status data of each node in the network. This option is easier to maintain and more scalable.
P.S.
WCF aims to replace .NET Remoting; the alternatives here would be the net.tcp or net.pipe bindings.
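A hedged sketch of the first option, self-hosting a WCF status service inside a WinForms app or Windows service with no IIS involved. The contract, port, and status payload are illustrative assumptions:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IKioskStatus
{
    [OperationContract]
    string GetStatus();
}

public class KioskStatusService : IKioskStatus
{
    public string GetStatus()
    {
        return "OK"; // replace with real status gathering
    }
}

class Program
{
    static void Main()
    {
        // Self-host over net.tcp; no web server required on the kiosk.
        ServiceHost host = new ServiceHost(typeof(KioskStatusService),
            new Uri("net.tcp://localhost:8523/status"));
        host.AddServiceEndpoint(typeof(IKioskStatus), new NetTcpBinding(), "");
        host.Open();
        // The central machine can now call net.tcp://<kiosk-ip>:8523/status.
        Console.ReadLine();
        host.Close();
    }
}
```

In a WinForms app the `Open`/`Close` calls would live in the form's load and close handlers rather than `Main`.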
Unless you have plans to scale this to several thousand clients I don't think WCF performance will even be a fringe issue. You can easily host WCF services from windows services or Winforms applications, and you'll find getting something working with WCF will be fairly simple once you get the key concepts.
I've deployed something similar with around 100-150 clients with great success.
There's plenty of resources out on the web to get you started - here's one to get you going:
http://msdn.microsoft.com/en-us/library/aa480190.aspx
Whether you use a web service or WCF on your central server, you only need to install and configure IIS on the server (and not on the 50+ clients).
What you're trying to do is a little unclear from the question, but if the clients need to call the server (to get a server status, for example), then they just call a method on the webservice running on the server.
If instead you need to have the server call the clients from time to time, then you'll need to have each client call a sign-in method on the server webservice each time the client starts up. The sign-in method would take a delegate method from the client as a parameter. The server would then call this delegate when it needed information from the client.
Setting up each client with its own web service would represent an inversion of the traditional (one server, multiple clients) client/server architecture, and as you've already noted this would be impractical.
Do not use remoting.
If you want robustness and scalability you end up ruling out everything but what are essentially stateless remote procedure calls. Since this is exactly the capability of web services, and web services are simpler and easier to build, remoting is an essentially pointless technology.
Callbacks with remote delegates are on the performance/reliability forbidden list, so if you were thinking of using remoting for that, think again.
Use web services.
I know you don't want to be polling, but I don't think you need to. Since you say all your units are on a single network segment, I suggest UDP for broadcast change notifications - essentially setting a dirty flag - and allowing the application to (re-)fetch on demand. It's still not reliable, but it's easy and very fast because it's broadcast.
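A minimal sketch of that UDP "dirty flag" broadcast: the server sends a tiny notification to the LAN segment, and kiosks that receive it re-fetch state on demand. Port 15000 and the payload are assumed choices:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

public class ChangeNotifier
{
    public static void BroadcastDirty()
    {
        using (UdpClient udp = new UdpClient())
        {
            udp.EnableBroadcast = true;
            byte[] payload = Encoding.ASCII.GetBytes("dirty");
            // Best-effort and unreliable by design: listeners simply
            // refresh the next time they see this flag.
            udp.Send(payload, payload.Length,
                new IPEndPoint(IPAddress.Broadcast, 15000));
        }
    }
}
```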
As others have said you don't need IIS, you can self-host. See ServiceHost class for details on how to do this.
I'd suggest using .NET Remoting. It's quite easy to implement and doesn't require anything else.
For me, it is better to learn networking, or the manual way of socket communication. Web services are much slower because they carry metadata.
Your clients and servers can become multithreaded applications; just imitate the request/response architecture. It is much easier to implement a network application this way.
If you just need a status update, you can use a much simpler solution, such as simple TCP server/client messaging or, as orrsella said, Remoting. WCF is kind of overkill here.
One note, though: if all your 50+ kiosks are connected via the internet, then you might need a VPN or an open port on each kiosk (which is a security risk) so that your server can retrieve status updates from each kiosk.
We had a similar situation, but the status is sent to our server periodically, so we only have one port to protect/secure. The frequency of the update is configurable to accommodate slower clients.
As someone who implemented something like this with over 500+ clients and growing:
Message queuing is the way to go.
We went from an internally developed TCP server and client, to WCF polling, and ended up with message queuing. It's the only guaranteed way to get data to and from clients and servers over the internet. As a bonus, many of these solutions come with an extensive framework making it trivial to implement publish/subscribe, send-one-way, point-to-point, and request/reply messaging. Some of these are possible with WCF, but it will involve crying, shouting, whimpering, and long nights, not to mention gallons of coffee.
A couple of important remarks:
Letting a process poll the clients instead of the other way around is a bad idea: it is not scalable at all, and you will soon run into trouble when the polling process takes too long to complete. Not to mention having to handle all the IP addresses (do you have access to all clients on the required ports? What happens when an IP changes? etc.)
What we have done: the clients send status updates to a central message queue at a regular interval (so you can easily implement live updates in the UI), and each client also listens on its own queue for a GetStatusRequest message. If it receives one, it answers (with a timeout). This way, we can see the overall status of all clients at all times and get the specific status of a specific client when needed.
Concerning bandwidth: kiosks usually show images/video, etc., so status messages of 1 KB or less will not add significant overhead.
I CANNOT stress enough that the design you present will have a very intensive development cycle AND will not scale or extend well (trust me, we have learned this lesson). On top of that, building a good client/server protocol for this type of thing is a hard job whose effort is totally wasted if you make a design error (migrating a protocol is not easy).
We have built our solution on top of ActiveMQ (using the NMS library for C#) and are currently extending Simple Service Bus for our internal workings.
We only use WCF for the communication between our winforms app and the centralized service(s)
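The listen-and-reply pattern described above can be sketched with the Apache.NMS API roughly as follows. The broker URL, queue names, and status payload are assumptions for illustration:

```csharp
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class KioskStatusListener
{
    static void Main()
    {
        IConnectionFactory factory = new ConnectionFactory("tcp://broker-host:61616");
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        using (IMessageConsumer consumer =
            session.CreateConsumer(session.GetQueue("kiosk-42.requests")))
        using (IMessageProducer producer =
            session.CreateProducer(session.GetQueue("central.status")))
        {
            // Answer each GetStatusRequest by sending the current status
            // to the central queue; regular heartbeats would go the same way.
            consumer.Listener += message =>
                producer.Send(session.CreateTextMessage("kiosk-42: OK"));
            connection.Start();
            Console.ReadLine(); // keep listening until the service stops
        }
    }
}
```

Because each kiosk owns its request queue, the central process never needs the kiosks' IP addresses or open inbound ports; only the broker has to be reachable.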