So, a brief summary of what I'm working with at the moment:
I'm deciding whether I can do this with 1 topic versus needing N topics, both with the relevant metadata/filters.
I have 3 pieces, pretty much: a socket server (worker role) that units in the field connect to, Azure Service Bus messaging, and finally a web app.
The user can queue commands to be sent to devices via the web app, but we need to be able to hold messages in the queue until the device comes online, at which point it will get all the messages. This is where I am confused...
I was initially working along the lines of dynamically creating 1-9999 topics (a limit of 10,000 topics can be created, so using the last 4 chars of the serial) at the web app as messages are queued, and then having the device's full serial within the metadata. This way, as devices connect to the socket server, I can create N subscriptions with specific rules and shut them down when the devices disconnect.
But now I'm wondering if I could just have 1 topic and place all the logic within the metadata?
I am very new to SqlFilters with Service Bus, so any help would be greatly appreciated :)
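To make it concrete, the single-topic version I have in mind would look roughly like this (using the classic Microsoft.ServiceBus.Messaging SDK; the names and serial are placeholders):

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

string connectionString = "<service bus connection string>";

// One topic for everything; each connected device gets its own filtered subscription.
var ns = NamespaceManager.CreateFromConnectionString(connectionString);
if (!ns.TopicExists("device-commands"))
    ns.CreateTopic("device-commands");

// Socket server, on device connect: a subscription whose SqlFilter matches this serial only.
string serial = "SN0001234567";
if (!ns.SubscriptionExists("device-commands", serial))
    ns.CreateSubscription("device-commands", serial,
        new SqlFilter("DeviceSerial = '" + serial + "'"));

// Web app, when the user queues a command: stamp the serial into the message properties.
var client = TopicClient.CreateFromConnectionString(connectionString, "device-commands");
var msg = new BrokeredMessage("reboot");
msg.Properties["DeviceSerial"] = serial;
client.Send(msg);

One thing I'm unsure about with this approach: from what I've read, a subscription only receives messages sent after it exists, so creating subscriptions on connect would miss commands queued while the device was offline (the subscriptions might have to be created up front instead).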
Good question! First of all, I should say that I would use IoT Hub in your situation, which is the "queue"-like service optimized for IoT scenarios, with management and commanding included. Or Event Hubs, but they are less optimized for the command pattern.
1) Event Hubs
2) IoT Hubs
The first one is for scenarios that are more event-oriented. What I mean is that implementing management of the device from the backend will be more complex with Event Hubs and less complex with IoT Hub.
I would highly recommend that you take a look at these services, because Service Bus is a great service, but the listed services are more IoT-oriented.
From the architecture standpoint, Microsoft recently published the IoT Reference Architecture whitepaper, which you may download here. It has the recommendations, services, best practices, etc. that may be used for Azure + IoT projects, from the Microsoft point of view.
Another helpful resource could be http://azureiotsuite.com. It is the reference IoT architecture, implemented. So, if you click Create, you will get one of two reference architectures (remote monitoring or predictive maintenance) deployed into your Azure subscription and will be able to review all of the flows.
So, I would recommend considering IoT Hub/Event Hubs instead of Service Bus Topics/Queues, because in the IoT field a service that is optimized for these workloads should perform better than one that was not optimized for them initially.
Second, you did not specify how you connect your devices to the Worker Role; from what I have seen, there is a good library for doing that called SuperSocket.
So, as I see it, your solution architecture may look like this:
Device 2 Cloud:
Devices => Gateway (SuperSocket or whatever) || IoT Hub => Device Registry (see links specified above)
Cloud 2 Device:
User Interface => IoT Hub with registered device => Device
The Device Registry is a more convenient way to do the IoT flows than transferring IDs around. Dynamic creation of entities has some downsides - imagine, for example, that the creation command returns a timeout error. Better to use the optimized services, I believe.
When the device is offline, it will not poll the queue. Messages have some retention time before they expire; that is a built-in mechanism.
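For example, the cloud-to-device flow with the two IoT Hub SDKs may look like this (a rough sketch; connection strings and the device ID are placeholders):

using System;
using System.Text;
using Microsoft.Azure.Devices;                    // service-side SDK
using DeviceSdk = Microsoft.Azure.Devices.Client; // device-side SDK

string hubConnectionString = "<iothubowner connection string>";
string deviceConnectionString = "<device connection string>";

// Backend: queue a command for a registered device. IoT Hub holds
// cloud-to-device messages until the device connects (within the message TTL).
var serviceClient = ServiceClient.CreateFromConnectionString(hubConnectionString);
await serviceClient.SendAsync("device-serial-001", new Message(Encoding.UTF8.GetBytes("reboot")));

// Device: drain pending commands when it comes online.
var deviceClient = DeviceSdk.DeviceClient.CreateFromConnectionString(deviceConnectionString);
var received = await deviceClient.ReceiveAsync();
if (received != null)
{
    // Process the command, then remove it from the device's queue.
    await deviceClient.CompleteAsync(received);
}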
Related
There are x number of devices in the system and y number of device owners. These devices are actively sending data to Azure IoT Hub - temperature, humidity, voltage, etc.
These devices have RFID chips inside them that store information about the device, and I'm working on a Xamarin.Forms project that would allow Android/iOS users to get this information.
Upon retrieving this information about the device, I want users to be able to see and monitor the device-to-cloud Azure IoT Hub communication.
In similar fashion, I'm working on a web dashboard where the user could just select a device from a dropdown and do the same thing.
I've tried implementing the Azure Device Explorer approach, but here's the problem: in Azure IoT Hub, one consumer group can only have 5 clients at once, meaning that if 6 people want to monitor the live device-to-cloud communication, only 5 of them will be able to do it. Furthermore, what this example does is get all incoming IoT Hub messages and then query and filter only the information that the user wants to see - which would probably put quite a load on the mobile phones.
So the question is: is there a way to see live device-to-cloud Azure IoT Hub communication from a single device?
I'm open to adding other Azure services.
The Azure IoT Hub telemetry path (hot path) is a data stream from all devices, ingested internally into the default built-in Event Hub or externally via a custom endpoint. To see telemetry data from a single device, it is necessary to capture the telemetry stream and apply a filtering technique for the specific data. In other words, the telemetry stream must flow transparently through the stream pipeline without any added latency, and the capture point will hold a copy of the telemetry window.
This warm path is close to real time (the hot path), and its window can be configured from 1 to 15 minutes.
An example of the warm path uses an Event Hub feature called Capture. Note that this feature is not available in Azure IoT Hub itself, which is why a custom endpoint with an external Event Hub is used.
Once we have a telemetry warm path (stored in blobs, for example one per minute), we can query device messages, twin change events, and device lifecycle events based on time, value, etc., on user request, by eventing, or by trigger.
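As a minimal sketch of the filtering side, reading the Event Hub-compatible endpoint with the Microsoft.Azure.EventHubs package (connection string, partition, and device ID are placeholders):

using System;
using System.Linq;
using System.Text;
using Microsoft.Azure.EventHubs;

// The Event Hub-compatible connection string of the IoT Hub built-in endpoint
// (must include the EntityPath).
string connectionString = "<event hub-compatible connection string>";

var client = EventHubClient.CreateFromConnectionString(connectionString);
var receiver = client.CreateReceiver(PartitionReceiver.DefaultConsumerGroupName, "0",
    EventPosition.FromEnqueuedTime(DateTime.UtcNow.AddMinutes(-5)));

var batch = await receiver.ReceiveAsync(100);
foreach (var ev in batch ?? Enumerable.Empty<EventData>())
{
    // IoT Hub stamps the sending device's ID into each event's system properties.
    if ((string)ev.SystemProperties["iothub-connection-device-id"] == "device-serial-001")
        Console.WriteLine(Encoding.UTF8.GetString(ev.Body.Array, ev.Body.Offset, ev.Body.Count));
}

Each reader of this kind counts against the 5-readers-per-consumer-group limit, so having one backend service do the filtering once (rather than every phone) fits the scenario better.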
I am working on a similar project. We developed a web app with a back-end database; this db stores all the IoT messages as they come in, and the web app sends push notifications via Azure Notification Hubs to the mobile clients interested in a device. All the business logic and operations are written in the Web API project.
The mobiles do not communicate with IoT Hub directly; they communicate via the API to get the information from the db, and using push notifications helps keep everything real time.
We are using Azure Functions to read messages from IoT Hub as they come in and process them.
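A minimal sketch of such a function, assuming the Event Hub trigger bound to the IoT Hub built-in endpoint (function name and connection setting are placeholders):

using System.Text;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessIoTMessage
{
    // Fires for every message arriving at the IoT Hub built-in endpoint.
    [FunctionName("ProcessIoTMessage")]
    public static void Run(
        [EventHubTrigger("messages/events", Connection = "IoTHubConnection")] EventData message,
        ILogger log)
    {
        var payload = Encoding.UTF8.GetString(message.Body.Array, message.Body.Offset, message.Body.Count);
        // Store in the database and/or fan out a push notification here.
        log.LogInformation($"IoT message: {payload}");
    }
}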
I have a .NET Core console application that I need to deploy to Azure and schedule to run once a day. The application creates a TCP socket to get market data. I need to schedule it to run in the morning; the application will receive a close message from the market near the end of the day and close automatically. Approximate run time is estimated at 16 hours, 5 days a week.
Here are the options I've researched:
Cloud Service, which might be deprecated (I'm having a hard time validating the comments I've read to this effect)
Service Fabric - but this really looks like it's tailored for stateless applications that can spin up and down for scale. In my case, it should always be a single instance (I do like the self-"healing": if my service goes down, it would be great if it were automatically restarted or a new one spun up)
Azure WebJob plus Azure Scheduler. It looks like I could set this to "Always On" and add a settings file that has the cron configuration, but it seems like a waste of resources to have it always on. This option also appears to be limited in its deployment options - I can't set up git integration and auto-deploy (that I can see). This does seem like the way to go, though.
I'm looking for the pros and cons of the options above for my use case, or any other options that I might have missed.
There's one thing that seems to be overlooked here. This part:
The application is creating a TCP Socket to get market data.
Is that 80/TCP or 443/TCP? Does it talk HTTP over one of those ports?
Because if your application talks a custom protocol over an arbitrary TCP port, you can't use WebJobs. The App Service sandbox does not allow arbitrary port binding. This applies to ingress only. For egress (outbound) there's no restriction: you can make raw TCP requests from the WebJob to any destination and port.
From https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#network-endpoint-listening:
Network endpoint listening
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
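In other words, an outbound connection like the following is fine from a WebJob (host and port are hypothetical); it's listening that's blocked:

using System;
using System.IO;
using System.Net.Sockets;

// Outbound raw TCP from a WebJob is allowed by the sandbox.
using (var tcp = new TcpClient())
{
    await tcp.ConnectAsync("feed.example-market.com", 9001); // hypothetical market data feed
    using (var reader = new StreamReader(tcp.GetStream()))
    {
        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            Console.WriteLine(line); // process each tick/quote as it arrives
        }
    }
}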
There's no need to involve the Azure Scheduler service. WebJobs have a built-in cron implementation, which is completely free.
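For example, a settings.job file at the root of the WebJob with a six-field NCRONTAB expression (this one fires at 06:00, Monday to Friday) is all the scheduling configuration needed:

{
  "schedule": "0 0 6 * * 1-5"
}

The fields are second, minute, hour, day, month, and day of week.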
Also, the "Always On" feature really means "hit this site with synthetic requests every couple of minutes so it serves a 200 OK", thus preventing the application pool from being unloaded from memory due to inactivity.
I would use the tank. I can't find anything wrong with the tank if you can pick your tank size, and there's very little maintenance with tanks.
Size (id)     Cores   RAM       Net bandwidth   Total disk size
---------------------------------------------------------------
ExtraSmall    1       0.75 GB   Low             19 GB
Small         1       1.75 GB   Moderate        224 GB
...
ServiceDefinition.csdef if you need to listen on a socket:
<Endpoints>
<InputEndpoint name="aRawTCPEndpoint" protocol="tcp" port="54321" localPort="54321" />
</Endpoints>
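Inside the worker role, that declared endpoint can then be resolved at runtime, roughly like this (classic Microsoft.WindowsAzure.ServiceRuntime API):

using System.Net.Sockets;
using Microsoft.WindowsAzure.ServiceRuntime;

// Look up the endpoint declared in ServiceDefinition.csdef and start listening on it.
var endpoint = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["aRawTCPEndpoint"].IPEndpoint;
var listener = new TcpListener(endpoint);
listener.Start();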
Where does your application keep state? Memory/disk/off-the-box database? Cloud Service roles are stateless in nature: if one instance gets sick, it's barbecued and a new one is spun up. It's crucial that state be kept off-the-box, in durable storage - Blob/Table storage, Azure SQL, DocumentDB, etc.
Imagine you built your house 6 years ago, and you used this material called ClassicBrick in the structure. It is a good material: strong, waterproof, scratch-resistant. But recently a newer and better material - let's call it Armritis (which, by the way, is designed to be used in BRIDGES, not houses, but I digress) - came out, which everybody tells you is better in every way. Do you tear down the house? Cloud Services are not deprecated, and until I see an official Microsoft roadmap saying otherwise, I'm not going to entertain this in any way.
On the topic of Service Fabric: it CAN do stateful, and that's actually one of its biggest selling points:
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-quick-start/:
Create a stateful service
Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain state reliably within the service itself, co-located with the code that's using it. State is made highly available by Service Fabric without the need to persist state to an external store.
Azure Functions is also worth a good look if you can speak HTTP over standard ports.
First, if you don't want to waste resources, we need to compare the pricing of Cloud Services, Service Fabric, and WebJobs & Scheduler. Here is the pricing calculator. Because your console application job will run on a specific schedule, it is better to save money while the job is not working. So, WebJobs (if you have a web app in the meantime) & Scheduler would be a good choice to achieve your purpose.
Using a cloud service for such a tiny job is like using a tank to go to work.
Service Fabric is mainly for building microservices-style applications, not console apps or jobs that run once a day.
WebJobs require a web app, so you are left with the remaining option: the Web App, which serves as the host for the WebJobs.
You can create a scheduler and make it run every day at a specific time, or execute it manually on demand.
I would go with one of these solutions, in this priority:
App Service: Web Apps (Web Job)
Virtual Machine
Cloud Service
App Service: Web Apps (WebJobs) provides a free plan, and it has started to support WebJobs on the free plan. You will be able to work with files, should you need to. As mentioned in other answers, just set up a scheduler. If you have doubts and think it is not kosher to run it on a website, then think of it as getting a free website (if you use a paid plan) as a bonus. Either way, everything runs on a machine - be it with a web server or without. Maybe you will even revive some long-forgotten web project of yours?
Cloud Service and Virtual Machine are both straightforward and simple. To be honest, I haven't used Cloud Services, yet I think you can connect to one via Remote Desktop just like to an ordinary VM, and you will have complete control. I would choose a Virtual Machine over a Cloud Service, though, simply because it is cheaper.
Solutions that will NOT work:
Azure Scheduler does not fit your case, because it only allows HTTP/S requests and posting messages to Azure Storage queues, Azure Service Bus queues, or Azure Service Bus topics.
Personally, I would go with a WebJob without Always On and use Azure Scheduler to fire the WebJob at the desired times via HTTPS. The WebJob can then make the calls needed to get the data. This does not need Always On, since the Scheduler call wakes the site up anyway.
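For reference, a triggered WebJob can be fired over HTTPS through the Kudu endpoint; a rough sketch (site name, job name, and deployment credentials are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Fire a triggered WebJob on demand via the Kudu API, authenticating
// with the site's deployment (publishing) credentials.
var client = new HttpClient();
var creds = Convert.ToBase64String(Encoding.ASCII.GetBytes("$mysite:publishPassword"));
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", creds);
var response = await client.PostAsync(
    "https://mysite.scm.azurewebsites.net/api/triggeredwebjobs/MarketDataJob/run", null);
response.EnsureSuccessStatusCode();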
Azure Functions might also be worth a look, though they aren't meant for long-running tasks.
This would also probably be the cheapest option, as the Web App can probably run on the Free tier (depending on how long the job takes) and the Scheduler can also be on the Free tier.
I want to send commands from one application (e.g. running on a mobile device) to another application (e.g. running on an embedded device) which is located in a different network.
I don't want to use a VPN or something like port forwarding. So after some research I found some other ways to do it, for example via a cloud messaging service like Azure Service Bus.
Sending commands/messages from the first application to the service bus is not a problem for me. But I don't really understand how to get a connection from the cloud service to the second device. I know I can also send a message from the second device to a cloud service, e.g. via HTTPS, and then the cloud service can keep that connection alive. As long as the connection is alive, I can send messages to the second device.
But there are some points I can't understand:
When I have thousands of devices, isn't it a problem to keep thousands of connections alive?
How can the second device listen on the connection for new messages? Doesn't that need too many resources on the embedded device?
I have also read about "long polling" techniques and WebSockets. I know too little to understand the advantages and disadvantages of those concepts. Which technique should I use for my problem?
To be more platform agnostic, I don't want to use services like Azure IoT Hub.
Edit:
Maybe I can use a web service and implement an MQTT broker?
I think the mentioned MQTT broker will get you there, especially as your use case is exactly what MQTT and its implementations (brokers and clients) have been built for.
The simplified story is the following:
An MQTT client running in your application 'publishes' an MQTT message using a 'topic' (think routing key) to the MQTT broker. An MQTT client running on your devices holds a subscription for the same 'topic' on the broker. This enables the broker to route the message from the application to the devices without requiring that they know about each other.
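A minimal sketch of that story using the M2Mqtt client (the Paho-listed .NET client; broker address, client IDs, and topic are placeholders):

using System;
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

// Device side: subscribe to its own command topic.
var device = new MqttClient("broker.example.com");
device.MqttMsgPublishReceived += (s, e) =>
    Console.WriteLine(e.Topic + ": " + Encoding.UTF8.GetString(e.Message));
device.Connect("device-4711");
device.Subscribe(new[] { "devices/4711/commands" },
    new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

// Application side: publish a command; the broker routes it by topic.
var app = new MqttClient("broker.example.com");
app.Connect("mobile-app");
app.Publish("devices/4711/commands", Encoding.UTF8.GetBytes("reboot"),
    MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE, false);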
As far as I understand your question, your concerns are the following:
1. Can all the devices be connected at the same time (thousands of open TCP connections), and therefore receive messages published by your first application via the broker in 'real time'?
2. Assuming the devices will disconnect for whatever reason, e.g. due to network problems or to decrease energy consumption, how is it ensured that the devices will eventually receive the messages?
3. How will the devices connect to the broker?
Regarding 1: MQTT brokers are built to handle (and keep) a massive number of TCP connections. For example, VerneMQ, an MQTT broker I can speak to as I am one of the core devs, is able to handle over a million connections on one node (with proper server configuration it's actually mainly a matter of available RAM). However, we'd only recommend such a setup if the devices are mostly sleeping. Using VerneMQ you can also add more nodes to the cluster and balance the connections among all your cluster nodes.
Regarding 2: An MQTT broker typically implements offline storage for messages that haven't been sent out to a client or haven't been acknowledged by a client. This allows your device to go offline for hours and receive the messages upon reconnect. (For this to kick in, the client connects with a persistent session, i.e. CleanSession = false, and subscribes at QoS >= 1.)
Regarding 3: This is specific to your use case. In the simplest case you configure a fixed IP:port on every device, and the MQTT client running on the device uses it to connect to the broker. Depending on the ability to reconfigure the devices, it can make sense to use DNS lookups, or even to provide a 'backchannel' for reconfiguration.
For standards-compliant MQTT client software, have a look at Eclipse Paho. For an up-to-date list of available MQTT brokers, consult the list of MQTT brokers.
What might be the considerations for building a real-time screen-sharing service (somewhere close to SharedView or Live Meeting) on top of Windows Azure? Please share your thoughts.
For this, it is obvious that we have to create a custom TCP/IP server to which clients can connect and exchange (publish/retrieve) data in real time, over a custom protocol on top of TCP/IP.
I think Azure supports TCP/IP only for the web role as of now, on ports 80 and 443? Please share your thoughts.
Wow - almost 2 years old and no accepted answer! As Joannes stated, real time is going to be a challenge - you'll need to carefully evaluate what that means to you in terms of response time and latency.
Windows Azure Worker and Web Roles have evolved considerably since you asked this. You can now have up to 25 input (i.e. external-facing) endpoints in your deployment, spread across any combination of Web and Worker roles, and you define the port numbers - you're not limited to 80 and 443. You may also have up to 25 internal endpoints (used for inter-role communication).
Designing a desktop-sharing service to run in Windows Azure involves the same basic considerations as designing for Windows Server (that's what the Windows Azure VMs are running, after all - Windows Server 2008 R2). You'll need to deal with authentication and authorization, through your own custom solution or possibly with Access Control Services.
Ok, there "is" one thing you'll need to keep in mind: Windows Azure VMs are stateless, and you shouldn't assume a user will always connect to the same VM instance (there's no way to direct-access a specific instance of a Web or Worker role). So, you'll need to externalize any type of session-specific data (which is very easy, with both SQL Azure and Windows Azure Cache service both very simple to set up and use as session providers).
Low latency is still a tough case for cloud computing providers (Azure being no exception). I think that's going to be the toughest part of the design. Also, since the Nov '09 release, worker roles can have external entry points too (though I'm not sure about port limitations).
I'm trying to create a feedback system to which all messages get posted and are then published back to the correct subsystem. We are using queues quite heavily, and I want to make the subscriber code as clean as possible. I want to switch based on the message id I get in the feedback system and publish to the specific subscriber. I don't want to make a service for each subscriber to listen for messages. I was thinking I could set up a queue for each subscriber and a trigger to invoke a COM+ component, but I'm looking for a more modern way...
I was looking into NServiceBus, but it seems I'd need to make a service/executable/web service for each listening system (it's a little less work to make a C# dll and invoke a method), and I'm not sure if NServiceBus can handle dynamic endpoints based on a preloaded config (loaded from a db). WCF is also a choice - it can handle dynamic endpoints for sure.
What do you think is the best solution for the least amount of code, and scalable for new systems to subscribe?
Thanks
In case you are OK with online solutions, you could take a look at the latest .NET Services SDK for Windows Azure, which has a queue-based service bus: http://www.microsoft.com/azure/netservices.mspx It relies on WCF messages and supports routing, etc. There are some blog posts about this here: http://vasters.com/clemensv/default.aspx
Another framework you could try is MassTransit: http://code.google.com/p/masstransit/
It seems you're looking for a service host rather than a message broker. If so, Microsoft's recommended way is to host your WCF services in IIS. They can still use MSMQ as the transport, but the services themselves will be managed by IIS. IIS has evolved significantly since its early days as an HTTP server; now it's closer to an application server, with a choice of transports (TCP, MSMQ, HTTP), pooling, activation, lifetime policies, etc.
Although I find WCF+MSMQ+IIS somewhat overcomplicated, this is the price you pay to play on the Microsoft field.
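As a rough sketch of the WCF-over-MSMQ wiring (contract, service, and queue names are made up; under IIS the same endpoint would be declared in web.config instead of code):

using System;
using System.ServiceModel;

[ServiceContract]
public interface ISubscriberService
{
    // One-way fits queued transports: there is no reply channel over MSMQ.
    [OperationContract(IsOneWay = true)]
    void Handle(string message);
}

public class SubscriberService : ISubscriberService
{
    public void Handle(string message) => Console.WriteLine(message);
}

class Program
{
    static void Main()
    {
        // Self-hosted variant for illustration; the queue must already exist.
        var host = new ServiceHost(typeof(SubscriberService));
        host.AddServiceEndpoint(typeof(ISubscriberService),
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            "net.msmq://localhost/private/subscriberQueue");
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}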
For a nice and simple message broker, you can use ActiveMQ instead of MSMQ; it will give you message brokering as well as pub/sub. It's quite easy to work with in .NET - check out this link: http://activemq.apache.org/nms/
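A minimal NMS sketch (broker URL and topic name are placeholders):

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

// Connect to the broker; with a topic, every subscriber gets its own copy (pub/sub).
var factory = new ConnectionFactory("tcp://localhost:61616");
using (IConnection connection = factory.CreateConnection())
using (ISession session = connection.CreateSession())
{
    connection.Start();
    ITopic topic = session.GetTopic("feedback.subsystemA");

    // Subscriber side: react to messages as they arrive.
    IMessageConsumer consumer = session.CreateConsumer(topic);
    consumer.Listener += message =>
        Console.WriteLine(((ITextMessage)message).Text);

    // Publisher side: post a message to the topic.
    IMessageProducer producer = session.CreateProducer(topic);
    producer.Send(session.CreateTextMessage("order-42 processed"));

    Console.ReadLine();
}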