Azure Storage Count Transactions - C#

Can you please explain the best way to estimate the number of transactions on Windows Azure storage when using the development environment? I thought about implementing an int variable and incrementing it (e.g. i++) each time I make a call to Azure storage. What do you think? Have you done such a thing before? I just need an estimated amount of transactions...

There's the Windows Azure Storage Services REST API, which contains a full API stack for Storage Analytics: http://msdn.microsoft.com/en-us/library/windowsazure/hh343270.aspx. Hope this helps (of course, you can also use the native monitoring in the portal for starters).

@techmike2kx gave you the REST API info, so let me address your other question regarding the use of a local transaction counter. That approach won't really help you, for a few reasons:
If you have multiple instances of your app running (e.g. 2 web role instances), you'd need a single counter across instances, which means you're now synchronizing them or accumulating per-instance numbers somewhere. And you'll probably store those instance-specific counters in something like table storage, which will itself generate additional transactions.
What if you use a disk attached to your VM? That will generate transactions, since the vhd is stored in a blob, and you'll have no visibility into them.
Your storage account could be used by multiple apps. How will you track that?
Your storage account could be used for logging and diagnostics, and you don't have much control over how those calls are made.
You'd need to track unsuccessful transactions separately, since those are not billed (which transactions are billable is documented).
Some calls result in multiple storage transactions. For instance: if you query table storage and exceed what can be returned in a single transaction, you'll end up with multiple calls to storage under the hood (hidden by the language-specific SDK you're using).
What happens when you serve web content directly from blobs (e.g. http://mysite.blob.core.windows.net/images/logo.jpg)? You'd have no control over that access, so no way to track it.
When will you roll your counter back to zero? How will you know the exact month-end of your billing cycle?
I'm sure there are other gotchas, but the bottom line is: you shouldn't be trying to track transaction consumption yourself, since it's given to you via Storage Analytics.
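If you do want to pull those numbers programmatically rather than reading them in the portal, the analytics metrics land in system tables you can query like any other table. Here's a minimal sketch using the classic Microsoft.WindowsAzure.Storage SDK, assuming metrics are already enabled on the account; the table name, row keys, and the TotalBillableRequests property follow the documented analytics schema, but verify the details against the docs:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class MetricsReader
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-connection-string>");
        var tableClient = account.CreateCloudTableClient();

        // Hourly transaction metrics for the blob service live in this system
        // table (similar tables exist for the Table and Queue services).
        var metrics = tableClient.GetTableReference("$MetricsHourPrimaryTransactionsBlob");

        // PartitionKey is the hour bucket, e.g. "20140522T1100".
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.GreaterThanOrEqual, "20140501T0000"));

        long billable = 0;
        foreach (var row in metrics.ExecuteQuery(query))
        {
            // The "user;All" rows aggregate all user requests for that hour.
            if (row.RowKey == "user;All")
                billable += row.Properties["TotalBillableRequests"].Int64Value ?? 0;
        }
        Console.WriteLine("Billable transactions since May 1: {0}", billable);
    }
}
```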


How can I programmatically retrieve performance data (CPU, memory usage) of an Azure Cloud Service?

I'm looking for a way to retrieve performance data of an Azure cloud service. Specifically, I need CPU and memory usage statistics of the last 5/30/60 minutes.
Googling around, I found that this can be done by accessing Azure's default performance counters, but the documentation seems scarce and ambiguous as to how to do this programmatically. Also, I need to do it without making any manual configuration changes to the service after deployment.
Anybody got any idea?
Best regards,
Remus
Ideas? Yes. Will it fit your use case? I do not really know. What do you need to do with the data?
Have you thought about integrating Application Insights? https://azure.microsoft.com/en-US/documentation/articles/app-insights-cloudservices/ It allows collecting (custom) performance counter telemetry (https://azure.microsoft.com/en-US/documentation/articles/app-insights-cloudservices/#performance-counters).
If you need to do more than just see/monitor these counters, you can enable continuous export to a SQL database and collect the data in code from there. You can also define alerts based on certain values.
They are also working on a REST API, so you could get the raw data from there for further processing; see https://visualstudio.uservoice.com/forums/357324-application-insights/suggestions/4999529-make-data-accessible-via-apis-for-custom-processin.
It might be a bit overkill to use AI for your specific scenario, however, since you only need the data for the last hour.
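If you do go down the Application Insights route, here's a minimal sketch of sampling a performance counter and pushing it as custom metric telemetry; TelemetryClient and TrackMetric come from the Microsoft.ApplicationInsights package, while the counter and metric name here are illustrative assumptions:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.ApplicationInsights;

class PerfTelemetrySample
{
    static void Main()
    {
        // Assumes an instrumentation key is already configured
        // (ApplicationInsights.config or TelemetryConfiguration).
        var telemetry = new TelemetryClient();

        // Standard Windows performance counter; the first read of a processor
        // counter returns 0, so sample twice with a short delay in between.
        using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();
            Thread.Sleep(1000);

            // "Machine CPU %" is an arbitrary metric name for this sketch.
            telemetry.TrackMetric("Machine CPU %", cpu.NextValue());
        }

        telemetry.Flush(); // push telemetry before the process exits
    }
}
```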
You can use the Kudu API to get the CPU and memory usage of the w3wp processes running in your cloud service.
To access the Kudu service from a browser, go to https://[your-web-site-name].scm.azurewebsites.net.
There, in the Process Explorer tab, you can see CPU and memory information about the w3wp processes.
If you want to do it programmatically, you can build an HTTP client and access the data, for example:
GET https://[your-web-site-name].scm.azurewebsites.net/api/processes/ - to get all processes.
GET https://[your-web-site-name].scm.azurewebsites.net/api/processes/[process id] - to access each process and get its details.
For credentials, look in your publish profile and take the userName and userPWD values.
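A minimal sketch of calling that endpoint with HttpClient and basic authentication (the site name and credentials are placeholders to fill in from your publish profile):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class KuduProcessQuery
{
    static void Main()
    {
        // Placeholders: take userName/userPWD from the site's publish profile.
        const string site = "your-web-site-name";
        const string user = "$your-web-site-name";
        const string pwd = "publish-profile-password";

        using (var http = new HttpClient())
        {
            // Kudu accepts basic authentication with the deployment credentials.
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes(user + ":" + pwd));
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            // Returns a JSON array describing the running processes.
            var json = http.GetStringAsync(
                "https://" + site + ".scm.azurewebsites.net/api/processes/").Result;
            Console.WriteLine(json);
        }
    }
}
```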
A nice example can be found at http://chriskirby.net/blog/running-your-azure-webjobs-with-the-kudu-api

What is "Usage" in relation to Microsoft Azure billing

This might be a stupid question, but I have to ask. I've never used Azure before, but a client is looking to move some SQL databases and their web server to the cloud. On the Azure site they refer to billing for usage per hour.
If I create 10 SQL databases, is usage the actual amount of time they were used by the application, or am I charged for the amount of time I had the database instances themselves? Same with a web application: if the web application goes 2 weeks without any web traffic, does that still count as usage, since I have the app live in Azure? If the app is not used then the databases wouldn't be either, so both would be idle for the moment.
I guess I'm just confused as to what the word "usage" is actually referring to.
The meaning of "usage" in Azure varies based on the type of resource. For some items, usage is calculated in terms of consumed hours (websites, virtual machines, etc.), whereas for others it is calculated in terms of consumed space (Azure storage is a good example of that).
Also, please note that pricing is based on provisioning, not utilization (e.g. how many times a website got hit). So in your example, if a website is provisioned for you, you will pay for it irrespective of whether anybody is actually using it.
I would recommend taking a look at the Azure Pricing Calculator to understand approximately how much you are going to pay by resource type.

Implementing a simple local memory cache on an Azure instance

I'm looking for a simple way to implement a local memory store which can be used on an Azure .NET instance.
I've been looking at Azure Co-located Caching and it seems to support all of my requirements:
Work on both web roles and worker roles
Implement a simple LRU
Keep cached objects in memory (RAM)
Allow me to define the cache size as a percentage of the machine's total RAM
Keep the cache on the same machine of the web/worker role (co-located mode)
Allow me to access the same cache from multiple AppDomains running on the same machine (Web Roles may split my handlers into different AppDomains)
The only problem I have with Azure Co-located caching is that different instances communicate and try to share their caches - and I don't really need all that.
I want every machine to have its own separate in-memory cache. When I query this cache, I don't want to waste any time on making a network request to other instances' caches.
Local Cache config?
I've seen a configuration setting in Azure Caching to enable a local cache, but it still seems like machines may communicate with each other (i.e. during a cache miss). The config also requires a ttlValue and objectCount, and I want the TTL to be "forever" and the object count to be "until you fill the entire cache". Specifying maxInt in both cases feels wrong.
What about a simple static variable?
When I really think about it, all this Azure caching seems like overkill for what I need. I basically just need a static variable at the application/role level... except that doesn't work for requirement #6 (different AppDomains). Requirement #4 is also a bit harder to implement in this case.
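For comparison, here's a minimal sketch of the static-variable route using System.Runtime.Caching.MemoryCache, which can at least cover requirement #4 via its physical-memory limit; the cache name and the 20% figure are arbitrary examples, and note that MemoryCache is per-AppDomain, so requirement #6 still isn't met:

```csharp
using System.Collections.Specialized;
using System.Runtime.Caching;

static class LocalCache
{
    // One cache per AppDomain; the 20% physical-memory cap mirrors
    // requirement #4 and is an arbitrary example value.
    private static readonly MemoryCache Cache = new MemoryCache(
        "local",
        new NameValueCollection { { "physicalMemoryLimitPercentage", "20" } });

    public static void Put(string key, object value)
    {
        // No absolute expiration: entries live until memory pressure evicts them.
        Cache.Set(key, value, new CacheItemPolicy());
    }

    public static object Get(string key)
    {
        return Cache.Get(key);
    }
}
```

One caveat: MemoryCache's eviction under memory pressure is approximate rather than a strict LRU, so requirement #2 is only loosely satisfied.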
Memcached
I think good old memcached does exactly what I want. The problem is that I'm using Azure as a PaaS and I don't really want to administer my own VMs. I don't think I can install memcached on my roles... [UPDATE] It seems it is possible to run memcached locally on my roles. Is there a more elegant "native" solution that doesn't use memcached itself?
You can certainly install memcached on web and worker roles. Steve Marx blogged about getting memcached running on an Azure cloud service several years ago, before the Virtual Machine features were present. It's an older post, so you may run into other ways of dealing with this, such as using startup tasks instead of the OnStart method in RoleEntryPoint, etc.
I have used the "free" versions of SQL Server for local caching and they have worked great. It depends on what you are doing, but I have run both SQL Server Express and Compact for storing entire small static data sets for a fantasy football site I wrote that included 5 years of statistics. They worked really well even on small/medium Azure instances, because of the small footprint.
http://blogs.msdn.com/b/jerrynixon/archive/2012/02/26/sql-express-v-localdb-v-sql-compact-edition.aspx
The best part is you can use T-SQL. Your cache requirements might be more complex, or might not scale to this.

How to build a highly scaleable global counter in Azure?

I am trying to set up in Windows Azure a global counter which would keep track of the number of games started within a day. Each time a player starts a game, a web service call is made from the client to the server and the global counter is incremented by one. This would be fairly simple to do with a database... but I wonder how I could do it efficiently. The database approach is good for a few hundred clients simultaneously, but what happens if I have 100,000 clients?
Thanks for your help/ideas!
A little over a year ago, this was a topic in a Cloud Cover episode: Cloud Cover Episode 43 - Scalable Counters with Windows Azure. They discussed how to create an "apathy button" (similar to the Like button on Facebook).
Steve Marx also discusses this in detail in a blog post with source code: Architecting Scalable Counters with Windows Azure. In that solution they do the following (sketched in code after the list):
On each instance, keep track of a local counter
Use Interlocked.Increment to modify the local counter
If the counter changed, save the new value to table storage (have a timer do this every few seconds). You'll have one record per deployment/instance in the counters table.
To display the total count, take the sum of all records in the counters table.
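A minimal sketch of that pattern, assuming the classic Microsoft.WindowsAzure.Storage SDK; the table name, entity shape, and instance id are illustrative, not taken from Steve Marx's actual source:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class CounterEntity : TableEntity
{
    public long Count { get; set; }
}

public class ScalableCounter
{
    private long _count;      // local, per-instance counter
    private long _lastSaved;
    private readonly CloudTable _table;
    private readonly string _instanceId;
    private readonly Timer _timer;

    public ScalableCounter(string connectionString, string instanceId)
    {
        _instanceId = instanceId;
        var account = CloudStorageAccount.Parse(connectionString);
        _table = account.CreateCloudTableClient().GetTableReference("counters");
        _table.CreateIfNotExists();

        // Flush to table storage every few seconds, but only when changed.
        _timer = new Timer(_ => Flush(), null,
            TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    public void Increment()
    {
        Interlocked.Increment(ref _count);
    }

    private void Flush()
    {
        var current = Interlocked.Read(ref _count);
        if (current == _lastSaved) return;   // nothing new to save

        // One row per instance: the grand total is the sum over all rows.
        var entity = new CounterEntity
        {
            PartitionKey = "games-started",
            RowKey = _instanceId,
            Count = current
        };
        _table.Execute(TableOperation.InsertOrReplace(entity));
        _lastSaved = current;
    }
}
```

To display the grand total, query the "games-started" partition and sum the Count values across all rows.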
Well, there are a bunch of choices. And I don't know which is best for you. But I'll present them here with some pros and cons and you can come to your own conclusions given your requirements.
The simplest answer is "put it in storage." Both SQL Azure and the core Azure table or blob storage options are available to you. One issue to contend with is performance in the face of large-scale concurrency, but I'd also encourage you to think about correctness. You really want something that supports atomic increment to outsource this problem, IMO.
Another variation of a storage-oriented option would be a highly available VM. You could spin up your own VM on Azure, back a data drive with Azure Drives, and then use something on top of the OS to do this (a database server, an app that uses the file system directly, whatever). This would be more similar to what you'd do at home, but it has fairly unfortunate trade-offs: your entire cloud now relies on the availability of this one VM, and cost and the scalability of the solution are further things to think about.
Splunk is also an option to consider, if you look at VMs.
As an earlier commenter mentioned, you could compute the count off of log data, but that would likely not be very real-time.
Service Bus is another option to consider. You could pump messages over SB for these events and have a consumer that reads them and emits a "summary." There are a bunch of design patterns to consider if you look at this. The SB stack is pretty well documented. Another interesting element of SB is that you might be able to trade off 100% correctness for perf/scale/cost. This might be a worthy trade-off for you depending upon your goals.
Azure also exposes queues which might be a fit. I'll admit I think SB is probably a better fit but it is worth looking at both if you are going down this path.
Sorry I don't have a silver bullet but I hope this helps.
I would suggest you follow the pattern described in the .NET Multi-Tier Application tutorial. It decouples the web role, which faces your clients, from the worker role, which stores the data to a persistence medium (either SQL Server or Azure storage), by using Service Bus.
This is also an efficient model to scale, since you can spin up new instances of the web role, the worker role, or both. For the dashboard, depending on the load, you can cache your data periodically and serve it from the cache. This compromises the accuracy of the data, but still provides an option for easy scaling. You can even invalidate the cache every minute and reload it from the persistence medium to get the latest value.
As for whether to use SQL Server or Azure storage: if there is no need for relational capabilities like JOINs, you can very well go for Azure storage.
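A minimal sketch of the web-role side of that pattern with the classic Service Bus SDK (Microsoft.ServiceBus.Messaging); the queue name and event type are made up for illustration, and the queue is assumed to exist already:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

public class GameStartedEvent
{
    public string PlayerId { get; set; }
    public DateTime StartedAtUtc { get; set; }
}

public static class GameEventPublisher
{
    // "game-starts" is an assumed queue name; create it beforehand.
    private static readonly QueueClient Client = QueueClient.CreateFromConnectionString(
        "<service-bus-connection-string>", "game-starts");

    public static void Publish(string playerId)
    {
        // The worker role drains this queue and persists/aggregates the counts.
        Client.Send(new BrokeredMessage(new GameStartedEvent
        {
            PlayerId = playerId,
            StartedAtUtc = DateTime.UtcNow
        }));
    }
}
```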

Design Real-Time Product Stock Management

I need to design a real-time product stock management engine (C# & WCF), but I don't know how to proceed in order to handle concurrent access and data integrity.
Here are some of the features the engine should handle:
Stock Incoming products
Order preparation
Move products from one place to another
...
Should I use MSMQ to ensure a correct stock count (messages processed in order by polling), or should I use application-level thread locking?
Note that my application has to be real-time: the preparer has to know at all times how many products are in stock. If products are lacking at picking time, he can send a "request" to an operator.
Use a SQL database. They are already designed with data integrity, concurrency and data storage in mind.
You should probably use a SQL database, as Lee says. If you use a transaction to, for example, store an order and decrease the available product count (both in the same transaction), the database guarantees atomicity. You probably also want some kind of concurrency mechanism (like a row version) to prevent inconsistent values (the 1st process reads, the 2nd process updates the same value, then the 1st process updates too, overwriting the previous update based on outdated values).
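A minimal sketch of that idea in ADO.NET, using a conditional UPDATE so the read-check-write happens as one atomic statement and stock can never go negative; the table and column names are assumptions for illustration:

```csharp
using System.Data.SqlClient;

public static class StockRepository
{
    // Returns true if the stock was decremented, false if not enough stock.
    public static bool TryReserve(string connectionString, int productId, int quantity)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"UPDATE Products
              SET Stock = Stock - @qty
              WHERE ProductId = @id AND Stock >= @qty", conn))
        {
            // The WHERE clause guards the decrement, so two concurrent
            // orders can't both take the last item.
            cmd.Parameters.AddWithValue("@id", productId);
            cmd.Parameters.AddWithValue("@qty", quantity);

            conn.Open();
            return cmd.ExecuteNonQuery() == 1;  // 1 row updated = success
        }
    }
}
```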
The scenario you've described is generally one where a queue is used rather than persistent storage in order to meet the throughput needs. Searching the net, you can find a lot of case studies where people have employed queuing systems to enhance the throughput of a system. SQL Server simply cannot scale to those levels.
In the special cases where you need to make your queue persistent, very specific techniques are used to mitigate the resulting performance hit. For example, Apache's ActiveMQ has its own file-based storage system which performs much better than simply using MySQL for backend persistence. MSMQ probably provides a similar option, but I am not sure.
