
Designing Great Cloud Applications


I get strange looks when I talk to developers about the difference between developing an application against a product and developing an application against a service.  The application you write on premises targets a piece of software that you purchased, installed, and configured on hardware you privately own.  The application you write in the cloud targets a set of services that are available to you, and to the public, to exploit.  So let's explore how they differ.

When you write your application for your on-premises server, you have certain expectations of that hardware and software.  You can expect to connect successfully every time you log on.  You control which applications run on the server, so you expect the same level of performance every time.  You know and control the security context of your data.  Because you configured the software and hardware, you expect that if something goes wrong, all you have to do is look at a logged event and you can most likely figure out what happened.  For the most part you have a pretty good pulse on the behavior of your application on your privately owned, managed, and maintained server.

On the other hand, you also know that if you run out of compute on this server, you have to move to a different, more capable system.  And if it turns out that your server has more compute capacity than your application needs, you are wasting compute and now need to migrate other workloads onto the same server to better utilize what you have.  Lastly, once it is all up and running, you have to maintain the hardware and software.  These are some of the reasons you may be looking at the cloud in the first place, hoping to lower your overall cost of computing.

Now you decide to either move your application to the cloud or build it there.  In the cloud you have the same capabilities that you have on premises:  authentication services, application services (cache, messaging, etc.), database services, and availability services, to name a few.  But you also have significant new opportunities, such as the ability to scale on demand and the cost benefits of pay-as-you-go on commodity hardware.  If you desire, you can develop your application against these services much as you wrote your on-premises application, and for the most part it will work.

However, building a great cloud application requires a little more.

Maximizing the performance, scalability and manageability of cloud applications written for Windows Azure requires architecting and coding to exploit the unique features of the cloud platform.  Examples include:

  • Capturing application telemetry throughout your code so you can react proactively to the application's behavior in real time.
  • Exploiting scale-out of compute and storage nodes for greater scale, or utilizing multiple data centers for greater availability.  Designing your application to grow by deploying new "scale units" (groupings of compute, storage, and database) allows provisioning for both planned and unplanned growth.
  • Tolerating the failure of any single component while ensuring the app continues to run.  The best cloud applications in the world support this, but it requires you to write your application expecting a potential component failure and responding to that outage in real time without notable impact on the end user.
  • Leveraging cache for data retrieval whenever possible, and spreading database requests across a number of separate databases (shards) to support scalability and maximize performance.

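The fault-tolerance principle, expecting a component failure and responding without user impact, usually takes the form of retry logic with exponential backoff.  A minimal sketch, again in Python for illustration (the exception type and helper name are assumptions, not part of any Azure SDK):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient fault, e.g. a dropped connection."""

def with_retries(operation, max_attempts=4, base_delay=0.5):
    """Run operation(), retrying on transient failure with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Back off 0.5s, 1s, 2s, ... plus jitter so many clients
            # recovering at once do not stampede the failed component.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

The key design choice is distinguishing transient faults (worth retrying) from permanent ones (fail fast), which is exactly what the Data Access component's retry logic does for database commands and connections.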

To help jump-start the learning process of writing a great cloud application, this post points to seven reusable cloud components, with sample code and documentation, that you can leverage in almost any PaaS application you are developing on the Microsoft Windows Azure platform.

What we created:  The AzureCAT team (Microsoft's Customer Advisory Team in engineering) has been part of the design, build, and deployment of hundreds of Azure PaaS applications.  We learned a lot from this and decided to write and share the seven reusable components below to help others see what we mean by writing to a service.  We wrote them as a single end-to-end application to ensure the components work seamlessly together.  The application itself is rather simple: it registers users, logs them in, and adds comments.  It was built and tested at scale, which required adding telemetry and sharding code to ensure great performance.

The seven reusable components will benefit anyone who is writing a PaaS application.   The components are:



  • Configuration – configuration files are key to making management of your application seamless
  • Logging – the logging of application, event, and performance telemetry
  • Data Access – this is actually two components:  a) retry logic for database commands and connections, and b) sharding using a custom hashing algorithm
  • Caching – writing and reading user data to and from cache
  • Scheduling – a background worker role that collects telemetry data every minute and moves it into a custom ops database in SQL Azure
  • Reporting – reporting on the telemetry collected in the SQL Azure ops database
  • Application Request Routing (ARR) – technology in IIS for routing users to multiple hosted services (application workload load balancing)
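The sharding half of the Data Access component boils down to a stable hash that maps a key (such as a user ID) to one of N databases.  The sketch below is illustrative Python, not the component's actual code; the shard names are made up.  It uses MD5 rather than a language-native hash because the mapping must be identical on every web role and across restarts:

```python
import hashlib

def shard_for(key, shard_count):
    """Map a key (e.g. a user ID) to one of shard_count databases.

    MD5 gives a stable, well-distributed value regardless of process,
    so every role routes a given user to the same shard.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % shard_count

# Route a user's request to the right shard (names are hypothetical):
shards = ["shard0-db", "shard1-db", "shard2-db"]
target = shards[shard_for("user-42", len(shards))]
```

Note that a simple modulo scheme like this makes adding shards later expensive (most keys remap), which is one reason real sharding schemes often add an indirection table or consistent hashing.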

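The Caching component follows the cache-aside pattern: read from cache first, fall back to the database on a miss, and populate the cache on the way out.  A minimal sketch under stated assumptions — a plain dict stands in for the Azure cache service, and `load_fn` stands in for a database query:

```python
import time

class CacheAside:
    """Cache-aside reads: try the cache first, fall back to the
    backing store on a miss, then populate the cache so the next
    read is fast.  Entries expire after ttl_seconds."""

    def __init__(self, load_fn, ttl_seconds=60):
        self._load = load_fn
        self._ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]            # cache hit
        value = self._load(key)        # cache miss: query the database
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

The TTL matters: user data served from cache can be slightly stale, so the expiry is the knob that trades freshness against database load.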
