
Rethinking the way we build on the cloud: Layering the Cloud


To help manage this, you can split your cloud architecture into three layers:
  • Visible
  • Volatile
  • Persistent

Visible

This is the layer between the cloud and the rest of the world: mainly public DNS entries, load balancers, VPN connections and so on. These things are fairly static and consistent. If you have a website, you will want the same A records pointing at your static IP.
Things in the visible layer rarely change, and when they do it is for very deliberate reasons.
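One way to make the layering concrete is to tag every resource with the layer it belongs to, so tooling can treat each layer differently. This is a minimal, hypothetical sketch (the resource names and the `Layer` enum are illustrative, not a real cloud API); note that a database splits across layers, a point the persistent section returns to:

```python
from enum import Enum

class Layer(Enum):
    VISIBLE = "visible"
    VOLATILE = "volatile"
    PERSISTENT = "persistent"

# Illustrative inventory: each resource is assigned to exactly one layer.
RESOURCES = {
    "dns-a-record":   Layer.VISIBLE,
    "load-balancer":  Layer.VISIBLE,
    "web-server-01":  Layer.VOLATILE,
    "db-server":      Layer.VOLATILE,    # the DB *process* is volatile...
    "db-data-volume": Layer.PERSISTENT,  # ...but its state is persistent
}

def resources_in(layer):
    """Return the sorted names of all resources in a given layer."""
    return sorted(name for name, l in RESOURCES.items() if l is layer)

print(resources_in(Layer.VISIBLE))  # → ['dns-a-record', 'load-balancer']
```

With an inventory like this, a deployment tool could refuse to touch visible-layer resources without explicit approval while freely churning the volatile ones.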

Volatile

This is where the majority of your servers are. It’s called volatile because it tends to change a lot. The known state (how many servers, what versions of which software they run etc.) is changing frequently, perhaps even several times per hour.
Even in a traditional data centre your servers are being upgraded, having security patches applied, receiving new software deployments, gaining extra servers etc. On the cloud this is even more volatile when you use patterns such as the Phoenix server, where machines are rebuilt from scratch with new IP addresses etc.
You should be able to destroy the entire volatile layer and rebuild it from scratch without incurring any data loss.
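The destroy-and-rebuild property can be sketched in a few lines. This is a toy in-memory simulation, not a real provisioning tool: the store and server names are hypothetical, and the point is only that tearing down and re-provisioning the volatile layer leaves the persistent state untouched:

```python
# Persistent state lives outside the volatile layer and is never cleared.
persistent_store = {"orders.db": b"important data"}

# The volatile layer: servers keyed by name, rebuilt wholesale (phoenix style).
volatile_servers = {}

def rebuild_volatile(count, version):
    """Destroy every server and provision fresh ones at the given version."""
    volatile_servers.clear()
    for i in range(count):
        volatile_servers[f"web-{i:02d}"] = {"version": version}

rebuild_volatile(3, version="2.0")
assert len(volatile_servers) == 3          # fresh fleet is up
assert "orders.db" in persistent_store     # no data loss
```

If an assertion like the last one ever fails after a rebuild, some state was wrongly living in the volatile layer and belongs in the persistent one.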

Persistent

This is where all the important stuff is kept, all the stuff you can’t afford to lose.
Ideally only the minimum infrastructure to guarantee your persisted state is here, so for example, the DB server itself would be in the volatile layer but the actual state of the DB server, its transaction files etc. would be kept on some sort of robust storage that is considered ‘permanent’.
By organising your infrastructure around these three layers you are able to apply different qualities to each layer. For example, the persistent layer requires large investment in things like backup and redundancy to protect it, whilst this is completely unnecessary for the volatile layer. Instead, the volatile layer should be able to accommodate high rates of change, while you will want to maintain a considerably more conservative attitude towards the persistent and visible layers.
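The per-layer qualities described above could be captured as an explicit policy table that tooling consults. A minimal sketch, assuming made-up policy fields (`backup`, `change_rate`) purely for illustration:

```python
# Hypothetical per-layer operational policy: backup only where state lives,
# and high change tolerance only where rebuilds are safe.
POLICY = {
    "visible":    {"backup": False, "change_rate": "rare"},
    "volatile":   {"backup": False, "change_rate": "hourly"},
    "persistent": {"backup": True,  "change_rate": "rare"},
}

def needs_backup(layer):
    """Look up whether resources in a layer warrant backup investment."""
    return POLICY[layer]["backup"]

print(needs_backup("persistent"))  # → True
print(needs_backup("volatile"))    # → False
```

Encoding the policy this way keeps the trade-off visible: money spent backing up the volatile layer is wasted, and a persistent layer without backup is a liability.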
Stay tuned for the next part, where we analyse the issues with the traditional approach to environments on the cloud.
