
Enterprise Architecture and Cloud

Amazon Web Services

AWS is primarily Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). AWS has a wide range of services that let you configure entire complex, powerful, secure, scalable, and highly available IT environments consisting of private networks, gateways, load balancers, servers, storage, databases, monitoring, etc., all virtual and set up through configuration wizards or scripting. Moreover, AWS provides advanced services like containers, serverless computing, machine learning, and message queuing, giving you a head start with those technologies without the upfront platform investments.
The scope of AWS can range from hosting a single simple solution, like a web server, to a virtual, highly available hosting facility that fully replaces physical on-site hosting facilities. Hybrid solutions, where AWS acts as a cloud extension of on-premises IT, are also possible.
Amazon Web Services is one of the leaders in IaaS/PaaS. AWS has set the standard and keeps on setting it. But there are other parties, like Microsoft, Google, IBM, Rackspace, or specialty vendors, that deliver comparable services, and one of them may better fit your needs. In this post I will stick with terminology and examples from AWS.
What is the EA perspective on cloud computing? I'll answer using some TOGAF terminology.

From the bottom up

Clearly the different services of AWS provide architects, designers, and engineers with a number of raw, elementary solution building blocks (SBBs) at the Technology Architecture level. They are realizations of architecture building blocks (ABBs) like Private Network, Virtual Server, Load Balancer, etc., which at the solution level translate into Amazon VPC, Amazon EC2 instance, Elastic Load Balancing, etc.
Through further choices about the configuration of the building blocks, and by linking them together, you can compose more sophisticated ABBs describing patterns like a "highly available, load-balanced, extensible farm of web application servers accessible from the internet" that communicates with a "highly available database service running the database schema in a private subnet" using ODBC. Such building blocks can be used and reused for realizing specific solutions based on specific data and applications.
These high-level composite ABBs can be specialized into solution building blocks by selecting the matching components from the AWS catalog of services and configuring them according to the rules set in the corresponding ABBs, which focus on a higher level of abstraction and mainly provide requirements to the solution layer. Finally, these ABBs and SBBs can be combined with the ABBs and SBBs covering the applications and data that need to be loaded, to deliver a full solution landscape.
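To make the ABB-to-SBB specialization concrete, here is a minimal sketch of the idea (the class and catalog names are mine, purely for illustration, not any real AWS or TOGAF API): an abstract building block carries requirements, and specialization selects a matching concrete product from the vendor catalog.

```python
from dataclasses import dataclass, field

@dataclass
class ABB:
    """Architecture building block: an abstract capability plus its requirements."""
    name: str
    requirements: list = field(default_factory=list)

@dataclass
class SBB:
    """Solution building block: a concrete product realizing an ABB."""
    realizes: ABB
    product: str

# Abstract building blocks at the Technology Architecture level ...
virtual_server = ABB("Virtual Server", ["scalable", "monitored"])
load_balancer = ABB("Load Balancer", ["highly available"])

# ... specialized by looking up matching components in the AWS service catalog.
catalog = {
    "Virtual Server": "Amazon EC2 instance",
    "Load Balancer": "Elastic Load Balancing",
    "Private Network": "Amazon VPC",
}

def specialize(abb: ABB) -> SBB:
    """Select the catalog component that realizes the given ABB."""
    return SBB(realizes=abb, product=catalog[abb.name])

for sbb in (specialize(virtual_server), specialize(load_balancer)):
    print(f"{sbb.realizes.name} -> {sbb.product}")
```

In a real repository the catalog entries would of course carry configuration rules and constraints, not just product names.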
The organization-specific Enterprise Architecture provides baseline and target landscapes covering the solutions in scope of the request for architecture work. At the solution level, such landscapes can contain several AWS-based solutions, each consisting of a number of composite SBBs, as well as 'traditional' on-premises solutions.
The detailed engineering for a particular instance of one of the solutions in the landscape is done by solution architects, following the rules and standards documented in the ABBs and SBBs.

From the top down

The use of cloud, on-premises, or a hybrid solution to support the application landscape is a decision that Enterprise Architecture will propose based on information acquired in each of the early stages of the TOGAF Architecture Development Method (ADM).
In the Business Architecture stage, information about the new or changed target business model and processes becomes available. This yields knowledge about the customers, departments, employees, and business partners that are involved, as well as the kinds of processes and IT services required. This feeds into requirements about the capacity of the target-state processes and systems (and the level of uncertainty about required capacity), and into expectations about scalability, performance, business continuity, etc.
In the Information and Applications Architecture stage, this is worked out in more detail. The type and amount of information in scope becomes clearer: which applications are needed, how they support the business processes, which data is critical, what data loss is acceptable, which systems are critical, how long they can be unavailable, expected application performance, etc. Here some of the requirements identified in the Business Architecture stage become more specific, e.g. the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) for the different applications in scope.
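As a small illustration of how an RPO constraint translates into a concrete design decision, the sketch below (numbers are illustrative only) checks whether a proposed backup interval can meet an application's RPO:

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the time since the last backup,
    so the backup interval must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

# A business-critical application with an RPO of 1 hour cannot rely on
# nightly backups, but half-hourly snapshots would satisfy it:
print(meets_rpo(backup_interval_hours=24, rpo_hours=1))   # False
print(meets_rpo(backup_interval_hours=0.5, rpo_hours=1))  # True
```

The same kind of check applies to RTO: the time to detect, fail over, and restore must fit inside the agreed recovery window.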
In the Technology Architecture stage, a system landscape and its supporting infrastructure are worked out in more detail, based on the application architecture. Requirements from the Business Architecture and Information and Applications Architecture stages feed into requirements for the different components of the system landscape. It is at this level that requirements can lead to decisions about the types of platforms to be used and their configuration.
The baseline and target states are described in terms of ABBs and SBBs. Many of those building blocks already exist and can be found in the Architecture Repository, but there will be gaps: ABBs and SBBs not yet defined. These are worked out in detail as part of the architecture deliverable, so that they can be (re)used for describing architectures.

When and how will Enterprise Architects consider cloud/AWS?

Nowadays many modern applications are delivered primarily or exclusively as cloud solutions (SaaS), and this is increasingly true for development platforms (PaaS) as well. So if the (only) solution in scope is SaaS-based, you are forced to use cloud; it is a no-brainer. However, don't underestimate the impact on the technology landscape when these solutions must be integrated with systems that live elsewhere in the cloud or on-premises.
Next to this obvious case, there are good reasons to consider IaaS/PaaS or hybrid (partially IaaS/PaaS, partially on-premises) landscapes:
  1. If the new or changed business model and its business processes have many unknowns. E.g. if it is not clear how many customers will onboard on the new solution, or how fast, and you have no idea what the maximum number of users will be, while performance must be warranted regardless, it makes sense to architect the solution to rapidly scale up or scale out. The only way to do this on-premises is to architect for the worst case and max out every component, which results in investments in very expensive and highly underutilized platforms. In such cases it is better to look at environments that are built for rapid delivery of capacity and architected for scalability. On-premises virtualization, using VMware or Hyper-V, can help: you no longer look at the dynamics of single solutions but at the whole, and engineer for the composite maximum demand. In effect, by pooling resources you improve utilization efficiency without sacrificing scalability. The next level of this is IaaS, like AWS, where pricing models let cost evolve 1:1 with your usage, i.e. cost and value scale more or less linearly. So there is no upfront investment and virtually unlimited scalability.
  2. Where current on-premises facilities are not on par with what is needed, either because they lack capacity or because they lack capabilities (e.g. no dual datacenter, no fail-over facility). The upfront cost of building new computer rooms, and the time lost doing so, can be prohibitive. In that case you can look at hybrid solutions, e.g. extending your datacenter with a backup facility on AWS. That ensures your data is stored offsite and allows you to rapidly restore functionality using AWS resources in case of a disaster: you spin up servers only during disaster recovery, and since you pay for usage only, you don't incur much running cost for your DR facility. Or you can look at full IaaS and leverage the reliability and recovery features offered by providers like Amazon.
  3. If you don't have on-premises facilities, e.g. when you don't have an office or are starting up a new one. Setting up a computer room takes work, time, and capital, three things that dynamic companies like startups and scale-ups don't have, or at least don't want to spend on infrastructure. IaaS/PaaS requires no upfront investment, is readily available, and can be set up by specialist shops, service providers, consultants, etc. So you can be up and running in a short time and only pay for what you use. Be careful, though: don't underestimate these costs. Some years ago I compared the run cost of servers on different IaaS and virtualization platforms and tried to make a like-for-like comparison of TCO. I saw no significant differences in the run rate per instance; this may have changed over the last few years. Nevertheless, be aware that small rates times massive amounts of TB storage, GB data movement, etc. still add up to significant money, so some financial planning is required. The main differentiator is the spending pattern (the need to invest up front in on-premises overcapacity that is then underutilized for more than 95% of the time), and this of course can matter, as stated before. So the business case for total (lift & shift) replacement of existing on-premises datacenter facilities by IaaS may be a difficult one to make. It is different, however, when you are facing an expensive modernization project to keep your facilities up to date and on par with evolving business needs.
  4. If you want to make a start with new technologies. Sometimes it is important to get your hands dirty with something new, like machine learning, containers, or IoT, to see if it fits a business scenario you want to explore. Advanced IaaS/PaaS vendors, or specialist vendors, can offer entire platforms that you can spin up in hours instead of months. So you can get started with your specific business scenario very fast, without investing in platforms that may in the end not be right for you. If you fail, at least you fail at a low cost and don't get stuck with systems collecting dust because you can't use them. Architects need to keep this in mind now that companies need to be more agile and the speed of change is increasing.
  5. Your favorite reason. There are surely more good scenarios where cloud or hybrid solutions help architects address business priorities. Please share your favorite examples as a comment on this blog post.
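The cost reasoning behind scenarios 1 and 3 can be sketched with simple arithmetic (all rates and the demand profile below are hypothetical, not actual AWS pricing): on-premises you provision for the peak and pay for it around the clock, while pay-per-use cost tracks actual demand.

```python
# Hourly demand profile for one day: mostly quiet, with a sharp 4-hour peak
# (values are the number of servers needed that hour).
demand = [2] * 20 + [20] * 4  # 24 hours

RATE_PER_SERVER_HOUR = 0.10   # hypothetical pay-per-use rate

# On-premises: capacity is sized for the peak and paid for all 24 hours.
on_prem_cost = max(demand) * len(demand) * RATE_PER_SERVER_HOUR

# IaaS pay-per-use: cost follows demand hour by hour.
iaas_cost = sum(demand) * RATE_PER_SERVER_HOUR

utilization = sum(demand) / (max(demand) * len(demand))

print(f"peak-provisioned cost: {on_prem_cost:.2f}")  # 48.00
print(f"pay-per-use cost:      {iaas_cost:.2f}")     # 12.00
print(f"on-prem utilization:   {utilization:.0%}")   # 25%

# Scenario 3's warning works the same way in reverse: a small per-GB rate
# times a large volume is still real money (hypothetical $0.02 per GB-month,
# 200 TB stored).
storage_cost = 200 * 1024 * 0.02
print(f"monthly storage cost:  {storage_cost:.2f}")  # 4096.00
```

The spikier the demand profile, the lower the on-premises utilization and the stronger the pay-per-use argument; with a flat profile the two models converge and the TCO comparison becomes much closer.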
It is only fair to mention that choosing IaaS/PaaS, and implementing it, must be done cautiously, taking into account a whole range of factors that determine IF you can use it, or that put restrictions on HOW you can use it. A few of the more important ones are:
  • Operational Technology. Modern process control systems for chemical manufacturing, assembly lines, etc. use "standard" IT technologies, like Ethernet, TCP/IP, Wintel hardware, Windows, Linux, Oracle, etc., but in ways, and with requirements, that are quite different from usual. For instance, there may be real-time requirements demanding predictably short delays in the delivery of network packets carrying commands between control systems and actuators. This requires tight control over what happens on the network connecting control systems and devices like PLCs, which cannot be guaranteed when using public infrastructure like the Internet. As a consequence, it is sometimes necessary to keep parts of the IT landscape on-premises and end up with a hybrid solution.
  • Privacy and control. Governments, like the European Union, Russia, and China, are increasingly imposing restrictions on where information (e.g. health records, personnel files, LinkedIn profiles) can be managed, to protect the privacy of citizens, among other reasons. Such regulatory requirements may not block you from using SaaS/PaaS/IaaS, but you have to comply with the regulations, for instance by making sure that server and storage instances are located in certain geographies.
  • Big data transport. Amazon delivers several platforms for big data, analytics, and advanced analytics services, and you can install any solution in this area on plain IaaS instances. The one thing you should not forget is that some or most of the data you need to store and analyze comes from source locations outside of Amazon. You have to address the network security aspects of moving data between environments, partially across public networks. Also be aware that moving large datasets around takes time, and that for part of those movements it is not bandwidth but network latency that limits the rate at which you can move the data. This may mean moving your analytics solution to the edge, i.e. co-locating it with the data source on a high-speed network, or, if you have the option, choosing IaaS hosting locations with lower latency to the data sources. You have to recognize this and address it in the design, and in the expectations you set on what the solution can achieve when it is built on a cloud platform.
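The bandwidth-versus-latency point above can be made concrete with some back-of-the-envelope arithmetic (illustrative numbers): bulk transfers are limited by bandwidth, while chatty, request-by-request access is limited by round-trip latency no matter how fat the pipe is.

```python
def bulk_transfer_hours(dataset_gb: float, bandwidth_mbps: float) -> float:
    """Time to move a dataset over a link, ignoring protocol overhead."""
    megabits = dataset_gb * 8 * 1000
    return megabits / bandwidth_mbps / 3600

def chatty_transfer_hours(requests: int, rtt_ms: float) -> float:
    """Time spent on round trips alone when data is fetched one request at a time."""
    return requests * rtt_ms / 1000 / 3600

# Moving 10 TB over a 1 Gbps link takes roughly a day of pure transfer time...
print(f"{bulk_transfer_hours(10_000, 1000):.1f} h")     # 22.2 h

# ...while a million small requests at 50 ms RTT cost ~14 hours in latency
# alone, independent of bandwidth.
print(f"{chatty_transfer_hours(1_000_000, 50):.1f} h")  # 13.9 h
```

This is why co-locating the analytics with the data, or batching chatty access patterns into bulk transfers, often matters more than buying a bigger link.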

Cloud strategy

Enterprise architects specifically face the challenge of thinking about IaaS and PaaS in a more structural way: not only supporting immediate needs of particular solutions, but also treating them as elements of the enterprise architecture in the long run.
A cloud strategy is important to ensure a controlled evolution of the technology landscape and to prevent a sprawl of opportunistic spot solutions. It does so by putting in place a capable environment that addresses the typical needs of dealing with IaaS and PaaS, and by providing a decision framework that guides architects and developers in deciding what to host where, and how.
Some key aspects to address in a cloud strategy are:
  • Criteria for deciding what to put where
  • Standards and patterns: give architects a head start by providing higher-level building blocks (ABBs, SBBs) that can be used as starting points for developing specific solutions
  • Security. How to protect data, processing platforms, and data flows across locations and networks, and how to manage security in a holistic way. Make sure solutions are in place that enable this management.
  • Integration. Especially in hybrid scenarios, where data and systems are spread across several locations (e.g. on-premises, on cloud IaaS/PaaS, and on other SaaS platforms), integration can become challenging. For larger environments it is important to have an integration strategy and to line up the systems and services for integrating across locations, like message queuing, an ESB, and ETL, taking the burden of building these basic and common services away from individual projects.
  • Systems administration and monitoring. With a hybrid environment of IaaS/PaaS, several SaaS solutions, and on-premises platforms, it is important to have a holistic design of the organization, processes, and technologies for managing and monitoring systems.
  • Identity and access management. In larger cloud and hybrid environments you need solutions and standards in place for managing identities, access, and authorizations.
  • Business continuity and disaster recovery. You need to address the specific challenges, and leverage the benefits, that result from your cloud or hybrid environment.
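As an example of the kind of reusable standard the identity-and-access bullet calls for, here is a sketch that generates a least-privilege IAM policy document in Python (the bucket name is hypothetical; the policy grammar of Version, Statement, Effect/Action/Resource follows the standard IAM JSON policy language). A cloud strategy might provide such generators so individual projects don't hand-write policies:

```python
import json

def read_only_bucket_policy(bucket: str) -> str:
    """Build an IAM policy document granting read-only access to one S3 bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Read-only actions: fetch objects and list the bucket contents.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # the bucket itself (for ListBucket)
                    f"arn:aws:s3:::{bucket}/*",    # all objects in it (for GetObject)
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

# Hypothetical bucket name, for illustration only.
print(read_only_bucket_policy("example-reports-bucket"))
```

Generating policies from reviewed templates like this keeps authorizations consistent across projects and makes them auditable as code.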

In summary

Enterprise Architects are responsible for addressing cloud in a structural way: helping the business understand what cloud is, how to leverage the benefits and manage the risks, and structurally addressing the issues that arise from cloud and hybrid scenarios.
Methodologies like TOGAF provide concepts and methods that help work out your cloud strategy and systematize the use of cloud.

