Services-specific resource architecture & constraints


While standardizing on one or more "VM" resource pools for server resource requirements is a necessary and important step, it is not sufficient. We need to examine the entire solution across a number of dimensions in order to deploy complex solutions safely and successfully in a virtualization-dominated datacenter world.

However, it would be too complex to introduce solution-specific resource definitions for each and every solution a given customer might deploy. We need a workable compromise that allows complex services to benefit from a virtualized, highly automated environment while still ensuring a deployment optimized for the solution's requirements. After reviewing a number of complex solutions, including SharePoint and Exchange, it appears that several dimensions must be expressed and designed into any resource architecture that will host complex services:
  • Hypervisor feature support (a better term might be shared-infrastructure features): dynamic memory, plus high-availability and disaster-recovery techniques such as live migration.
  • Placement rules: certain scenarios, such as Microsoft Exchange, call for a 1:1 relationship between an Exchange server and a physical host. While it is permissible to deploy another workload to the same physical host, placing a second Exchange server on that host is not supported. Although the product documentation does technically support placing additional Exchange servers onto the solution, the recommended deployment strategy, due to the nature of Exchange's built-in HA/DR architecture, is to deploy no more than one Exchange server per physical host.
  • Storage architecture: we need to be able to identify the storage type and architecture (e.g., DAS or SAN) for storage-intensive and storage-sensitive workloads such as Exchange or SQL Server. While this requirement obviously goes against the entire ideal of virtualization and standardization, the real world is unfortunately not quite that advanced today.
  • Storage IOPS: we also need to give storage-sensitive VMs optimized access to storage, primarily by guaranteeing IOPS. Currently the Hyper-V solution does not provide storage QoS, which would obviously be an elegant way to ensure the right level of IOPS support for any given workload.
  • Network performance: as with storage, complex solutions have very specific requirements on network performance. The good news is that Windows Server 2012 provides ways to manage network performance, either through QoS (ideal) or through SR-IOV (high performance).
  • Run-state change management of mixed-state environments: very common within existing complex services, this is especially difficult where mixed stateful and stateless settings span across VMs (through roles) and within VMs (through files and the registry). This aspect of complex-solution management is beyond the scope of this proposal, but it is something to consider: how can IaaS optimizations be leveraged for a solution's running state?
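The anti-affinity placement rule described above (at most one Exchange server per physical host, while other workloads may share the host) can be sketched as a simple check. This is an illustrative sketch only; the function name, data model, and workload labels are assumptions, not any real placement engine's API.

```python
def can_place(vm_workload, host_workloads, exclusive_workloads=("exchange",)):
    """Return True if a VM of type `vm_workload` may be placed on a host
    that is already running the workload types in `host_workloads`.

    Workloads listed in `exclusive_workloads` follow the 1:1 rule: no two
    VMs of that type may share a physical host. All other workloads are
    unconstrained by this particular rule.
    """
    if vm_workload in exclusive_workloads:
        # 1:1 rule: reject if the host already runs the same workload type.
        return vm_workload not in host_workloads
    return True


# Example: a host already running one Exchange VM and one SQL Server VM.
host = ["exchange", "sql"]
print(can_place("exchange", host))    # second Exchange VM on this host -> False
print(can_place("sharepoint", host))  # a SharePoint VM on this host -> True
```

A real placement engine would of course evaluate this alongside capacity, storage, and network constraints; the point is that the rule is a per-workload anti-affinity constraint, not a blanket host reservation.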
With these constraints in mind, six categories, or solution patterns, are emerging, based on close collaboration between the application workload architects driving the aforementioned PLAs and the infrastructure architects driving and defining the IaaS PLA:
  • The Messaging category: messaging is a major workload in most enterprises and carries a number of constraints and rules when deployed in an IaaS-type environment.
    · Hypervisor features: dynamic memory and hypervisor-level HA/DR features are disabled.
    · Placement: 1:1 Exchange server to physical host; VMs of other application types are acceptable on the same host.
    · Storage: DAS is the preferred recommendation because of presumed cost advantages and data segmentation.
    · Network: QoS is required.
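The Messaging pattern's constraints above could be captured declaratively, so that an automated deployment pipeline can validate a candidate host before placement. The sketch below is a hedged illustration under assumed field names and values; it is not an existing schema from any product.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SolutionPattern:
    """Illustrative, assumed encoding of a solution pattern's constraints."""
    name: str
    dynamic_memory: bool       # hypervisor dynamic memory permitted?
    hypervisor_ha_dr: bool     # hypervisor-level HA/DR (e.g. live migration) permitted?
    exclusive_per_host: bool   # 1:1 VM-to-physical-host rule for this workload?
    preferred_storage: str     # e.g. "DAS" or "SAN"
    network_qos_required: bool


# The Messaging category as described above.
MESSAGING = SolutionPattern(
    name="Messaging",
    dynamic_memory=False,      # disabled per the pattern
    hypervisor_ha_dr=False,    # Exchange brings its own HA/DR architecture
    exclusive_per_host=True,   # one Exchange server per physical host
    preferred_storage="DAS",
    network_qos_required=True,
)


def validate_host(pattern, host_storage, host_supports_qos):
    """Return a list of constraint violations for a candidate host."""
    issues = []
    if host_storage != pattern.preferred_storage:
        issues.append(f"storage is {host_storage}; pattern prefers {pattern.preferred_storage}")
    if pattern.network_qos_required and not host_supports_qos:
        issues.append("host lacks required network QoS")
    return issues


print(validate_host(MESSAGING, host_storage="DAS", host_supports_qos=True))   # no issues
print(validate_host(MESSAGING, host_storage="SAN", host_supports_qos=False))  # two issues
```

The value of the declarative form is that each of the six emerging solution patterns becomes one record, and the same validator serves them all.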
