Yes Cloud!

Archive for April 2009

 “To cloud or not to cloud?” is not up for debate – clouds were here before, are here now and will always be here, albeit under different labels to fit the market inflections of each era.  The benefits to enterprise customers of all sizes have also been well established: lower cost via a pay-as-you-grow pricing model, plus higher flexibility and agility.  Because these business benefits also applied to traditional hosted datacenter/application providers, it is important to identify the newer elements that make up modern clouds.

As described in A Simplified Framework for Clouds, modern clouds may be characterized along at least two dimensions: infrastructure or application, and public or private.  All clouds, modern or not, need traditional elements such as scale, resiliency, multi-tenancy and automated provisioning, though modern clouds may impose fresh requirements on these elements to address today’s business needs. In contrast, the canonical elements discussed below are enablers for modern clouds and are not typical of legacy hosted datacenter/application environments:

  • Computing: server virtualization, live application migration (LAM)
  • Networking: modern network fabric
  • Security: policy-based segmentation
  • Application: federated single sign-on (SSO)

1) Server virtualization, a must-have element of infrastructure/compute clouds (public or private) that (a) maximizes the efficiency of server computing by partitioning each physical server into many virtual machines, and (b) decouples the OS – and thereby applications – from physical servers, allowing applications to be portable (the application+OS is packaged as a machine-executable “file” that can be moved, copied, stored, carried, deleted…).  With the deployment of dense multi-core blade servers, hundreds of virtual machines can be instantiated per server rack.  Such massive server scale, along with application mobility, enables unparalleled compute elasticity: the number of virtual machines allocated to a given application workload can be scaled up or down rapidly.
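As a back-of-the-envelope sketch of that elasticity, consider sizing a virtual machine pool against a fluctuating workload. The consolidation ratios and per-VM capacity below are illustrative assumptions, not vendor figures:

```python
import math

# Assumed densities for illustration only.
VMS_PER_BLADE = 20      # hypothetical consolidation ratio per blade
BLADES_PER_RACK = 16    # hypothetical blade chassis density per rack

def rack_capacity_vms() -> int:
    """Total virtual machines one server rack can host under these assumptions."""
    return VMS_PER_BLADE * BLADES_PER_RACK

def vms_needed(requests_per_sec: float, capacity_per_vm: float = 50.0) -> int:
    """Scale the VM count up or down to match the current workload."""
    return max(1, math.ceil(requests_per_sec / capacity_per_vm))

print(rack_capacity_vms())   # 320 VMs per rack under these assumptions
print(vms_needed(1200))      # 24 VMs for a 1200 req/s workload
print(vms_needed(100))       # 2 VMs once demand drops
```

The point is not the specific numbers but the shape of the operation: because a VM is just a file plus a scheduler entry, moving between 2 and 320 instances is a provisioning decision rather than a hardware procurement.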

2) Live application migration (LAM), an exciting element of infrastructure/compute clouds (public or private) that enables live (in-service) migration of application workloads from one virtual machine to another using technologies such as VMware’s vMotion and Citrix XenServer’s XenMotion.  The immediate benefit, of course, is that servers and operating systems can be upgraded or retired without bringing down the application itself.  Extending this further, applications could run anywhere in a cloud through policies based on dynamic context, such as temperature (follow-the-mercury), cost (follow-the-price), time (follow-the-sun), availability (follow-the-uptime), capacity (follow-the-GHz)…  For instance, an application workload could move from a hotter rack to a cooler rack, or from a higher-cost data center to a lower-cost one during non-business hours.  Ultimately, application workloads could migrate live from one cloud to another, or expand across multiple clouds during peak demand (e.g. from a private cloud to private + public clouds).  As the migration footprint widens, crossing heterogeneous administrative domains, multiple new business and technology challenges emerge:

  • security (should intellectual property data be served from a piracy-prone region?);
  • compliance (can personal information move outside jurisdictional boundary?);
  • eDiscovery (will a secondary, sub-contracting cloud provider block enterprise’s access to enterprise’s own electronically stored information?);
  • cross-cloud interoperability – provisioning, consistent policy for LAM, SLA ownership, trouble-shooting etc. (who is the responsible party?).

Because of these issues, cross-cloud (inter-cloud) or even cross-datacenter LAM, while hugely exhilarating, will take some time to become practical (for more inter-cloud meandering, see A Hitchhikers Guide to the Intercloud).
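The “follow-the-mercury / follow-the-price” policies above can be sketched as a simple placement decision. The data center attributes and thresholds here are invented for illustration; a real LAM scheduler would draw them from live telemetry:

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    temp_c: float         # ambient temperature (follow-the-mercury)
    cost_per_hour: float  # compute cost (follow-the-price)
    available: bool       # uptime status (follow-the-uptime)

def place_workload(dcs, max_temp_c: float = 30.0) -> DataCenter:
    """Pick the cheapest available data center under the temperature cap."""
    candidates = [d for d in dcs if d.available and d.temp_c <= max_temp_c]
    if not candidates:
        raise RuntimeError("no data center satisfies the placement policy")
    return min(candidates, key=lambda d: d.cost_per_hour)

dcs = [
    DataCenter("dc-east", temp_c=24.0, cost_per_hour=0.12, available=True),
    DataCenter("dc-west", temp_c=33.0, cost_per_hour=0.08, available=True),
    DataCenter("dc-apac", temp_c=26.0, cost_per_hour=0.10, available=False),
]
print(place_workload(dcs).name)  # dc-east: dc-west is too hot, dc-apac is down
```

Note what the toy omits: the security, compliance, eDiscovery and interoperability constraints listed above would each add further filters to the candidate list, which is precisely why cross-cloud LAM is harder than the scheduling math suggests.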

3) Modern network fabric, an underlying networking element of all modern clouds that enhances server virtualization and live application migration, and collapses parallel cloud fabrics into a single Ethernet-based unified fabric.  In particular, the latest Ethernet incarnation is adopting concepts from the IP, Fibre Channel and Infiniband protocols to power modern cloud environments (for more on Ethernet’s evolution, see Ethernet 4.0).  Specifically:

  • Server virtualization drives much higher utilization of 1Gb/10Gb Ethernet links, thus requiring a line-rate network infrastructure with symmetrically balanced cross-sectional bandwidth (i.e. the ingress-to-egress bandwidth ratio in Ethernet/IP switches trending toward 1-to-1).  Porting virtual machine images (a few gigabytes in size) to servers also needs a very high-speed network.  Similarly, LAM needs larger Layer-2 domains to broaden the live migration footprint, and benefits from lower network latencies.  With a 10GbE network substrate (architecturally ready for 40GbE/100GbE interfaces) and lower-latency Ethernet, plus enlarged Layer-2 domains via Ethernet multi-pathing or virtual private LAN service (VPLS), modern Ethernet/IP networking meets the needs of virtualized cloud environments.
  • Unified fabric, an emerging ingredient for collapsing parallel cloud fabrics into a single transport fabric.  In particular, the Ethernet-based data fabric, Fibre Channel-based storage fabric and Infiniband-based cluster computing fabric can be replaced in aggregate by a single unified fabric based on the latest evolution of Ethernet (aka Data Center Ethernet or Converged Enhanced Ethernet).  Lower cost and reduced operational complexity follow because there is (a) no need to buy and manage disparate devices, (b) a smaller number of host adapters per server (down from as many as six to two), (c) a proportional reduction in cabling and (d) a simplified transport topology.

4) Policy-based segmentation, a necessary ingredient for extending traditional network segmentation concepts (de-militarized zone or DMZ, Extranet, Intranet) to the cloud world.  In virtualized, elastic and collaborative cloud environments where network borders are hard to pin down (see also Networks without Borders), segmentation based on VLANs, IP addresses and Layer-4 ports has become insufficient.  What is needed are virtual zones (vZones) that (a) leverage policies based on identity, protocol, network, environment and application attributes, (b) apply to user-to-machine as well as machine-to-machine transactions, and (c) are reconfigurable at a moment’s notice as business, regulatory and security environments change.  In effect, alongside traditional network zones created by a firewall’s Layer-4 ACLs (access control lists), vZones established by granular Layer-7 ACLs are necessary for policy-based enforcement without touching servers, operating systems or applications. With the unprecedented granularity and control provided by policy-based vZones, IT can maintain a consistent security posture and ensure regulatory compliance while enabling the business to reduce cost, improve agility and broaden collaboration.
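A minimal sketch of how a vZone rule differs from a Layer-4 ACL: instead of matching only addresses and ports, the decision below weighs identity, protocol, environment and application attributes together. The attribute names are illustrative and not drawn from any specific product:

```python
def vzone_allows(tx: dict) -> bool:
    """Permit a transaction only if every attribute-based condition holds."""
    rules = [
        tx.get("user_role") in {"finance", "auditor"},  # identity attribute
        tx.get("protocol") == "https",                  # protocol attribute
        tx.get("app") == "ledger",                      # application attribute
        tx.get("environment") == "production",          # environment attribute
    ]
    return all(rules)

print(vzone_allows({"user_role": "finance", "protocol": "https",
                    "app": "ledger", "environment": "production"}))  # True
print(vzone_allows({"user_role": "guest", "protocol": "https",
                    "app": "ledger", "environment": "production"}))  # False
```

Because the rule set is just data, reconfiguring a vZone when the regulatory or security environment shifts means editing policy, not re-cabling networks or touching the servers themselves.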

5) Federated single sign-on (SSO), a necessary ingredient of all application clouds for providing highly convenient and seamless access to application resources.  As described in the cloud framework post, enterprises will retain many applications in the private cloud, subscribe to multiple SaaS (public cloud) apps, and utilize public infrastructure clouds for running other internal apps. Despite such a highly heterogeneous application environment, it is imperative that access to cloud applications be seamless – via cloud-agnostic single sign-on (SSO) – and leverage one or more enterprise-administered directory stores to ensure consistent, attribute-based application access.  A user having a contractual relationship with the enterprise – as an employee, customer, partner/vendor, contractor, etc. – should need to log in only once to access all allowable cloud applications, wherever they may reside.
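The federation idea can be sketched in a few lines: the enterprise identity provider signs a token once at login, and every participating application – private or SaaS – trusts that issuer instead of keeping its own password store. The token format and key handling here are deliberately toy-like (real deployments use SAML- or OAuth-style assertions and asymmetric signatures):

```python
import hashlib
import hmac

# Hypothetical shared key between the identity provider and federated apps.
IDP_SECRET = b"enterprise-directory-key"

def issue_token(user: str) -> str:
    """Identity provider signs the user name once, at login time."""
    sig = hmac.new(IDP_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def app_accepts(token: str) -> bool:
    """Any federated app verifies the same signature instead of re-authenticating."""
    user, sig = token.split(":")
    expected = hmac.new(IDP_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")        # one login...
print(app_accepts(token))           # True: accepted by every federated app
print(app_accepts("alice:forged"))  # False: a tampered token is rejected
```

The design choice worth noticing is that the directory store stays under enterprise administration: applications consume assertions about the user rather than holding credentials, which is what makes the access attribute-based and cloud-agnostic.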

The above five elements appear to form a rather complete basis for modern clouds.  Interestingly, as one would expect, the canonical elements apply by and large to infrastructure clouds – the center point for most of the current cloud sizzle!



When it comes to Ethernet, it’s a rather joyful and nostalgic walk down the Layer-2 memory lane.  This most recent walk was triggered by a few good articles I read this week.

The rise of Ethernet as the king of network connectivity, wired and wireless, has been simply fascinating.  Not necessarily the best technology, Ethernet has ridden its price-performance, ease of use and flexibility to adopt the better traits of others to get ahead – and stay ahead – of its Layer-2 compatriots, whether FDDI, ATM, Frame Relay or Infiniband (hey Fibre Channel – watch out, Ethernet is coming to town).  A clear case of pragmatism winning over perfection!  Perhaps the only native Ethernet characteristic that has remained constant is its frame format…

As Ethernet has evolved, the industry has ended up using different modifiers to distinguish Ethernet from its prior avatars: shared Ethernet, fast Ethernet, switched Ethernet, carrier/metro Ethernet…  These prefix modifiers certainly provide a precise functional description but lack a temporal sense of the way Ethernet has evolved.  This post is a curious attempt to chronologically categorize advances in Ethernet using a numerical suffix modifier – in the same spirit as the widely used Web 2.0 categorization.

By the following reckoning, it seems that we are in the era of Ethernet 4.0:

  • Ethernet 1.0 (Classic era, pre-1990): shared or classic Ethernet as one of many Layer-2 technologies, proposed by Bob Metcalfe at Xerox PARC in 1973 and standardized by the IEEE in 1985. All users on the network share the total bandwidth, and collisions are avoided based on the CSMA/CD algorithm (Carrier Sense Multiple Access with Collision Detection). Interface speed: up to 10Mb/s.
  • Ethernet 2.0 (LAN era, 1990 – 2000): This was the “coming out” era for Ethernet, where it became the LAN technology of choice with functionalities such as bridged (switched) Ethernet, spanning tree protocol, VLAN, link aggregation, class of service, Wireless LAN (WLAN), power over Ethernet (PoE)… Ethernet, along with its Layer-3 counterpart Internet Protocol (IP), enabled convergence of parallel data, voice and video networks into one multi-service Ethernet/IP network for data/voice/video.  Interface speeds: 100Mb/s and 1Gb/s.
  • Ethernet 3.0 (MAN era, 2000 – 2007): A major win for Ethernet as it began to penetrate service provider networks for metro Ethernet services using technologies such as Q-in-Q, MAC-in-MAC and virtual private LAN service (VPLS).  Interface speed: 10Gb/s.
  • Ethernet 4.0 (Cloud era, 2008 – ?): Ethernet is adopting concepts from IP, Infiniband and Fibre Channel protocols for powering next-generation virtualized, workload agile data centers and modern cloud environments.  These advanced characteristics consist of reliability, lower latency, multi-pathing and unified I/O (including Fibre Channel over Ethernet, or FCoE), and are being standardized in IEEE and IETF (see here).  Like its 2.0 predecessor, Ethernet 4.0 has the opportunity to collapse parallel networks of data (Ethernet), storage (Fibre Channel) and cluster computing (Infiniband) into a single unified Ethernet/IP cloud network.  Interface speeds: 40Gb/s and 100Gb/s (estimated 2010).

Interestingly, enhanced Layer-2 Ethernet functionality as well as Ethernomics (a 10X interface speed increase for 3X the price) have been the catalyst for most network equipment churns thus far.  MPLS and IPv6 are perhaps the only celebrated enhancements at Layers 3 and 4 (IP and TCP/UDP) to which a network refresh can be attributed, and only in limited customer segments and/or geographies (service providers, federal/defense, Asia).

Though not a concern any time soon, Ethernet should keep a watchful eye on other popular connectivity protocols, such as Bluetooth and USB.  Like Ethernet, both of these protocols are easy to use, very cost effective and widely deployed on personal computers, mobile phones, PDAs and electronic gadgets of all types.

What will Ethernet be up to in 5.0?  400 Gigabit and Terabit Ethernet?  TCP-less data centers (via RDMA over Ethernet)?  Massively scalable processor/memory area networks (through disaggregation of servers into mega islands of CPUs and memory communicating over Ethernet interconnect)?

After a long pause, the new world of networking is getting interesting again!


Update (5/21/09): Bob Klessig – who introduced me to the world of Metro Ethernet – indicated that “another key factor to the success of Ethernet is the addressing.  MAC addresses are administered in an open way which allowed easy market entry for vendors and a high probability of uniqueness for each manufactured device.  The similar nature of IP address administration is why IP was successful and why ATM failed.”

Will we run out of MAC addresses some day?  The IETF solved similar IPv4 addressing concerns with IPv6.  Perhaps IEEE 802 will need to deal with the MAC address issue in the Ethernet 5.0 timeframe…
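For a sense of scale, the arithmetic is simple. A MAC address is 48 bits; the IEEE hands vendors 24-bit OUI prefixes, and each vendor numbers 24-bit device suffixes (this rough count ignores the multicast and locally-administered flag bits, which carve out part of the space):

```python
# Back-of-the-envelope check on MAC address exhaustion.
total_macs = 2 ** 48  # the full 48-bit address space
ouis = 2 ** 24        # vendor prefixes the IEEE can assign
per_oui = 2 ** 24     # devices each vendor can number under one prefix

print(total_macs)  # 281474976710656 (~2.8 * 10^14 addresses)
print(per_oui)     # 16777216 (~16.7 million devices per OUI)
```

At roughly 281 trillion addresses the global pool looks comfortable, but a single busy vendor can exhaust a 16.7-million-device OUI quickly, which is why vendors end up holding many prefixes.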

“Cloud” has become the latest love term, promising IT services at much lower cost and with much higher agility.  However, there has been much debate about the types of services a cloud provides and how it provides them.  Traditionally, networking folks have drawn clouds on network diagrams to conceptually represent transport services. Nowadays anyone who touches (directly or indirectly) data center, virtualization, computing, networking, storage, security, provisioning, convergence, scaling, federation, software, hosting, infrastructure, platform, etc. is on the cloud bandwagon. This “everything but the kitchen sink” approach has undoubtedly caused confusion as to what clouds really mean; a case in point is last week’s “What is a cloud?” article by Tim Green of Network World.

For a cloud services framework to be simple yet meaningful, two key categories come to the forefront:

  • Cloud service type – infrastructure versus application: An infrastructure cloud service (aka infrastructure-as-a-service, or IaaS) is one where some portion of the IT infrastructure – compute, storage, programming, security, identity, etc. – is offered as a service. This infrastructure service is an enabler for running end-user applications. An application cloud service (aka software-as-a-service, or SaaS), on the other hand, is a self-sufficient, soup-to-nuts application offering for the end user, i.e. there are no additional IT efforts or dependencies that the end user needs to address.
  • Cloud service usage – public versus private: A public cloud service can be subscribed to by any end user (the public at large) and is typically accessed over the Internet, e.g. using HTTP/HTTPS web protocols and mostly via a web browser.  A private (or internal) cloud, on the other hand, is owned/controlled by a particular end user and hence has restricted access – e.g. through a campus network and/or VPN tunnels.

The above Type x Usage framework forms a nice 2×2 analysis grid for evaluating cloud services.  Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3), for example, are public infrastructure cloud services, whereas Cisco Webex is a public application cloud service.  Private clouds are not as prevalent today, though they are being talked about more frequently; see the recent InformationWeek article “Why ‘Private Cloud’ Computing Is Real — And Worth Considering”.  Bechtel, for instance, was an early adopter of private clouds, even before the term “private cloud” was coined; see “The Google-ization of Bechtel” and “Cloud Computing to the Max at Bechtel”.
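The 2×2 grid itself is small enough to write down directly. The example placements follow the post; the private-cloud entries are illustrative placeholders rather than named products:

```python
# Toy lookup over the Type x Usage 2x2 grid, not a rigorous taxonomy.
GRID = {
    ("infrastructure", "public"):  ["Amazon EC2", "Amazon S3"],
    ("application",    "public"):  ["Cisco WebEx"],
    ("infrastructure", "private"): ["enterprise-built IaaS"],      # placeholder
    ("application",    "private"): ["internal apps run as SaaS"],  # placeholder
}

def classify(service_type: str, usage: str):
    """Return known examples for one cell of the 2x2 grid."""
    return GRID.get((service_type, usage), [])

print(classify("infrastructure", "public"))  # ['Amazon EC2', 'Amazon S3']
print(classify("application", "public"))     # ['Cisco WebEx']
```

Blended offerings – say, a SaaS start-up built on public IaaS – simply occupy more than one cell, which is about as much stretching as the framework needs for most discussions.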

Based on discussions with several large enterprise customers and cloud providers, it seems that large enterprises would likely follow Bechtel’s lead:

  • build their own private IaaS and SaaS clouds,
  • scale private clouds’ reach/capacity by extending to hosted IaaS & SaaS,
  • subscribe to public IaaS opportunistically for non-core infrastructure needs,
  • convert internal applications to private SaaS,
  • outsource certain enterprise applications to public SaaS.

Traditional SMEs, on the other hand, are more likely to gravitate towards public cloud infrastructure for most of their IT needs.

Certainly, it’s natural to expect blending of the above framework components. For instance, start-up SaaS providers often leverage one or more IaaS services (e.g. compute, storage, security). Similarly, a public cloud provider may instantiate their service for private use. In most scenarios, the above framework should be reasonably sufficient for describing cloud services.

Needless to say, this hypothesis will be tested more thoroughly as it is applied widely in subsequent cloud-related posts.


