Yes Cloud!

“To cloud or not to cloud?” is not up for debate – clouds were here before, are here now and will always be here in the future, albeit under different labels to fit the market inflections of each era.  The benefits to enterprise customers (of all sizes) of adopting clouds are also well established: lower cost with a pay-as-you-grow pricing model, and greater flexibility and agility.  Because these business benefits also applied to traditional hosted datacenter/application providers, it is important to identify the newer elements that make up modern clouds.

As described in A Simplified Framework for Clouds, modern clouds may be characterized along at least two dimensions: infrastructure or application, public or private.  All clouds, modern or not, need traditional elements such as scale, resiliency, multi-tenancy and automated provisioning, though modern clouds may impose fresh requirements on them to address today’s business needs. In contrast, the canonical elements discussed below are enablers for modern clouds and are not typical of legacy hosted datacenter/application environments:

  • Computing: server virtualization, live application migration (LAM)
  • Networking: modern network fabric
  • Security: policy-based segmentation
  • Application: federated single sign-on (SSO)

1) Server virtualization, a must-have element of infrastructure/compute clouds (public or private) that (a) maximizes the efficiency of server computing by partitioning each physical server into many virtual machines, and (b) decouples the OS – and thereby applications – from physical servers, thus allowing applications to be portable (via application+OS packaged as a machine-executable “file” that can be moved, copied, stored, carried, deleted…).  With the deployment of dense multi-core blade servers, hundreds of virtual machines can be instantiated per server rack.  Such massive server scale, along with application mobility, enables unparalleled compute elasticity that allows rapid scale-up/down of the number of virtual machines allocated to a given application workload.
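The “hundreds of virtual machines per rack” claim is easy to sanity-check with back-of-the-envelope arithmetic; the densities below are illustrative assumptions, not figures from any particular vendor:

```python
# Rough VM-density estimate for one server rack.
# All three densities are hypothetical, chosen only to illustrate the scale.
blades_per_chassis = 16   # assumption: dense blade chassis
chassis_per_rack = 4      # assumption: four chassis fill a standard rack
vms_per_blade = 10        # assumption: a multi-core blade hosts ~10 VMs

vms_per_rack = blades_per_chassis * chassis_per_rack * vms_per_blade
print(vms_per_rack)  # 640 -- i.e., hundreds of VMs in a single rack
```

Even with conservative per-blade counts, the multiplication lands squarely in the “hundreds per rack” range the paragraph describes.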

2) Live application migration (LAM), an exciting element of infrastructure/compute clouds (public or private) that enables live (in-service) migration of application workloads from one virtual machine to another using technologies such as VMware’s vMotion and Citrix XenServer’s XenMotion.  Of course, the immediate benefit is that servers/OS can be upgraded/retired without bringing down the application itself.  Extending this further, applications can be envisioned to run anywhere in a cloud through policies based on dynamic context, such as temperature (follow-the-mercury), cost (follow-the-price), time (follow-the-sun), availability (follow-the-uptime), capacity (follow-the-GHz)…  For instance, an application workload could move from a hotter rack to a cooler rack, or from a higher-cost data center to a lower-cost one during non-business hours.  Ultimately, application workloads could migrate live from one cloud to another, or expand across multiple clouds during peak demand (e.g. from private cloud to private + public clouds).  As the migration footprint widens, crossing heterogeneous administrative domains, multiple new business and technology challenges emerge:

  • security (should intellectual property data be served from a piracy-prone region?);
  • compliance (can personal information move outside jurisdictional boundary?);
  • eDiscovery (will a secondary, sub-contracting cloud provider block enterprise’s access to enterprise’s own electronically stored information?);
  • cross-cloud interoperability – provisioning, consistent policy for LAM, SLA ownership, trouble-shooting etc. (who is the responsible party?).

Because of the above issues, cross-cloud (inter-cloud) or even cross-datacenter LAM, while hugely exhilarating, will take some time to become practical (for more inter-cloud meandering, see A Hitchhikers Guide to the Intercloud).
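The “follow-the-X” policies sketched above boil down to a placement decision over dynamic context. Here is a minimal, hypothetical sketch of such a policy engine – the datacenter attributes, names and weights are invented for illustration:

```python
# Minimal sketch of context-driven workload placement
# ("follow-the-mercury", "follow-the-price", ...).
# Datacenter names, attributes and weights are illustrative assumptions.
def pick_datacenter(datacenters, weights):
    """Return the datacenter with the lowest weighted score."""
    def score(dc):
        return sum(weights[attr] * dc[attr] for attr in weights)
    return min(datacenters, key=score)

datacenters = [
    {"name": "dc-east", "temp_c": 30, "price_per_kwh": 0.12},
    {"name": "dc-west", "temp_c": 22, "price_per_kwh": 0.09},
]

# During non-business hours, weight cost heavily (follow-the-price).
best = pick_datacenter(datacenters, {"temp_c": 1.0, "price_per_kwh": 100.0})
print(best["name"])  # dc-west -- cooler AND cheaper in this toy example
```

A real implementation would of course fold in the security, compliance and SLA constraints listed above before any migration is triggered.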

3) Modern network fabric, an underlying networking element of all modern clouds that enhances server virtualization and live application migration, and collapses parallel cloud fabrics into a single Ethernet-based unified fabric.  In particular, the latest Ethernet incarnation is adopting concepts from the IP, Fibre Channel and Infiniband protocols to power modern cloud environments (for more on Ethernet’s evolution, see Ethernet 4.0).  Specifically:

  • Server virtualization drives much higher utilization of 1Gb/10Gb Ethernet links, thus requiring a line-rate network infrastructure with symmetrically balanced cross-sectional bandwidth (i.e. the ingress-to-egress bandwidth ratio in Ethernet/IP switches trending toward 1-to-1).  Porting virtual machine images (a few gigabytes in size) to servers also needs a very high-speed network.  Similarly, LAM needs larger Layer-2 domains to broaden the live-migration footprint, and benefits from lower network latencies.  With a 10GbE network substrate (architecturally ready for 40GbE/100GbE interfaces) and lower-latency Ethernet, plus enlarged Layer-2 domains via Ethernet multi-pathing or virtual private LAN service (VPLS), modern Ethernet/IP networking meets the needs of virtualized cloud environments.
  • Unified fabric, an emerging ingredient for collapsing parallel cloud fabrics into a single transport fabric.  In particular, the Ethernet-based data fabric, Fibre Channel-based storage fabric and Infiniband-based cluster-computing fabric can be replaced in aggregate by a single unified fabric based on the latest evolution of Ethernet (aka Data Center Ethernet or Converged Enhanced Ethernet).  Lower cost and reduced operational complexity follow because (a) there is no need to buy and manage disparate devices, (b) the number of host adapters per server drops from up to six to two, (c) cabling shrinks proportionally, and (d) the transport topology is simplified.
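The 1-to-1 cross-sectional bandwidth point can be made concrete with a quick oversubscription calculation; the port counts below describe a hypothetical top-of-rack switch, not any specific product:

```python
# Oversubscription ratio of a hypothetical top-of-rack switch:
# 40 server-facing 10GbE ports feeding only 4 uplink 10GbE ports.
ingress_gbps = 40 * 10   # server-facing (ingress) capacity
egress_gbps = 4 * 10     # uplink (egress) capacity

ratio = ingress_gbps / egress_gbps
print(ratio)  # 10.0 -- a 10:1 oversubscribed design
# A modern cloud fabric pushes this ratio toward 1:1 so virtualized
# servers can run their links at line rate.
```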

4) Policy-based segmentation, a necessary ingredient for extending traditional network segmentation concepts (de-militarized zone or DMZ, Extranet, Intranet) to the cloud world.  In virtualized, elastic and collaborative cloud environments where network borders are hard to quantify (see also Networks without Borders), segmentation based on VLANs, IP addresses and Layer-4 ports has become insufficient.  What is needed are virtual zones (vZones) that (a) leverage policies based on identity, protocol, network, environment and application attributes, (b) apply to user-to-machine as well as machine-to-machine transactions, and (c) are reconfigurable at a moment’s notice based on changing business, regulatory and security environments.  In effect, alongside traditional network zones created by a firewall’s Layer-4 ACLs (access control lists), vZones established by granular Layer-7 ACLs are necessary for policy-based enforcement without touching servers, OS or applications. With the unprecedented granularity and control provided by policy-based vZones, IT can maintain a consistent security posture and ensure regulatory compliance while enabling the business to reduce cost, improve agility and broaden collaboration.
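To see how a Layer-7 vZone rule differs from an IP/port ACL, here is a minimal sketch of an attribute-match policy check. The attribute names and rule structure are hypothetical, not any vendor’s actual schema:

```python
# Sketch of a Layer-7 "vZone" policy check: a transaction is permitted
# only if every attribute named in the rule matches an allowed value.
# Rule schema and attribute names are illustrative assumptions.
def allowed(rule, request):
    """Return True only if the request satisfies every rule attribute."""
    return all(request.get(attr) in permitted for attr, permitted in rule.items())

finance_vzone = {
    "role":       {"finance", "auditor"},   # identity attribute, not source IP
    "method":     {"GET"},                  # application-specific action
    "url_prefix": {"/reports"},             # application attribute, not port
}

req = {"role": "auditor", "method": "GET", "url_prefix": "/reports"}
print(allowed(finance_vzone, req))  # True -- auditor may read reports
```

Contrast this with a Layer-4 ACL, which could only see the source/destination addresses and port of the same transaction.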

5) Federated single sign-on (SSO), a necessary ingredient of all application clouds for providing highly convenient and seamless access to application resources.  As described in the cloud framework post, enterprises will retain many applications in the private cloud, subscribe to multiple SaaS (public cloud) apps, and utilize public infrastructure clouds for running other internal apps. Despite such a highly heterogeneous application environment, it is imperative that access to cloud applications be seamless, via cloud-agnostic single sign-on (SSO), and leverage one or more enterprise-administered directory stores to ensure consistent, attribute-based application access.  A user having a contractual relationship with the enterprise – as an employee, customer, partner/vendor, contractor, etc. – should need to log in only once to access all allowable cloud applications, wherever they may reside.
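The access decision such federated SSO enables – one directory, many clouds – can be sketched as below. The user, application and attribute names are invented for illustration; a real deployment would use a federation protocol such as SAML rather than direct dictionary lookups:

```python
# Sketch: one login, one enterprise directory, many cloud applications.
# A user's directory attributes are checked against per-application
# requirements. All names and attributes here are hypothetical.
directory = {
    "alice": {"relationship": "employee", "dept": "engineering"},
}

app_requirements = {
    "saas-crm":   {"relationship": {"employee", "partner"}},   # public SaaS app
    "private-hr": {"relationship": {"employee"}, "dept": {"hr"}},  # private cloud app
}

def can_access(user, app):
    """Attribute-based decision from the single enterprise directory."""
    attrs = directory.get(user, {})
    reqs = app_requirements[app]
    return all(attrs.get(k) in permitted for k, permitted in reqs.items())

print(can_access("alice", "saas-crm"))    # True  -- employee may use the CRM
print(can_access("alice", "private-hr"))  # False -- wrong department
```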

The above five elements do appear to form a rather complete basis for modern clouds.  Interestingly, as one would expect, the canonical elements apply by and large to infrastructure clouds – the center point for most of the current cloud sizzle!


When it comes to Ethernet, it’s a rather joyful and nostalgic walk down the Layer-2 memory lane.  This most recent walk was triggered by a few good articles I read this week.

The rise of Ethernet as the king of network connectivity, wired and wireless, has been simply fascinating.  Not necessarily the best technology, its price-performance, ease of use and flexibility to adopt the better traits of others have helped Ethernet get ahead and stay ahead of its Layer-2 compatriots, whether FDDI, ATM, Frame Relay or Infiniband (hey Fibre Channel – watch out, Ethernet is coming to town).  A clear case of pragmatism winning over perfection!  Perhaps the only native Ethernet characteristic that has remained constant is its frame format…

As Ethernet has evolved, the industry has ended up using different modifiers to differentiate Ethernet from its prior avatars, using terms such as shared Ethernet, fast Ethernet, switched Ethernet, carrier/metro Ethernet…  These prefix modifiers certainly provide a precise functional description but lack the temporal sense of the way Ethernet has evolved.  This post is a curious attempt to chronologically categorize advances in Ethernet using a numerical suffix modifier – in the same spirit as the widely used Web 2.0 categorization.

According to the following, it seems that we are in the era of Ethernet 4.0:

  • Ethernet 1.0 (Classic era, pre-1990): shared or classic Ethernet as one of many Layer-2 technologies, proposed by Bob Metcalfe at Xerox PARC in 1973 and standardized by the IEEE in 1985. All users on the network share the total bandwidth, and collisions are detected and handled by the CSMA/CD algorithm (Carrier Sense Multiple Access with Collision Detection). Interface speed: up to 10Mb/s.
  • Ethernet 2.0 (LAN era, 1990 – 2000): This was the “coming out” era for Ethernet, where it became the LAN technology of choice with functionalities such as bridged (switched) Ethernet, spanning tree protocol, VLAN, link aggregation, class of service, Wireless LAN (WLAN), power over Ethernet (PoE)… Ethernet, along with its Layer-3 counterpart Internet Protocol (IP), enabled convergence of parallel data, voice and video networks into one multi-service Ethernet/IP network for data/voice/video.  Interface speeds: 100Mb/s and 1Gb/s.
  • Ethernet 3.0 (MAN era, 2000 – 2007): A major win for Ethernet as it began to penetrate service provider networks for metro Ethernet services, using technologies such as Q-in-Q, MAC-in-MAC and virtual private LAN service (VPLS).  Interface speed: 10Gb/s.
  • Ethernet 4.0 (Cloud era, 2008 – ?): Ethernet is adopting concepts from IP, Infiniband and Fibre Channel protocols for powering next-generation virtualized, workload agile data centers and modern cloud environments.  These advanced characteristics consist of reliability, lower latency, multi-pathing and unified I/O (including Fibre Channel over Ethernet, or FCoE), and are being standardized in IEEE and IETF (see here).  Like its 2.0 predecessor, Ethernet 4.0 has the opportunity to collapse parallel networks of data (Ethernet), storage (Fibre Channel) and cluster computing (Infiniband) into a single unified Ethernet/IP cloud network.  Interface speeds: 40Gb/s and 100Gb/s (estimated 2010).

Interestingly, the enhanced functionality of Layer-2 Ethernet, along with Ethernomics (10X interface-speed increase for 3X the price), has been the catalyst for most network-equipment churn thus far.  MPLS and IPv6 are perhaps the only celebrated enhancements at Layers 3 and 4 (IP and TCP/UDP) to which a network refresh can be attributed, and then only in limited customer segments and/or geographies (service providers, federal/defense, Asia).

Though not a concern any time soon, Ethernet should keep a watchful eye on other popular connectivity protocols, such as Bluetooth and USB.  Like Ethernet, both of these protocols are easy to use, very cost-effective and widely deployed on personal computers, mobile phones, PDAs and electronic gadgets of all types.

What will Ethernet be up to in 5.0?  400 Gigabit and Terabit Ethernet?  TCP-less data centers (via RDMA over Ethernet)?  Massively scalable processor/memory area networks (through disaggregation of servers into mega islands of CPUs and memory communicating over Ethernet interconnect)?

After a long pause, the new world of networking is getting interesting again!


Update (5/21/09): Bob Klessig – who introduced me to the world of Metro Ethernet – indicated that “another key factor to the success of Ethernet is the addressing.  MAC addresses are administered in an open way which allowed easy market entry for vendors and a high probability of uniqueness for each manufactured device.  The similar nature of IP address administration is why IP was successful and why ATM failed.”

Will we run out of MAC addresses some day?  The IETF solved similar IPv4 addressing concerns with IPv6.  Perhaps IEEE 802 will need to deal with the MAC address issue in the Ethernet 5.0 timeframe…
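The scale of that concern is easy to quantify: a MAC address is 48 bits, split into a 24-bit vendor prefix (OUI) and a 24-bit device identifier:

```python
# MAC address space arithmetic: 48 bits total,
# 24-bit OUI (vendor prefix) + 24-bit per-vendor device ID.
total_macs = 2 ** 48
per_oui = 2 ** 24

print(total_macs)  # 281474976710656 -- about 281 trillion addresses
print(per_oui)     # 16777216 -- about 16.8 million devices per vendor prefix
```

The global pool is vast, but individual vendor prefixes exhaust far sooner, which is why vendors keep registering new OUIs.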

Look ma – no borders!

Enterprise networks are going through phenomenal transformations, driven by the business’ determination to reduce cost and become highly agile.  In the process, both internal and external borders or edges (or perimeters or boundaries) of enterprise networks are dissipating.  Traditionally, network edges have been quite critical as many intelligent services are applied to network traffic crossing the edge.

Canonically, network edges can be mapped into three main categories: Campus-facing, External-facing, and Server-facing.  In the new world, all three network edges are being re-defined.

1) Campus-facing network edge: In a typical campus environment, end-user devices – e.g. desktops, laptops, IP phones – connect to the network through wiring-closet switches and wireless access points.  With virtual desktop infrastructure (VDI), the PC itself is moving to the data center and hence is no longer connected at the campus edge.  End users would connect to their “data center PCs” via smart terminals (e.g. ones that support RDP – the remote desktop protocol).  The cost savings are obvious: OS patching, HW/SW upgrades, etc. are now done centrally, and, thanks to server virtualization, server HW can be shared across multiple users.  Edge features such as NAC, protocol recognition, etc. are no longer relevant on networking devices.

2) External-facing network edge: Traditionally, this edge delineated the trusted inside vs. the untrusted outside using network firewalls.  Firewalls provided controlled access to designated network segments, e.g. the demilitarized zone (DMZ) or Extranet zone.  Because inter-enterprise collaboration is rapidly becoming web-based and identity-driven, network firewalls are no longer effective in providing the necessary controls to HTTP and SSL transactions – these transactions pass right through the firewall!  Controls need to move much closer to servers/applications, taking into account user identity & attributes (not just source IP address), application attributes such as URLs, sub-sites, folders & files (not just destination IP address & port number) and potentially application-specific actions that are exposed in the protocol (e.g. via HTTP query string, header attributes, methods and even payload).  This “vanishing perimeter” phenomenon has been widely covered in the industry, and vendors are providing appliance-based solutions to re-establish controls through policy-driven virtual zones (vZones).

3) Server-facing network edge: Not too long ago, physical servers connected to a “top of rack” or “end of rack” switch, which formed the server-facing network edge.  With the advent of blade servers, this edge moved into the blade servers in the form of a blade switch.  Now, with server virtualization coming to fame, the server-facing network edge has moved further out to the virtual “hypervisor” switch that connects multiple virtual machines within a server (or server blade).  Interestingly, these virtual switches have so far been provided by server virtualization vendors; Cisco is the first traditional networking vendor to announce plans to offer its own virtual switch product, the Nexus 1000v.

Additionally, with the emergence of cloud computing, enterprise network edges are to be extended to the cloud – sometimes deterministically and other times on demand, e.g. on a per application basis or even on a per workload basis.  And, as the network edges get re-defined, so must the network design best practices.  After a long pause, the new world of networking is getting interesting again!


Update (25 April 2009): Network World article on “Cloud computing a ‘security nightmare,’ says Cisco CEO” quoted Tom Gillis, vice president of marketing with Cisco’s Security Technology Business Unit: “The move to collaboration, whether it be video or the use of Web 2.0 technologies or mobile devices is really dissolving the corporate perimeter. This notion of security as a line that you draw in the sand… that notion is just gone.”

There has been an amazing innovation race between the connectivity (i.e. networking) and computing disciplines over the last 50-60 years.  Generally speaking, end nodes are the “computing” elements that interact with each other over a “connectivity” fabric.  Together, they have fundamentally changed the way we live, work, play and learn.  As Cisco gets ready to launch its “Unified Computing” vision on Monday, it is worth looking at how these two disciplines have evolved and where they are headed.

First it was computing that led the way in the 60s, 70s and 80s, with innovations in mainframes, minicomputers and personal computers.   Connectivity was nothing more than slow serial lines that allowed dumb terminals to connect to computing brains.  Next it was connectivity’s turn in the 80s and 90s, with standards-based advancements in Ethernet, IP, TCP/UDP and HTTP that gave rise to client-server computing and our beloved Internet.  Connectivity not only caught up nicely; it began to lead.

Then we entered a new decade of a new century, where computing started out with a bang, led by server virtualization and workload mobility.   Sure, there was 10GbE to brag about on the connectivity front, but that was just speeds and feeds – no new intelligence added to the network.  It seemed that the edge of the connectivity fabric was getting abstracted away, absorbed by computing heads.  Even if one were to discount the blade switch (an access layer sucked into a blade server), the rise of the software “hypervisor” switch seemed a real shocker to the packet heads.

Where is my network edge?  How do I apply network services if the packets don’t come out?  Who owns the hypervisor switch?  How do I maintain network policies if workloads move?  Is the network becoming “dumb” pipes?

In the past those who bet against networking always lost!  Will that trend continue?  Will virtual machines, inter-VM interactions and VM movements be exposed to the connectivity fabric?  And in a standards-based manner for mass-scale deployment that is interoperable across vendors?  The answer needs to be “Yes” and is so proclaimed by the king of connectivity.  Let’s see how the future unfolds… Perhaps it is no longer a race between connectivity and computing, but instead a more symbiotic “connected computing” relationship!

