Yes Cloud!

Archive for March 2009

Look ma – no borders!

Enterprise networks are going through phenomenal transformations, driven by the business's determination to reduce cost and become highly agile.  In the process, both internal and external borders or edges (or perimeters or boundaries) of enterprise networks are dissipating.  Traditionally, network edges have been quite critical, as many intelligent services are applied to network traffic crossing the edge.

Canonically, network edges can be mapped into three main categories: Campus-facing, External-facing, and Server-facing.  In the new world, all three network edges are being re-defined.

1) Campus-facing network edge: In a typical campus environment, end user devices – e.g. desktops, laptops, IP phones – connect to the network through wiring closet switches and wireless access points.  With virtual desktop infrastructure (VDI), the PC itself is moving to the data center and hence is no longer connected at the campus edge.  End users would connect to their “data center PCs” via smart terminals (e.g. ones that support RDP – the Remote Desktop Protocol).  Cost savings are obvious: OS patching, HW/SW upgrades, etc. are now done centrally, and, thanks to server virtualization, server HW can be shared across multiple users.  Edge features such as NAC, protocol recognition, … are no longer relevant on networking devices.

2) External-facing network edge: Traditionally, this edge delineated the trusted inside vs. the untrusted outside using network firewalls.  Firewalls provided controlled access to designated network segments, e.g. demilitarized zone (DMZ), ExtraNet zone.  Because inter-enterprise collaboration is rapidly becoming web-based and identity-driven, network firewalls are no longer effective in providing the necessary controls to HTTP and SSL transactions – these transactions pass through the FW!  Controls need to move much closer to servers/applications, taking into account user identity & attributes (not just source IP address), application attributes such as URLs & sub-sites & folders & files (not just destination IP address & port number) and potentially application-specific actions that are exposed in the protocol (e.g. via HTTP query string, header attributes, methods and even payload).  This “vanishing perimeter” phenomenon has been widely covered in the industry, and vendors are providing appliance-based solutions to re-establish controls through policy-driven virtual zones (vZones).
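To make this concrete, here is a minimal Python sketch of what an identity- and application-aware check could look like when enforced close to the application rather than at an IP/port firewall. The Rule class, the sample policy and the function names are hypothetical illustrations of the idea, not any vendor's actual API.

    # Minimal sketch of a vZone-style access check: decisions use user identity,
    # URL path and HTTP method rather than just IP addresses and port numbers.
    # All names here are hypothetical and purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        allowed_groups: set       # user identity & attributes, not just source IP
        url_prefix: str           # application attribute: site/folder/file path
        allowed_methods: set      # application-specific actions (HTTP methods)

    def check_access(user_groups, url_path, method, rules):
        """Permit the transaction only if some rule matches identity, URL and action."""
        return any(
            (user_groups & rule.allowed_groups)
            and url_path.startswith(rule.url_prefix)
            and method in rule.allowed_methods
            for rule in rules
        )

    policy = [Rule({"partners"}, "/extranet/orders/", {"GET", "POST"})]
    print(check_access({"partners"}, "/extranet/orders/123", "GET", policy))   # True
    print(check_access({"partners"}, "/intranet/hr/", "GET", policy))          # False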

3) Server-facing network edge: Not too long ago, physical servers connected to a “top of rack” or “end of rack” switch, which formed the server-facing network edge.  With the advent of blade servers, this edge moved into the blade server in the form of a blade switch.  Now, with server virtualization coming into prominence, that server-facing network edge has moved yet again, this time to the virtual “hypervisor” switch that connects multiple virtual machines within a server (or server blade).  Interestingly, these virtual switches have been provided by server virtualization vendors; Cisco is the first traditional networking vendor to announce plans to offer its own virtual switch product, the Nexus 1000v.

Additionally, with the emergence of cloud computing, enterprise network edges are to be extended to the cloud – sometimes deterministically and other times on demand, e.g. on a per application basis or even on a per workload basis.  And, as the network edges get re-defined, so must the network design best practices.  After a long pause, the new world of networking is getting interesting again!

PG.

Update (25 April 2009): Network World article on “Cloud computing a ‘security nightmare,’ says Cisco CEO” quoted Tom Gillis, vice president of marketing with Cisco’s Security Technology Business Unit: “The move to collaboration, whether it be video or the use of Web 2.0 technologies or mobile devices is really dissolving the corporate perimeter. This notion of security as a line that you draw in the sand… that notion is just gone.”

The Twitter phenomenon, or micro-blogging, has been quite intriguing. Though not yet a regular tweeter myself, I am told that the “aha” moment will come when I start using it actively. So I started tweeting this week on Twitter and Facebook.

As I was warming up, a new tweet popped up in my mind: what are the infrastructure implications of tweeting, in terms of HTTP connection rate, rate of new storage required, etc.? I quickly looked up Twitter stats on tweetstats.com – nearly 2 million tweets per day. What if most of the world starts tweeting using smart phones (very much like SMS today)? To get a better sense of the infrastructure needed for this human urge to tweet, I did a quick back-of-the-envelope calculation.

Assumptions:
Average Tweet Size: 100 bytes
# of Tweets: 10 per tweeter per day
# of Tweeters: 1 billion worldwide (think big!)

Infrastructure Requirements:
Tweet Rate: 10 billion tweets per day
Tweet Storage: 100 Gigabytes per day (with 10:1 compression)

Each tweet is essentially an HTTP transaction (request and response). The tweet rate of 10B/day translates to ~115K HTTP transactions/sec for tweets uniformly distributed throughout the day. Assuming that the compute infrastructure (aggregate of web, application, database servers) can process 1000 transactions/sec/server, about 115 servers are needed. If a peak to average ratio of 3:1 is assumed, then about 350 servers are needed.

Storage needs appear to be quite manageable also – 100GB/day means ~37TB/year, which is no sweat in the petabyte world we live in today.
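For anyone who wants to check or tweak these numbers, here is the same back-of-the-envelope calculation as a small Python sketch; the constants are simply the assumptions listed above.

    # Back-of-the-envelope sizing for a planet-scale tweeting service,
    # using the assumptions stated in the post (all figures are assumptions).
    TWEETERS = 1_000_000_000            # 1 billion tweeters worldwide
    TWEETS_PER_TWEETER_PER_DAY = 10
    TWEET_SIZE_BYTES = 100
    COMPRESSION_RATIO = 10              # 10:1 storage compression
    TPS_PER_SERVER = 1_000              # aggregate web/app/db transactions/sec/server
    PEAK_TO_AVG = 3

    tweets_per_day = TWEETERS * TWEETS_PER_TWEETER_PER_DAY       # 10 billion/day
    avg_tps = tweets_per_day / 86_400                            # ~115K HTTP transactions/sec
    servers_avg = avg_tps / TPS_PER_SERVER                       # ~116 servers
    servers_peak = servers_avg * PEAK_TO_AVG                     # ~350 servers

    storage_gb_per_day = tweets_per_day * TWEET_SIZE_BYTES / COMPRESSION_RATIO / 1e9
    storage_tb_per_year = storage_gb_per_day * 365 / 1_000       # ~37 TB/year

    print(f"{avg_tps:,.0f} tweets/sec, {servers_avg:.0f} to {servers_peak:.0f} servers")
    print(f"{storage_gb_per_day:.0f} GB/day, {storage_tb_per_year:.1f} TB/year")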

Net-net, setting up a tweeting service does not seem to require an onerous compute/storage infrastructure (even if people double or triple their daily tweeting). Any techie tweeters out there who can validate/correct the above?

An interesting extension of this would be to estimate capacity of handling all new thoughts of every human being on this planet!!!

PG.

Building a cloud-centric data center infrastructure demands the following canonical components:
• Connectivity – data networking, storage networking and Layer 4-7 services (e.g. firewalls, load balancers)
• Compute – servers & OS, virtualization software
• Storage – arrays/file shares for structured and unstructured data (CIFS, iSCSI, Fibre Channel-based block storage, etc.)
• Provisioning – automated, end-user driven provisioning of cloud infrastructure
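Purely as an illustration, one way to picture these components is as the fields of a single provisioning request that an automated, end-user-driven workflow would consume. The Python sketch below uses hypothetical class and field names throughout; it is not any vendor's schema.

    # Hypothetical sketch: the four canonical components expressed as a single
    # provisioning request that an automated, end-user-driven workflow could act on.
    # All class and field names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Connectivity:
        vlans: list                 # data networking
        storage_fabric: str         # storage networking, e.g. "FC", "iSCSI", "FCoE"
        l4l7_services: list         # firewalls, load balancers, ...

    @dataclass
    class Compute:
        server_count: int
        os_image: str
        hypervisor: str             # virtualization software

    @dataclass
    class Storage:
        protocol: str               # CIFS, iSCSI, Fibre Channel block storage, ...
        capacity_tb: float

    @dataclass
    class CloudInfraRequest:        # what the provisioning layer automates
        connectivity: Connectivity
        compute: Compute
        storage: Storage

    request = CloudInfraRequest(
        Connectivity(vlans=[10, 20], storage_fabric="FCoE", l4l7_services=["firewall", "load balancer"]),
        Compute(server_count=8, os_image="linux-x86_64", hypervisor="esx"),
        Storage(protocol="iSCSI", capacity_tb=5.0),
    )
    print(request)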

Multiple data center vendors are positioning themselves to provide one or more of these components. On Monday, Cisco announced its Unified Computing vision, which unifies the connectivity and computing disciplines using a holistic architectural approach. It includes a portfolio of products under a new Unified Computing System (UCS) product line.

GigaOm provided some details on the announced products. How are these products different from what is available today? A few immediate thoughts:

1. A 4/8-slot, 6-RU blade server chassis (UCS 5100 blade chassis and B-Series blades) that can take up to 8 half-size or up to 4 full-size server blades. Key notables:
• Leverages the latest Intel Xeon processor and Nehalem microarchitecture
• Each blade server utilizes a unified I/O network adapter (for Ethernet, Data Center Ethernet and FCoE connectivity). Three different network adapters are available, though it is unclear whether their interface is 10G or 1G or something else
• Memory expansion up to 384GB (no details available)
• Up to 2 fabric extenders (see below) for external fabric connectivity (in lieu of traditional blade switches)
• No separate management module!

2. Fabric extenders (UCS 2100), aka FEX, which are inserted into the blade server chassis for network connectivity. According to Nexus 2000 and 5000 literature, a FEX is a “remote I/O module” that extends the internal fabric to the external data center/cloud fabric, providing a singly-managed entity with common supervisory functions and inheriting unified fabric switch port characteristics. Though this blade FEX has four 10Gb uplinks, it isn’t clear whether the internal blade chassis fabric is 10Gb or 1Gb (like the Nexus 2148T) or something else. Of course, the key theme is operational simplicity.

3. Unified fabric switches (UCS 6100) providing 20 or 40 ports of 10GbE connectivity. Key notables here are that these switches natively support unified I/O (consolidation of Ethernet and Fibre Channel) via Data Center Ethernet (DCE) and FCoE, plus they enable port extension to UCS 2100 FEXes – very much like the Nexus 5000 switch family.

4. UCS Manager that manages the unified computing infrastructure, up to 320 discrete servers. One potential configuration could be: 40 blade chassis, each with 8 half-size blade servers (total 320 servers), connected to a single UCS 6100 fabric switch (or an HA pair) utilizing one 10G port per switch per chassis; see the quick sizing sketch after this list. By addressing operational complexity head on across multiple discrete products, Cisco intends to reduce cost and increase operational agility of data centers and cloud infrastructure – key end user care-abouts.

5. Because server virtualization is central to the unified computing vision, the virtual “hypervisor” switch – Nexus 1000v – has to be an integral component, as does the ability to expose VMs to the blade FEX/fabric switches via VN-link technology. These technologies ensure that consistent network policies can be applied across the VM infrastructure, even during VM migration, and that the entire process can be managed centrally. It would be natural to offer this functionality as a pre-configured option for UCS blade servers.
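As a quick sanity check on the sizing in item 4, here is the arithmetic as a tiny Python sketch; the chassis, blade and port counts simply restate the example above and the announcement coverage, not verified specifications.

    # Sizing sketch for the example UCS configuration in item 4 (assumed figures).
    CHASSIS = 40
    HALF_SIZE_BLADES_PER_CHASSIS = 8
    UPLINKS_PER_FABRIC_SWITCH_PER_CHASSIS = 1   # one 10G port per switch per chassis
    UCS_6100_PORTS = 40                         # the 40-port fabric switch variant

    servers = CHASSIS * HALF_SIZE_BLADES_PER_CHASSIS                    # 320 servers
    ports_used = CHASSIS * UPLINKS_PER_FABRIC_SWITCH_PER_CHASSIS        # 40 of 40 ports

    print(f"servers managed: {servers} (the stated UCS Manager limit)")
    print(f"10G ports used per UCS 6100: {ports_used} of {UCS_6100_PORTS}")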

Overall, this is a cool architecture-centric product offering for next generation data center and cloud computing infrastructure, consisting of an end-to-end, centrally managed solution. No doubt Cisco has upped the ante in the data center. It’ll be interesting to see how other data center vendors respond – via their own product innovations, M&A activities and/or partnership re-alignments.

PG.

There has been an amazing innovation race between the connectivity (i.e. networking) and computing disciplines over the last 50-60 years.  Generally speaking, end nodes are the “computing” elements that interact with each other over a “connectivity” fabric.  Together, they have fundamentally changed the way we live, work, play and learn.  As Cisco gets ready to launch its “Unified Computing” vision on Monday, it is worth looking at how these two disciplines have evolved and where they are headed.

First it was computing that led the way in the 60s, 70s and 80s with innovations in mainframes, minicomputers and personal computers.   Connectivity was nothing more than slow serial lines that allowed dumb terminals to connect to computing brains.  Next it was connectivity’s turn in the 80s and 90s, with standards-based advancements in Ethernet, IP, TCP/UDP and HTTP that gave rise to client-server computing and our beloved Internet.  Connectivity not only caught up nicely; it began to lead.

Then we entered a new decade of a new century, where computing started out with a bang, led by server virtualization and workload mobility.   Sure, there was 10GbE to brag about on the connectivity front, but that was just speeds and feeds – no new intelligence added to the network.  It seemed that the edge of the connectivity fabric was getting abstracted away, absorbed by the computing heads.  Even if one were to discount the blade switch (the access layer sucked into a blade server), the rise of the software “hypervisor” switch seemed a real shocker to the packet heads.

Where is my network edge?  How do I apply network services if the packets don’t come out?  Who owns the hypervisor switch?  How do I maintain network policies if workloads move?  Is the network becoming “dumb” pipes?

In the past those who bet against networking always lost!  Will that trend continue?  Will virtual machines, inter-VM interactions and VM movements be exposed to the connectivity fabric?  And in a standards-based manner for mass-scale deployment that is interoperable across vendors?  The answer needs to be “Yes” and is so proclaimed by the king of connectivity.  Let’s see how the future unfolds… Perhaps it is no longer a race between connectivity and computing, but instead a more symbiotic “connected computing” relationship!

PG.


