I hope you have read Part-1 of this series. Today, we are going to talk about Cisco UCS (Unified Computing System). Cisco's journey in the x86 server market started back in 2009, and it is now giving tough competition to the other vendors in this space.
You will find all kinds of fancy articles/blogs on UCS, but I would like to keep it simple, because that's how I understand it.
So, let's first understand the building blocks of UCS, and then we will deep dive into the architecture/design and other stuff.
Cisco UCS Components
1. Chassis :- Cisco offers the 6U 5100 series chassis, which can house 8 half-width or 4 full-width B-Series blades. A fully populated chassis would require from 3.5 to 4 KVA of power; if you want to calculate the power requirement accurately, click here to get the Cisco UCS Power Calculator. A total of 8 fans and 4 power supplies provide power & cooling to the system. From the density perspective, the 5100 series does not look as attractive, since other vendors can offer 8 to 16 blades in a single chassis. Click here to get more details.
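To put those power figures in perspective, here is a minimal Python sketch based on the 3.5-4 KVA range quoted above. The numbers are illustrative only; use the Cisco UCS Power Calculator for real sizing.

```python
# Rough power-budget estimate for a UCS 5100 series deployment.
# The 3.5-4 KVA per-chassis range comes from the text above; treat
# these numbers as illustrative and use the Cisco UCS Power
# Calculator for real sizing.

KVA_PER_CHASSIS_MIN = 3.5
KVA_PER_CHASSIS_MAX = 4.0
BLADES_PER_CHASSIS = 8  # half-width blades

def rack_power_kva(chassis_count: int) -> tuple[float, float]:
    """Return (min, max) KVA for fully populated chassis."""
    return (chassis_count * KVA_PER_CHASSIS_MIN,
            chassis_count * KVA_PER_CHASSIS_MAX)

if __name__ == "__main__":
    lo, hi = rack_power_kva(4)  # e.g. four chassis in a rack
    print(f"4 chassis: {lo:.1f}-{hi:.1f} KVA "
          f"({lo / (4 * BLADES_PER_CHASSIS):.2f} KVA/blade at the low end)")
```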
Let's quickly understand the chassis architecture with the help of a picture.
Front View of 5100 Chassis
Rear view
2. Fabric Extender :- The Cisco UCS chassis has 2 I/O slots (remember the I/O bays from my previous post). There is no intelligence built into the Fabric Extender, and therefore I call it a pass-through device. Dual fabric extenders provide a total of 80 Gb of converged (LAN + SAN) bandwidth to the blade servers. With the release of the 2nd generation of Cisco UCS, the Fabric Extender (2208) can scale up to 8x10 Gb from each module, which means a total of 160 Gb of converged bandwidth to the blade systems. Each Fabric Extender provides 16 to 32 downlink ports to connect with the blade servers via the backplane, and 4 to 8 uplink ports to connect with the external world. The uplink ports are populated with 10 Gb SFP+ transceivers.
3. Fabric Interconnect :- Too many interconnects :). Well, this is where Cisco is different from other vendors. The Fabric Interconnect is the first intelligent hop through which the UCS chassis connects to the external LAN or SAN fabric. Then you would ask, what is the Fabric Extender doing? :P.. Like I said, the Extender is doing nothing; to keep it simple, it is more or less like a port that you see on rack servers, the only difference being that you can't plug in RJ-45. You need a twinax cable to connect the Fabric Extender with the Fabric Interconnect. So how is the Cisco offering different? Well, the answer is simple: with a Fabric Interconnect you can connect multiple UCS chassis, and since all sorts of management is done from this device alone, you don't need to think about multiple consoles. From the overall LAN infrastructure landscape perspective, you are replacing the access layer switches (the switches where your servers would generally connect; remember the top-of-rack or end-of-row topology). So in simple words, if you are going with HP, you need to buy a blade switch which will fit into the I/O slot, then buy a top-of-rack switch (Cisco/HP/Brocade), and then connect to the aggregation layer. I hope you are aware of the conventional 3-layer (Access, Aggregation/Distribution & Core) network architecture. You need a pair of Fabric Interconnects for redundancy.
Having said that, Cisco offers the following Fabric Interconnects (see the sketch after the list for what these port counts mean for chassis scale):
- 6100 series :- First generation Fabric Interconnects, available in 2 models
- 6120 :- 20 ports
- 6140 :- 40 ports
- 6200 series :- Second generation Fabric Interconnects, available in 2 models
- 6248 :- 48 ports
- 6296 :- 96 ports
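Here is the rough port math behind those numbers; the ports reserved for uplinks and the links-per-chassis value are my illustrative assumptions, not Cisco sizing guidance.

```python
# How many chassis can a Fabric Interconnect host? Rough port math:
# each chassis consumes N server-facing ports per FI (one per FEX
# uplink), and some FI ports must be kept for LAN/SAN uplinks. The
# reserved count below is an illustrative assumption, not Cisco guidance.

def max_chassis(fi_ports: int, links_per_chassis: int,
                reserved_uplinks: int = 4) -> int:
    """Upper bound on chassis per FI, leaving some ports for uplinks."""
    return (fi_ports - reserved_uplinks) // links_per_chassis

for model, ports in [("6120", 20), ("6140", 40), ("6248", 48), ("6296", 96)]:
    print(f"{model}: up to {max_chassis(ports, links_per_chassis=2)} chassis "
          f"at 2 links/chassis")
```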
So let's understand with a diagram how and where this device fits into the UCS system.
As we can see in the diagram, the Fabric Interconnect is the first device to which the UCS chassis connects with a twinax cable. There are no additional cables for FC, because all the traffic between the UCS chassis and the Fabric Interconnect is converged (FCoE), which means a single cable carries both Fibre Channel and Ethernet packets, unlike traditional servers where you need an additional cable for FC. Depending on the customer requirements and the scale of the deployment, you can decide to split the Ethernet & FC traffic out at the Fabric Interconnect or continue with a converged network for LAN & SAN. Having said that, I have not shown any connections beyond the Fabric Interconnect; I will write another article around designing an FCoE network in the context of UCS and other blade centers.
4. UCS Blades :- Cisco offers a variety of blades; what makes Cisco different from other vendors is that they offer Extended Memory (sales pitch). HP can provide more memory support with its Gen8 series blades. For more information on Cisco blade server offerings, click here.
5. I/O Devices :- Fancy words, but it's basically NICs & HBAs. All major vendors are now shipping servers with CNA cards. Many people get confused by CNAs; folks, it's simply a consolidated chip with 2 controllers built on top of it (the 1st for the NIC, e.g. Broadcom; the 2nd for the HBA, e.g. QLogic or Emulex). These are the well-known vendors, but you may find CNAs with NIC/HBA silicon from other vendors too. A very common question is: if it's a single integrated chip, how much bandwidth is allocated to the NIC and the HBA? Very simple: on a 10 Gb CNA, 4 Gb is hardcoded for the HBA and the remaining 6 Gb is for the NIC. I know some people would say, "we will decide this allocation using hardware-based QoS in UCS" :)... My friend, go back and double check: QoS reserves 40% of the 10 Gb, so you are only left with 60%, which is 6 Gb.
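A quick sketch of the split described above; the 40% FC reservation comes straight from the paragraph, and the rest is just arithmetic.

```python
# Bandwidth split on a converged network adapter (CNA), per the
# paragraph above: a fixed share of the 10 Gb pipe is reserved for
# FC (HBA) traffic and the remainder goes to Ethernet (NIC).

def cna_split(total_gb: float = 10.0,
              fc_share: float = 0.40) -> tuple[float, float]:
    """Return (fc_gb, ethernet_gb) for a CNA of total_gb bandwidth."""
    fc_gb = total_gb * fc_share
    return fc_gb, total_gb - fc_gb

fc, eth = cna_split()
print(f"FC/HBA: {fc:.0f} Gb, Ethernet/NIC: {eth:.0f} Gb")  # 4 Gb / 6 Gb
```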
By virtue of server virtualization, a new trend has started: consolidate network/storage/NIC, everything. Having said that, remember those days when you needed 6 NICs on a server and there was no PCI slot available to install an additional card? Talk to people who are working on server virtualization projects; they need 8-10 NICs due to the number of VLANs and the traffic isolation required for the underlying virtual infrastructure (VMware FT, vMotion, Storage vMotion, Microsoft CSV, cluster heartbeat and many more).
So in simple words, what we need is more I/O interfaces (ports) with fewer PCI or mezzanine cards. So what is the answer? Well, build a mezzanine card and use PCIe functions (SR-IOV) to chop it into smaller pieces and fool the OS and hypervisor. Single Root I/O Virtualization is a huge topic in itself and lots of improvement is happening in this space; I am planning to write an article purely focused on I/O virtualization.
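To give a flavor of what "chopping a card into pieces" looks like from the OS side, here is a minimal sketch using the standard Linux sysfs interface for SR-IOV; this is kernel-generic, not UCS-specific, and eth0 is a placeholder interface name.

```python
# Minimal sketch of enabling SR-IOV virtual functions (VFs) on Linux
# via the standard sysfs interface (kernel-generic, not UCS-specific).
# Run as root; "eth0" is a placeholder interface name.
from pathlib import Path

def enable_vfs(iface: str, num_vfs: int) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())  # card's VF limit
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # Writing N asks the physical function's driver to spawn N
    # lightweight PCIe functions, each of which the OS/hypervisor
    # then sees as its own NIC.
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs("eth0", 8)
```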
So what does Cisco offer? They offer something called the "Virtual Interface Card", nicknamed the "Palo adapter"; it's a single chip which can scale up to 256 NICs/HBAs depending on the model. They do offer traditional cards with 4 and 2 ports as well. The complete portfolio is available here.
6. UCS Manager :- Last but not the least, this is the software component which is built into the Fabric Interconnect and runs in HA mode. UCS Manager is built on a very modular architecture which is capable of abstracting the hardware identity. I could write a 2-page article on UCS Manager, but I think that's not in the scope of this post.
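To give a feel for what "abstracting the hardware identity" means, here is a purely conceptual Python sketch: identities live in pools, a profile draws from them, and any physical blade can assume that identity. The class names and pool values are mine for illustration and do not reflect UCS Manager's actual object model.

```python
# Conceptual sketch of identity abstraction: MACs/WWNs live in pools,
# a profile draws from them, and any blade can assume that identity.
# Names and values are illustrative, not UCS Manager's object model.
from dataclasses import dataclass

@dataclass
class IdentityPool:
    addresses: list[str]
    def draw(self) -> str:
        return self.addresses.pop(0)

@dataclass
class ServiceProfile:
    name: str
    mac: str = ""
    wwn: str = ""
    def apply_to(self, blade_slot: str) -> None:
        print(f"{self.name}: MAC {self.mac}, WWN {self.wwn} -> {blade_slot}")

mac_pool = IdentityPool(["00:25:B5:00:00:01", "00:25:B5:00:00:02"])
wwn_pool = IdentityPool(["20:00:00:25:B5:00:00:01", "20:00:00:25:B5:00:00:02"])

web01 = ServiceProfile("web01", mac=mac_pool.draw(), wwn=wwn_pool.draw())
web01.apply_to("chassis1/slot3")  # the identity follows the profile,
web01.apply_to("chassis2/slot5")  # not the physical hardware
```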
So these are the major components which make up the Cisco UCS system. The Fabric Interconnect, UCS Manager & Palo adapters require another post, and hopefully you will find some posts around them soon.
Thanks for your time, and I wish you a great weekend ahead!
Cheers!!
Pankaj