Received a lot of requests to write about Blade systems, so here you go! The way server technology is evolving, Blade systems may soon become history, with some open server architecture being adopted by all major hardware vendors. Our discussion is around x86 Blade systems, not Power or SPARC.
Facebook & Google are two big examples where Blade systems are not used; they are encouraging open system architectures. For more details, have a look at the following articles.
Alright!! All major hardware vendors (HP/Cisco/Dell/IBM/Sun Oracle etc.) offer a wide range of Blade systems with some common and some differing feature sets. This post will focus on Blade system architecture, followed by offerings from different vendors in subsequent posts.
Any Blade system you talk about is made of more or less the following components (a small sketch modelling these pieces follows the list):
- Chassis :- Consider this as an empty box, 8 to 10 rack units in height, which is the building block of the entire system.
- BackPlane :- This component is assembled inside the chassis to provide a high-speed I/O (input/output) path from the Blade servers to the I/O bays.
- Bays :- Consider these as slots where you install the blades. Bays can be configured to allow full-height blades, half-height blades, or a mixture of both.
- I/O Interconnect Bays :- These are again empty slots where you install switches (Fibre Channel or Ethernet) to connect the Blade servers with external Fibre Channel or Ethernet networks. Unlike rack servers, which connect directly to the Fibre Channel or Ethernet network, Blade servers connect to the high-speed backplane, which in turn connects to the I/O bays - the switches installed inside the I/O bays then provide the external connectivity.
- Blades :- Well, it's the actual compute power that you install in the bays. They are called blades because the form factor is highly dense and takes up very little space.
- Fan :- No Need to Explain
- Power Supply :- No Need to Explain
- Management Modules :- Consider these as devices that allow you to manage all the components we talked about above.
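To tie the pieces together, here is a minimal sketch in Python. The class and field names are purely illustrative assumptions for this post - they don't map to any vendor's actual product or API - but they show how a chassis ties together blade bays, I/O interconnect bays, fans, power supplies and management modules over the backplane.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical model of a generic blade chassis - names are illustrative only.

@dataclass
class Blade:
    name: str
    height: str          # "full" or "half"
    cpus: int
    memory_gb: int

@dataclass
class InterconnectSwitch:
    kind: str            # "ethernet" or "fibre_channel"
    ports: int

@dataclass
class Chassis:
    height_units: int                               # typically 8-10U tall
    blade_bays: List[Optional[Blade]]               # compute slots
    io_bays: List[Optional[InterconnectSwitch]]     # interconnect slots
    fans: int
    power_supplies: int
    management_modules: int

    def install_blade(self, bay: int, blade: Blade) -> None:
        """Put a blade into an empty bay; the backplane links it to the I/O bays."""
        if self.blade_bays[bay] is not None:
            raise ValueError(f"bay {bay} already occupied")
        self.blade_bays[bay] = blade

# Example: a 10U chassis with 16 half-height bays and 2 redundant Ethernet switches
chassis = Chassis(
    height_units=10,
    blade_bays=[None] * 16,
    io_bays=[InterconnectSwitch("ethernet", 32), InterconnectSwitch("ethernet", 32)],
    fans=10,
    power_supplies=6,
    management_modules=2,
)
chassis.install_blade(0, Blade("blade-01", "half", cpus=2, memory_gb=256))
```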
Too much text :) - a picture speaks for itself, so let's understand all these components using the HP BladeSystem.
Other vendors follow more or less the same architecture, except Cisco. Having a management module and I/O switches in every chassis increases both management overhead and cabling, which is why Cisco splits the management module and I/O switches out of the chassis. They further merge the management module into the I/O switch itself to reduce the number of devices. This design increases efficiency by sharing I/O switches across multiple chassis, which is not possible when the switches are mounted inside the chassis. So let's understand this design with an example (a quick count sketch follows the list):
- Consider 2 Cisco Chassis
- Consider 2 other chassis that follow the integrated I/O switch & management module architecture
- For Cisco, you need only 2 I/O switches (redundant); you don't need management modules as they are integrated with the I/O switches.
- For the others, you need 4 I/O switches (2 in each chassis for redundancy) + 4 management modules (2 in each chassis for redundancy)
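To make the count concrete, here is a small sketch that tallies devices as the number of chassis grows. It assumes the simplified rules from the example above - a single redundant pair of external, shared I/O switches for the Cisco-style design versus 2 switches plus 2 management modules per chassis for the integrated design - so treat it as an illustration, not a sizing guide.

```python
# Device counts under the two (simplified) designs described above.

def cisco_style(chassis_count: int) -> dict:
    # One redundant pair of external I/O switches is shared by all chassis,
    # and management is integrated into those switches.
    return {"io_switches": 2, "management_modules": 0}

def integrated_style(chassis_count: int) -> dict:
    # Each chassis carries its own redundant pair of switches and
    # its own redundant pair of management modules.
    return {"io_switches": 2 * chassis_count,
            "management_modules": 2 * chassis_count}

for n in (2, 4, 8):
    print(n, "chassis -> cisco:", cisco_style(n), "| integrated:", integrated_style(n))
```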
This is just an example; you will find a more detailed comparison in upcoming posts.
The next post is about the Cisco UCS Blade system, followed by other vendors!!
Cheers!!
Pankaj