For [livejournal.com profile] bootedintexas

Jul. 21st, 2010 04:03 pm
bjarvis: (Olympus SP-500 UZ)
[personal profile] bjarvis
Earlier today, I commented on FB that I was at the data center slamming blades into a chassis and imaging them with Linux. [livejournal.com profile] bootedintexas suggested photos to explain what on earth I was talking about so...


To start with, this is a Hewlett Packard blade server:

[photo: an HP blade server module]

This little piece weighs about 20 lbs. It's about 16 inches long, about six inches wide and about two inches deep. This single module is nearly a full computer in its own right: this particular unit has two CPUs, 16GB of RAM, a 72GB hard drive & disk controller and such. What it does not have is a power supply, a keyboard, a monitor or network ports.

Here's the same unit opened for the world to see:

[photo: the blade with its cover removed]

One can see the disk in the bottom right and the eight parallel memory modules in the upper middle. Of particular note is the adapter in the rear of the blade at the extreme left...

This is a blade chassis:

[photo: a blade chassis with two rows of bays]

This particular chassis can hold 16 blades, eight in each row. You can see the upper row is filled. The lower row has blades occupying the first four bays, the next empty bay is uncovered, and the last three are vacant but have covers inserted. Why covers? These blades generate a lot of heat: air flow to cool the components is critical.

The extra blade in the earlier photos would be inserted into that empty bay. The chassis itself has heavy-duty redundant power supplies, redundant network connections, redundant keyboard & mouse ports and redundant monitor ports, and each blade shares these communal resources via the adapter at the rear of the blade. I can run a different operating system on each blade: Windows on one, Solaris x86 on another, Red Hat Linux on yet another, etc. The hardware doesn't care what the operating system might be.

This configuration takes a lot less space than separate stand-alone units with all of their individual support components. Can you imagine how much space 16 monitors & keyboards would take? Even with a shared keyboard & monitor, we would need a lot of cables. A single chassis also typically uses less power than sixteen stand-alone units, and I can power down individual blades when they're not needed to save further power & air conditioning. I love the amount of redundancy built into blade systems: the single most important part of my job is keeping systems running continuously, at least from the perspective of our customers. All told, I have five HP chassis and four IBM models.
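
That power-down step is scriptable, since the chassis's management module accepts commands over SSH. Here's a minimal sketch using Python's paramiko library; the hostname, credentials and the "POWEROFF SERVER <bay>" syntax are assumptions modeled on the HP Onboard Administrator CLI, so check your own firmware's documentation before trusting it.

    import paramiko

    def run_chassis_command(host, user, password, command):
        """Run one command on the chassis management module over SSH."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)
        try:
            _stdin, stdout, _stderr = client.exec_command(command)
            return stdout.read().decode()
        finally:
            client.close()

    # Power down the blade in bay 12 when it's not needed.
    # Hostname, account and command syntax are all assumptions here.
    print(run_chassis_command("chassis-oa.example.com", "Administrator",
                              "hunter2", "POWEROFF SERVER 12"))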


Today, I received five blades from the mothership in California. I upgraded each to 16GB of RAM, placed each into available bays in the chassis, labeled them appropriately and installed & configured Ubuntu Linux on each.
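
A quick way to sanity-check a batch of freshly imaged blades is to poll each one over SSH for its memory and visible disks. Here's a rough Python sketch; the hostnames are made up and it assumes SSH keys are already in place, but the /proc/meminfo and lsblk checks are standard Linux.

    import subprocess

    # Hypothetical hostnames for the five freshly imaged blades.
    BLADES = ["blade09", "blade10", "blade11", "blade12", "blade13"]

    def check_blade(host):
        """SSH in and report total memory and which local disks are visible."""
        mem = subprocess.run(["ssh", host, "grep MemTotal /proc/meminfo"],
                             capture_output=True, text=True, check=True)
        disks = subprocess.run(["ssh", host, "lsblk -d -o NAME,SIZE"],
                               capture_output=True, text=True, check=True)
        print(host, "->", mem.stdout.strip())
        print(disks.stdout.strip())

    for blade in BLADES:
        try:
            check_blade(blade)
        except subprocess.CalledProcessError as err:
            # A blade that can't report its disk shows up here
            # instead of silently passing.
            print(blade, "check failed:", err)

A check like this would flag a blade that isn't seeing its local disk before it ever goes into service.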

Alas, one of the blades isn't recognizing its local disk so I'll have to open a service call. Still, despite one dead blade, the others chug along as they should so I'm a happy camper.

Date: 2010-07-21 08:18 pm (UTC)
From: [identity profile] bootedintexas.livejournal.com
i now know what IT pron is...thank you

Date: 2010-07-21 10:06 pm (UTC)
From: [identity profile] bjarvis.livejournal.com
It's what pays me the big bucks. :-)

Date: 2010-07-21 09:09 pm (UTC)
From: [identity profile] pklexton.livejournal.com
Instructive. Thanks!

Date: 2010-07-21 10:06 pm (UTC)
From: [identity profile] bjarvis.livejournal.com
Maybe sometime in the future I'll say more about the other systems I look after: storage arrays, network routers & switches, console servers, virtual servers, etc.

I'm sure it would be a useful sleep aid for any bouts of insomnia. :-)

Date: 2010-07-21 11:09 pm (UTC)
From: [identity profile] cpj.livejournal.com
Ah, HP C-Class blades. I see you've already discovered what the average component failure rate is. :)

Date: 2010-07-22 03:33 am (UTC)
From: [identity profile] theoctothorpe.livejournal.com
not to be confused with the Hep-C blades ;-)

Date: 2010-07-22 03:47 am (UTC)
From: [identity profile] bjarvis.livejournal.com
Generally, I haven't had too many issues with the HP blades. They definitely don't travel well: it seems every shipment of blades I've received from the mothership has at least one problem child in the lot. Once they're functional though, they tend to stay functional --I haven't had an operational HP blade die on me yet.

Would that I could say the same for my old Sun x4600 servers... :-^

And it's just as well the HP blades tend to behave themselves: their customer service sucks bilge water.

Date: 2010-07-22 10:01 am (UTC)
jss: (badger)
From: [personal profile] jss
Agreed. Of my 16 blades that're all less than 2 years old, I've blown and replaced 3 system boards (and one of the replacements was DOA) and had to replace a disk controller. The fact that following procedures that Support gave me caused at least 2 if not all 3 of the system board failures helps illustrate your last point. At least we're at the point where I can say to the Support weenie, "Here's the problem, here's what I did to identify the cause, and we've got next-day service, so a technician with the correct replacement parts will be here tomorrow morning. Schedule it."

But I love dealing with blades rather than self-contained servers. Not having to manage cables alone (dual power, plus two network, plus KVM, plus any fiber to the SAN) is a huge win.

Date: 2010-07-22 07:57 am (UTC)
vasilatos: neighborhod emergency response (Default)
From: [personal profile] vasilatos
I retired before blades, but I love them. :-)
