
I'm writing this on my flight from Phoenix back to the DC area. We've been in the air about a half-hour now, and this is the first breathing space I've had in nearly three days.
I had flown to Phoenix to meet with several others from my firm and to visit three candidate data centers to replace our Savvis SC4 facility in Santa Clara, CA. The SC4 site is one of the oldest data centers in Silicon Valley and, frankly, it looks it: the air conditioning is subpar, the internal power distribution is insufficient for current-day server densities, and so on. The place needs to be gutted and renovated from the ground up. We have our non-production machines housed there, and while that was adequate in the past, our future plans require more than SC4 can offer.
Why Phoenix? We already have offices and staff there. There are a number of competing data centers, all cheaper than anything we could find in the Bay Area. The climate lacks the hurricanes of the east coast, the earthquakes of the west coast, the tornadoes of the central south and the terrorist threats of the capital region. Still, I thought it was a bit insane to stuff equipment requiring heavy air conditioning into a desert, but Phoenix has become a regional nexus for a vast number of cross-continental optical fibre networks: connectivity is easy and cheap here.
The folks in the contracting & purchasing division of our firm had already selected candidate sites; they and my upper management would be making decisions largely weighted by the cost of the facility and the overall negotiated package. My purpose was to look at how my systems engineering team would deploy, maintain and decommission equipment, and to gauge our degree of satisfaction that each facility met our minimum power & network redundancy requirements.
After a flurry of meetings and site visits Monday and Tuesday, I'm comfortable putting our equipment into any of these sites: they're well built, well connected, accessible with appropriate levels of security, and they meet our payment card compliance requirements. That said, each site had interesting quirks of its own.
The first location was CyrusOne, a new facility still under construction in Chandler, AZ. One bay is completed and ready for occupation: if we moved in, we could be among the first. The front office space is still being built out but will be done within a month. Because it's a new building, they've been able to build in a number of interesting architectural features to help channel hot air away from the floor into peaked roofs, where the air is collected, chilled and recirculated into pressurized side-wall containments, which then channel the cool air to the pressurized floor. Rainwater is collected for use in the chillers. Solar panels are being considered for supplementary power. It'll be a gorgeous, sexy building when it is completed.
Next, we looked at IO in Phoenix proper. The facility was originally built as a designer bottled water plant, although that firm went bankrupt, killed by its own fraudulent accounting. IO picked up the building for a song and rebuilt the factory floor into an extraordinary data center facility. Wisely, they kept the stunningly opulent lobby and front offices, some of which they rent out to firms with servers on the data center floor. Walking into the building was like walking into a luxury New York hotel.
The building is just down the street from the Van Buren street fibre nexus. Indeed, it's close enough we could get huge bandwidth with only two cans connected by string. Oddly, while we can make connections through this massive meeting point to nearly every major telecom player in North America, Comcast isn't one of them: they don't serve Arizona.
The data center itself has two flavours: the older “phase one,” a traditional data center floor with individually walled cages for each client, and their newer and vastly preferred “phase two” area, which uses their proprietary modules. These modules look like stainless steel cargo containers: one can install 14 cabinets down the center of the container, with power & network connections fed from overhead trays. The air conditioning and power distribution systems are in separate lockers under the floor, which allows maintenance to be performed on a module without ever entering the customer space above. Each module has its own fire control system and its own entry lock system (your choice of physical key, badge, fingerprint scanner, iris scanner or any combination thereof). While these modules are all inside a common warehouse, they're designed to be weather resistant and could be deployed remotely as needed. IO also offers a larger version we affectionately called the “double-wide,” but that's larger than we need.
The third facility is a Digital Realty site located next door to the previously mentioned CyrusOne location in Chandler, AZ. It's an older building, originally constructed as a data center by a financial securities firm for their own use. Of all of the sites, this was easily the most bland and unglamorous. That said, it also probably has the longest operating history and the best uptime record of all of the facilities we examined. Our cage there would look largely like our existing facilities: a standard cage on the data center raised floor inside a nondescript building. What caught our eye with DRT, however, is how transparent and open they are about their maintenance procedures and their logs. They were willing to let us look through any of their log books to examine their past preventative maintenance, corrective actions, root cause analyses and more.
As an older facility, DRT's site wasn't up for LEED consideration, and they use an open water system for their chillers, taking water from the municipal supply and returning it after use. I've been burned by that kind of dependency on a constant outside water source at a previous employer and am not keen on repeating it. That said, they do maintain two 60,000-gallon tanks of their own as backup and, despite being in operation for years, have never had to tap their stocks.
DRT's facility, like all the others, has onsite generators for backup power. They own their own electrical substation instead of simply having the utility's substation on their premises, as CyrusOne does. Since generators take a few seconds to spin up and reach the required power output, all of these facilities have some sort of bridging power, typically a huge bank of batteries sized to carry a megawatt-scale load for up to 30 seconds. DRT offered a choice between batteries to bridge the power gap or Hitec flywheel generators. These are 12,000-pound flywheels kept in constant motion while external power is available: once external power is cut off, the flywheels continue rotating and begin acting as generators themselves while the diesel generators rev up. Personally, I prefer batteries: they're easier to check remotely to ensure they're charged and ready, and they're trivial to replace as they wear out. A 12,000-pound flywheel isn't exactly trivial to fix or replace when the bearings go.
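For a sense of scale, here's a quick back-of-the-envelope sketch of what that bridge actually has to cover. The load and spin-up figures are my own illustrative assumptions, not numbers quoted by any of these facilities; the point is that the batteries or flywheel need to hold surprisingly little energy, but have to deliver it at megawatt rates.

```python
# Back-of-the-envelope bridging-power math. The load and generator
# spin-up figures are illustrative assumptions on my part, not numbers
# quoted by CyrusOne, IO or DRT.

critical_load_mw = 2.0   # assumed critical load to carry, in megawatts
bridge_seconds = 30      # assumed worst-case generator start + transfer time

# Energy the batteries or flywheels must deliver while the diesels rev up.
bridge_energy_kwh = critical_load_mw * 1000 * bridge_seconds / 3600

print(f"Bridging {critical_load_mw} MW for {bridge_seconds}s takes "
      f"about {bridge_energy_kwh:.1f} kWh")
# ~16.7 kWh -- not much stored energy at all, but it has to come out at
# a 2 MW rate, so the battery strings (or a 12,000 lb flywheel) are
# really sized for power delivery, not storage capacity.
```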
At the moment, I have no idea which data center we'll go with: it'll be 80% based on the bottom-line price for each. If Savvis plays ball with us, we might stick with SC4 for one more year but would hit the road in June of 2014. If Savvis doesn't play, we're outta there by June of this year. Exciting times!