
Entries in Data Center (9)

Monday, May 7, 2012

An (Electronic) Alternative to Data Center Construction

In these posts, we’ve covered a variety of ways to extend the life of a data center, or to consider co-location.  All these topics assume the load or demand from the data center equipment is fixed, and that the supply (space, cooling or power) must accommodate the demand.

One client looked at this opportunity a different way and came out with an intriguing alternative.

This client has a data center in a downtown office building.  The data center was hemmed in on all sides, cooling improvements were proving costly, and the lease expiration on the building was on the horizon.

This client had already migrated to blade chassis and virtualized 75% of their environment.

They took the next step and addressed their storage, too, implementing flash drives in place of traditional spinning disk.

This client dropped storage heat load by 50%!  Flash drives offer roughly 50% lower heat output and consume 55% less energy than spindle-based drives, and the client realized a nine-fold increase in I/O bandwidth. (This helped them consolidate workloads onto fewer disks and cut rack space for storage by 52% as well.)
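
For those who like to see the arithmetic, here is a minimal back-of-the-envelope sketch in Python applying those reduction factors to a hypothetical baseline; the baseline numbers below are placeholders, not the client's actual figures.

```python
# A back-of-the-envelope sketch of the storage savings described above.
# The baseline figures are hypothetical placeholders, not the client's actual data.

baseline = {
    "heat_load_kw": 40.0,   # assumed heat load of the spinning-disk storage estate
    "energy_kw": 45.0,      # assumed electrical draw
    "rack_units": 200,      # assumed rack space consumed
}

# Reduction factors quoted in the post: ~50% less heat, ~55% less energy,
# ~52% less rack space after consolidating onto flash.
with_flash = {
    "heat_load_kw": baseline["heat_load_kw"] * (1 - 0.50),
    "energy_kw": baseline["energy_kw"] * (1 - 0.55),
    "rack_units": baseline["rack_units"] * (1 - 0.52),
}

for metric, before in baseline.items():
    after = with_flash[metric]
    print(f"{metric}: {before:g} -> {after:g} ({(before - after) / before:.0%} reduction)")
```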

They were able to leverage this additional bandwidth to meet their business goals with far fewer disks, each of which also produced less heat.

They found the capital expenditure for this project was less than expanding the data center cooling capacity, given the horizon for retiring the data center. Furthermore, this project supported the consolidation of the storage environment, easing the eventual transition to a new data center.

This client found flexibility and adaptability to be the most critical design elements for a data center. When you're faced with constraints and want to look outside the box, take a different look at what's inside your (data center) box first!

What would you do if faced with a similar set of circumstances?

Monday, October 10, 2011

Infrastructure Cabling Brought Down to Size

A friend asked for some help neatening up a server closet in a small business. 

“The place is a mess, I’m almost embarrassed to have you here.”  

“We can make it better.  Are there any times you can be down?”

“Every Sunday afternoon, after 1PM.”

What better place to be than a server closet on a Sunday afternoon?

This business is located in what was probably once a grand house.  The business needs had eclipsed some of the grander parts of the house.

The server closet was as advertised: a (former) closet.


It turned out the situation wasn’t as grim as projected.  Over time, the people doing work in the closet had kept to ~80% of a reasonable wiring standard:

  • High voltage (power) was kept at one level, and in one color
  • Data cabling was blue
  • Analog phone cable was white
  • Equipment was everywhere

To address this in an afternoon meant using what we had on hand, and not doing a rip and replace.

Starting with high voltage, we quickly separated the two UPS systems and placed them on separate power feeds in the room.  I'm not a fan of small UPS units sitting right on the floor (water is my fear, whether from a minor leak, a mop, or a spill), so one got mounted to the backboard and the other placed on a shelf.  All power cables were then run along the same path, with 90-degree turns, and cable-tied together.

The phone cabling was in pretty good shape, although the cable management could be improved.  Over time, every bundle had ended up with its own set of cable ties.  We cut those and bundled the runs together.

The data side needed a bit more work.  We literally unplugged and reran all the data cabling within the closet.

Any cable not being used was removed.  This is key in large data centers and small closets alike: take the time to remove unused cables, whether power, data, or voice.

In the course of this exercise, we did remove the clothes rod (it really had been a closet) and swept up.  We also identified and re-enabled an exhaust fan for the room, providing some external venting for the equipment, and re-programmed the external building sign light timer.

After a few hours, we had everything back up and running and in neat order.  This will pay dividends over time with improved maintainability. 

Hats off to the Verizon FiOS and Security System company…all their cable runs were neat, tidy, and buttoned up nicely.

This is a matter of discipline, and a case where large company approaches can be “rightsized” for smaller firms, paying benefits over time.  Any company can do it.

Monday, October 3, 2011

Evaluating Co-Location Data Centers

Certainly by now you know I’m a fan of co-location.  Considering co-location is a must for any organization with a data center need under 20,000 square feet.

Having determined requirements and engaged co-location providers with an RFP, it’s now time to narrow the focus and make a decision.

Using an evaluation matrix is a useful way to narrow the herd.   In the first column, list your evaluation criteria (location, price, total capability, etc.).  Each criterion is then assigned a weight, with 1 being the lowest and 10 the highest.  Each vendor is then rated on a 1-10 scale against each criterion, with the resulting score = weight x rating.   Those scores are then summed for a final total…allowing comparison.
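
If it helps to see the arithmetic, here's a minimal sketch in Python; the criteria, weights, and vendor ratings below are illustrative placeholders, not a recommendation.

```python
# A minimal sketch of the evaluation matrix described above.
# Criteria, weights, and vendor ratings are illustrative placeholders.

weights = {"location": 8, "price": 9, "total capability": 7}  # 1 = lowest, 10 = highest

vendor_ratings = {
    "Vendor A": {"location": 6, "price": 8, "total capability": 7},
    "Vendor B": {"location": 9, "price": 5, "total capability": 8},
}

def total_score(ratings: dict) -> int:
    # score = weight x rating, summed across criteria
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

for vendor, ratings in sorted(vendor_ratings.items(), key=lambda kv: total_score(kv[1]), reverse=True):
    print(vendor, total_score(ratings))
```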


While I am a fan of an evaluation matrix, having performed this exercise dozens of times there are some issues with this approach worth discussing.

If a vendor simply doesn't meet the requirements, they get a "low" score, yet not meeting requirements should be sufficient to disqualify them.  If ALL vendors miss on requirements, the chances are good either the requirements or the RFP are "too tight."  It may also be indicative of a need for an in-house data center…and in practice we rarely see this.

What is more likely is the vendors will end up in groups.  We use these groups for our analysis and next steps.  The "top" group will be the ones to continue conversations with, and the bottom group is most likely worth passing on.

Here’s the dirty little secret.  The evaluation matrix gives an objective appearance to a subjective process.  You still need to go out and visit the (short) list of companies before making a final decision.  We use the evaluation matrix as a tool to narrow the list, not the ultimate decision tool.

Once you've met with vendors at their location and kicked the tires, a final evaluation matrix can be assembled.

When you visit the co-location providers, take time in your evaluation.  The ANSI/TIA standard 942 can be used as a guideline, or professional services can assist.

 Walk around the building and get a general sense of the neighborhood and plant housekeeping.  You should not be able to get close to the operating equipment at street level.

When you tour the data center, is security reasonable for your company? 

Spend time understanding the Mechanical, Electrical and Plumbing (MEP) parts of the data center…this is the "guts" and arguably the most important part of the facility.  Is the housekeeping of the MEP pristine?   I'm a big believer in taking my old car for service in the ugliest garage in town…but the MEP area shouldn't look like that!   If you see buckets under leaking valves, you can quickly conclude maintenance isn't what it should be. 

Ask the hard questions…ask to see evidence of equipment maintenance (we visited a co-location provider touting regular infrared electrical-panel scanning, and the last scan had been performed three years earlier).  Ask for a history of outages (you may be surprised by what you discover). 

Take time to understand the communications carriers already in the building.  Do not assume your carriers are already there.  Co-location providers have “meet me” areas, where carriers and customers are interconnected.  Cable management in this space is imperative. 

Ask to speak to references…and actually do so.  Find out if the provider is easy to work with, or a fan of up-charging for everything.  Let's face it, your equipment will break, and you may need a vendor to ship a part; you shouldn't have to pay for "processing a delivery!"

Spend time on Google, and find out if there are any articles about outages/issues in the facility.

I'm a big believer that, in the end, people buy from people…so don't hesitate to think about whether you can simply work with the provider and their people.  People will change from time to time, and the "move-in" period is a very important time.

Look at the contracts.  All will have protections for the provider for outages; it’s how they deal with them you need to consider.

Only when you have completed your due diligence can you make your final decision.

Monday, September 26, 2011

How to Select a Co-Location Provider using an RFP

In a recent post, the case was made for companies with modest data center needs to explore co-location.

We made our case and the executive team agreed.  Having an agreed upon direction, it was now time to do something.  And the client was stumped.

"Let's go visit them," the enthusiastic client declared. 

We recommended visiting a couple of facilities to get grounded in the various dimensions…and then quickly returning to a conference room to understand requirements.

“Our systems are important.  We need to put everything in a Tier 4 data center.”

The ANSI/TIA-942 standard defines the concept of Tiering as a way to distinguish design considerations.  The standard is quite lengthy and is summarized below.  You'll see "N" mentioned.  "N" stands for NEED…and is always debated, since everything else derives from it.

                                 Tier 1                Tier 2                Tier 3                       Tier 4
Delivery Paths                   One                   One                   One Active, One Alternate    Two Active
Capacity Components of Support   N                     N+1                   N+1                          N after any failure
Points of Failure                Many + Human Error    Many + Human Error    Some + Human Error           Fire, EPO & Human Error
Yearly Downtime                  28.8 Hours            22.0 Hours            1.6 Hours                    0.8 Hours
Site Availability                99.67%                99.75%                99.98%                       99.99%

While there are some commercially available Tier 4 co-location facilities, unless there is a specific business requirement, we find most co-location facilities at a Tier 3 level provide the kind of availability one would expect for a paid service.  The difference in projected yearly downtime between Tier 2 and Tier 3 is substantial.  (Don't assume the "yearly downtime" happens in one stretch once a year.  For example, what if the 22 hours for Tier 2 were spread over the year in weekly increments?  While the facility would be hitting its uptime goal, your systems' outage time (quiescing applications, databases, and systems, rebooting, then bringing systems, databases, and applications back up) could add up to hours each week!)
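
Curious where those downtime figures come from? The conversion from an availability percentage is simple arithmetic; the sketch below just multiplies the unavailable fraction by the hours in a year (small differences from the published figures come from rounding in the quoted percentages).

```python
# A quick sanity check of the downtime figures in the table above:
# yearly downtime = hours in a year x (1 - availability).

HOURS_PER_YEAR = 24 * 365  # 8,760

tiers = {"Tier 1": 0.9967, "Tier 2": 0.9975, "Tier 3": 0.9998, "Tier 4": 0.9999}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{tier}: {availability:.2%} availability ~= {downtime_hours:.1f} hours of downtime per year")
```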

With a good understanding of the capabilities available and your business needs, a Request for Proposal (RFP) can be put together.

Putting together an RFP can be a fun task, or a maddening one.  The key to the RFP process is using the development period to drive alignment among the company's stakeholders. 

At the most basic level, the RFP should cover:

  • Company Background (don’t assume people know your company)
  • Stated Requirements (high level)
  • Growth considerations (this is the hard part….and often drives initial buildout costs)
  • Key Questions (on how delivery is provided)
  • Response format (similar response formats make comparisons easier)
  • Any specific legal topics (such as a need for background checks)

There is a fine balance in writing RFPs.  Companies need to be specific enough that deliverable solutions can be proposed, yet broad enough to allow creativity.

For example, one area driving costs in RFPs is power/cooling.  The company needs to identify the heat load, in kilowatts (kW), of the equipment it envisions placing. 

If the company is running a dense environment, it is tempting to express this in watts per square foot (W/SF).

When W/SF is used, some providers may be automatically "disqualified."   How?  A data center designed for 50 W/SF can indeed support 100 W/SF…it just takes twice the space and appropriate chilled-air distribution! 
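
To see why, here's a minimal sketch of the space math, using hypothetical load and density figures.

```python
# A minimal sketch of why stating load in kW (not W/SF) keeps options open.
# The equipment load and design density figures below are hypothetical.

it_load_kw = 100.0          # total heat load of the planned equipment
design_density_wsf = 50.0   # the provider's design density, in watts per square foot

# Space needed is simply load divided by density.
required_sf = (it_load_kw * 1000) / design_density_wsf
print(f"A {design_density_wsf:g} W/SF facility can host {it_load_kw:g} kW in ~{required_sf:,.0f} SF")

# The same load expressed against a denser 100 W/SF assumption needs half the footprint,
# but it is still the same 100 kW of power and cooling either way.
print(f"At 100 W/SF the same load needs ~{(it_load_kw * 1000) / 100:,.0f} SF")
```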

So smart companies analyze their needs, and let the co-location providers respond.

Philosophically, we prefer shorter RFPs to longer ones.  Providers need to have time to put together responses.  If the RFP is "too big, too heavy," some providers may not respond at all, or will respond generically with their capabilities while not answering the underlying questions in the RFP.

Once the RFP hits the street, you need to put a cone of silence on vendor conversations at ALL levels.

Next week, we’ll talk about how to analyze the responses.

Have a favorite RFP story?  Share it with the community.  We’d suggest masking company names…

Monday, September 12, 2011

Data Centers are Refrigerators

At their simplest level, data centers are refrigerators.  There are walls, a ceiling and floor, multiple racks, a couple doors and anything hot inside gets cooled.

How you lay out the contents of a refrigerator is determined by the manufacturer.  Similarly, how you lay out the contents of a data center is often determined by the architect and/or Mechanical, Electrical and Plumbing (MEP) firm.

Changes to your refrigerator layout are rarely made, and when they do happen you measure success anecdotally (did the milk spoil quickly, did the soda freeze?).

We advise clients to be a bit more analytical when making changes in their air-cooled data centers.  Some organizations have IP-based temperature probes throughout the data center, providing a precise view of conditions.   Often, we see less sophisticated organizations making layout changes to extend the data center's life with little more than a "hope" that the changes are positive.

What’s a simple way to measure the impact of changes?

We advocate use of a simple temperature strip attached to the input (cold aisle) side of racks:


This will immediately give a visual indication of inlet temperature, in a simple unobtrusive manner.

Ready to make changes?

A simple Post-it can be used for recording temps before and after changes.  Record the starting point, and once the room stabilizes (a matter of an hour or so), record the ending point. 


We believe in KISS (Keep It Simple, Stupid).  Metrics are a must, and even a simple approach is preferred over no approach.  While we do believe a number of data center managers can use their body as a thermometer, a bit more science is generally preferred.

How hot should the data center be in practice?  64-81 °F (18-27 °C) ambient temperature, according to ANSI/TIA standard 942.  At the limits there can be issues (freezing in direct-expansion air conditioning under 68 °F, or increased device fan noise approaching 81 °F).  In a hot aisle/cold aisle orientation, the hot aisle can be significantly warmer (it's called the "hot aisle" for a reason) without issue for the equipment.
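
If you're logging those strip readings, the range check is simple to make explicit; the sketch below merely restates the 64-81 °F (18-27 °C) guidance quoted above, and the rack names and readings are invented.

```python
# A tiny helper reflecting the 18-27 °C (64-81 °F) ambient guidance cited above.
# Thresholds mirror the ranges quoted in the post; rack names and readings are made up.

RECOMMENDED_C = (18.0, 27.0)

def f_to_c(temp_f: float) -> float:
    return (temp_f - 32.0) * 5.0 / 9.0

def check_inlet(rack: str, inlet_f: float) -> None:
    inlet_c = f_to_c(inlet_f)
    low, high = RECOMMENDED_C
    status = "OK" if low <= inlet_c <= high else "OUT OF RANGE"
    print(f"{rack}: {inlet_f:.0f} F ({inlet_c:.1f} C) -> {status}")

check_inlet("Rack A1", 72)   # comfortably inside the range
check_inlet("Rack B4", 85)   # warmer than the recommended ceiling
```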

So why do some data center managers keep it cool?   Often fear.  If a computer room air conditioning unit fails or is taken offline for maintenance, the air flow (distribution) may be insufficient.  Paying attention to the design intention of the space is imperative when making changes and accommodating maintenance.

With appropriate air distribution, data center managers can raise the ambient temperature of the data center and realize lower cooling costs.

Monday, February 21, 2011

Inventory Management in Data Centers

We are seeing a large number of companies re-engaging in data center construction activities after the Great Recession of 2008-2010. After putting large expenditures on hold, companies are finding data center environmental constraints (power, cooling, and white space) are requiring infrastructure upgrades and/or relocations.

We are finding many companies would benefit from inventory management disciplines typically found in retail or manufacturing environments.



Many IT organizations have “lost control” of their inventories because of parochial approaches in departments managing the underlying information. In other words, they are suffering from an overwhelming amount of data!

In a retail or warehousing environment, every item (or Stock Keeping Unit, SKU) is tracked closely in a master catalog, with annual physical inventories and/or cycle counting used to keep the counts accurate. There is often one number used universally throughout the organization (sales, distribution, warehousing, manufacturing, design, sales administration, etc.). Each department can then use its own systems for understanding products. These approaches are well understood in the arguably more mature retail/warehousing world.

Information Technology departments often suffer from a hubris preventing a shared perspective. Each department uses its own view…often with overlap rather than uniformity. Each department manages "their data," often creating different indexing schemes that inhibit sharing for the good of the whole organization.

For example, an IT organization may use the following “keys” for storing their data:

Department     Key            Issue
Data Center    Server Name    Process breaks down if server upgrades reuse the same name
Networking     IP Address     Does not uniquely identify a machine
Server team    MAC Address    Not externally identifiable

Obviously this is a contrived example. In the real world, organizations use a combination of identifiers to uniquely track an environment. Unfortunately, these schemes often break down, are not maintained, and struggle to reflect a virtualized environment.

That said, how can we leverage this parochial view for a breakthrough in understanding?

What’s a company to do?

Many companies start down the path of an iron-clad asset management initiative. Often a czar of asset management is appointed, and new processes are introduced. Some companies even go so far as to place RFID tags on servers. As a place to start, Asset Management will make a marked improvement.

The real answer may be more subtle.

A configuration management database (CMDB) is a repository of information related to all the components of an environment. The CMDB is a fundamental component of the IT Infrastructure Library (ITIL) framework's Configuration Management process.

One could argue a CMDB is just a fancy way of doing asset management. The key difference is that CMDB repositories frequently involve federation: the inclusion of data in the CMDB from other sources, such as Asset Management, so the definitive source of the selected data retains accountability for it.

In the retail/warehousing environment, the individual areas are responsible for their data and have “figured out a way” to share (federate) the data so the organization has a single shared view.
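
To make "federation" concrete, here's a minimal sketch under invented field names: each group remains accountable for its own slice of data, and the CMDB assembles a shared view keyed on a common identifier (an asset tag in this example).

```python
# A minimal sketch of federation: each team keeps its own records and keys,
# and the CMDB joins them on one agreed identifier (here an asset tag).
# The field names and records are invented for illustration.

asset_mgmt = {  # authoritative for ownership and purchase data
    "A-1001": {"owner": "Finance", "purchased": "2009-04"},
}
networking = {  # authoritative for addressing
    "A-1001": {"ip": "10.20.30.40"},
}
data_center = {  # authoritative for physical location
    "A-1001": {"rack": "R12", "u_position": 18},
}

def federated_view(asset_tag: str) -> dict:
    # The CMDB does not copy-and-own the data; it assembles it from the
    # systems that remain accountable for their own slice.
    record = {"asset_tag": asset_tag}
    for source in (asset_mgmt, networking, data_center):
        record.update(source.get(asset_tag, {}))
    return record

print(federated_view("A-1001"))
```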

Our recommendation to companies beginning any data center process is to spend the time "up front" understanding their data and rationalizing it into a system that transcends the data center construction effort. Since a data center effort requires a solid inventory, enter the discovery effort with an eye toward all the data needed…not to derail the migration effort, but to accelerate it during the move and beyond.

As a side benefit? You will find servers that can be repurposed or decommissioned…better leveraging the strategic IT investment with savings in complexity, license and maintenance costs, repairs, etc.

Monday, August 9, 2010

Stretching the Data Center

Data centers are the unsung heroes of the IT world.

A number of companies pushed off addressing facilities limitations during 2008 & 2009 for budgetary reasons. Now, some facilities are reaching their design limits.


When data centers are designed, the basic questions are how large (square footage), how many watts per square foot (driving power and cooling), and what Tier (as defined by organizations like the Uptime Institute and ANSI/TIA-942; see the vastly simplified table in the RFP post above).


When these design limits are reached, large infrastructure expenditures or cloud-sourcing may be indicated. The resourceful data center manager needs to look at short and long term strategies and alternatives.

One alternative is raising the temperature of the data center. While counterintuitive, today’s equipment can operate at significantly higher temperatures. The ANSI/TIA standard calls for air intake temperature as high as 81 degrees F.

Hot aisle/cold aisle is a key way to maximize efficiency. The cold aisle is maintained at 81, and the hot aisle is, well, hot. Temperatures over 100 degrees F are acceptable.


While not recommended, we know of two companies where the server racks were "spun" 180 degrees with the IT equipment running. Professional millwrights were used; they expertly raised the racks, did the spin, and set them back down, all while a data center technician kept strain off the cables. Obviously an extreme approach!

Hot aisle/cold aisle containment is also an option. There are elaborate systems for containment, and simpler ones (such as using the area above the drop ceiling as a return-air plenum).


Raised (access) floor systems used for cooling should have all penetrations closed when not used for cooling, and all racks should have blanking panels in place.



http://upsitetechnologies.com/ and http://www.snaketray.com/snaketray_airflowsolutions.html are typical companies offering solutions in this space.

When spot cooling is needed, there are above-rack cooling options and fan options. Here's one we've seen effectively cool a 200 W/SF load in a data center designed for 50 W/SF:


From http://www.adaptivcool.com/

We recently were in a facility where the temperature was set to 60 degrees to provide “spot” cooling for an individual rack. This is a very expensive approach for spot cooling.

Another option used to address simple hot spots is increasing the speed of the air handlers. The newest air handlers are variable speed. On some older systems, a change in fan speed may be sufficient to address the airflow needs.

Other companies are taking an altogether different approach. By using rack-mounted blade servers and a virtualized environment, the overall power and space used in a data center can actually be reduced! Here's a case where a technology refresh may be a smarter option than a facilities investment.
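
The arithmetic behind that claim is straightforward; the sketch below uses entirely hypothetical server counts, consolidation ratios, and power draws to show the shape of the calculation, not results from any particular environment.

```python
# A back-of-the-envelope consolidation sketch; every number below is a
# hypothetical assumption, not a measurement from any particular environment.

physical_servers = 200
avg_draw_w = 350            # assumed average draw of an older 1U/2U server

virtualization_ratio = 15   # assumed VMs consolidated per blade
blade_draw_w = 450          # assumed average draw per blade

blades_needed = -(-physical_servers // virtualization_ratio)  # ceiling division
before_kw = physical_servers * avg_draw_w / 1000
after_kw = blades_needed * blade_draw_w / 1000

print(f"Before: {physical_servers} servers ~= {before_kw:.1f} kW")
print(f"After:  {blades_needed} blades  ~= {after_kw:.1f} kW "
      f"({1 - after_kw / before_kw:.0%} reduction)")
```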

Every company has unique challenges, and one short post cannot address every alternative. The smart data center manager takes the time to understand the root cause of their issues and, through a careful, thoughtful, measured approach, can make changes extending the effective lifespan of the data center. And always consult others, including MEP engineering firms or strategic IT consulting firms, for best practices in your specific environment.
 

Sunday, November 29, 2009

Provision a Data Center in 30 days

This is not some cleverly named cloud computing article. This was a real requirement.
(I will leave out the details of how this company got into the situation of needing a data center in 30 days…clients find themselves in such situations from time to time, whether due to growth, capacity, or outages.)

The key to performing data center miracles rests with the network. If the network is near the facility, you’re golden.

This company had two main wide area network vendors (household name companies), and two major ancillary metropolitan area network providers. We felt we would have no issue finding a co-location site within the geographic boundaries some of the technologies required.

In this particular case, we needed a modest "cage" in a larger facility. We came up with our short list fairly quickly and promptly went out and toured facilities. One facility percolated quickly to the top of our list: two of our network providers were there, the facility was within walking distance of another of the company's data centers, and we discovered the facilities manager was an alumnus of a past employer.

This started a massive parallel effort.

The staff was fully briefed on the decision and the business need, and helped plan the transition. Circuits, servers, storage arrays, etc. were all ordered immediately as the contract with the co-lo facility was worked out. Hardware vendors are used to performing miracles; they often install a large amount of equipment leading up to their quarter end.

While all vendors began responding, one communications vendor balked. They had a process, and as a part of that process had timeframes, and those timeframes were incontrovertible. We met with the vendor, and you could almost see the “A crisis on your part does not create a crisis for me” sign on their foreheads.

We went to the other communications vendors not in the building and made our case. One company, a scrappy start-up charged with selling communications along utility rights of way, acknowledged having services "in the street," but would have to get approvals from the City. (Full disclosure: this was Boston during the Big Dig. Every street was torn up on a tightly coordinated plan. The issue wouldn't be getting the approvals; it would be getting Big Dig approvals.)

One thing scrappy start-ups often have going for them is that they are nimble. This supplier had the street open within a week (at night, no less) and brought new facilities into the colo site. It was good for us, and over time they certainly got additional business.

Following the same processes and using vendors we had used before allowed us to accelerate the delivery. The electricians "knew" what we wanted (once we gave them requirements), the cable plant was put in by the usual suspects, the security department deployed full badge readers and security cameras, etc.

By following the same processes, our team was able to deploy on a predictable basis. Certainly some long nights were required, and this was a sacrifice made by the staff. Since they understood the business need and Information Technology imperative, they responded with aplomb.

And the facility was up and running on time.

Processes and engagement allowed the IT area to meet the business need. Processes run amok had one communications vendor wondering what happened.

By focusing on repeatable and timely process, this excellent team delivered!

Saturday, October 24, 2009

Data Center Disciplines

I have been teased my entire career about my nearly obsessive behavior around keeping data center rooms neat and tidy. While I’d love to blame my Mother for my neatness, the truth is keeping a data center clean is about one word: discipline.


Having a neat and tidy data center environment sends a reinforcing message to everyone entering about the gravity of the work performed by the systems in the area. This is important for staff, vendors, and clients.

The observational characteristics I look at when I walk in a data center are:


  • Life Safety – are the aisles generally clear? Are there Emergency Power Off switches and fire extinguishers by the main doors? Is there a fire suppression system in place? Is the lighting all working?

  • Cleanliness – Forty years ago data centers were kept spotless to prevent disk failures. A speck of dust might make a hard drive disk head fail. These days, disks are generally sealed, and can operate in pretty rough environments (consider the abuse of a laptop disk drive.)
    While disk drives are generally sealed, why should data centers be dirty? Look for dust, dirty floors, and filthy areas under raised flooring. One data center I went in had pallets of equipment stored in the space…was the data center for computing or warehousing?

  • Underfloor areas – are the underfloor areas, assuming use as an HVAC plenum, generally unobstructed? More than one data center I’ve been in had so much cable (much abandoned in place) under the floor the floor tiles wouldn’t lay flat. This impacts airflow, and makes maintenance a challenge.
    I also like to see if the floor tiles are all in place, and if some mechanism is used to prevent cold air escaping through any penetrations. 30% of the cost of running a data center is in the cooling, and making sure the cooling is getting where it needs to be is key. (While at the opposite end of the space, I like to see all ceiling tiles in place. Why cool the area above the ceiling?)

  • HVAC – are the HVAC units working properly? Go in enough data centers, and you’ll learn how to hear if a bearing is failing, or observe if the HVAC filters are not in place. As you walk the room, you can simply feel whether there are hot spots or cold spots. Many units have on board temperature and humidity gauges – are the units running in an acceptable range?

  • Power Distribution Units – are the PDUs filled to the brim, or is there space available? Are blanks inserted into removed breaker positions, or are there "open holes" to the power? When on-board metering is available, are the different phases running within a small tolerance of each other? If not, outages can occur when hot legs trip.

  • Hot Aisle/Cold Aisle – Years ago, all equipment in data centers was lined up like soldiers. This led to all the equipment in the front of the room being cool, and all the heat cascading to the rear of the room. Most servers today will operate at temperatures as high as 90 degrees before they shut themselves down or fry. With a hot aisle/cold aisle orientation, including blanking panels in empty rack positions, cooling is delivered most effectively. Some organizations have moved to in-rack cooling as a designed alternative.

  • Cable plant – the power and communications cable plants are always an interesting telltale sign of data center discipline. Cables should always be run with 90-degree turns (no transcontinental cable runs, no need for "cable stretching"). Different layers of cables under a raised floor are common (power near the floor, followed by copper communications, then fiber). (A pet peeve of mine is how much data center space is occupied by cables. Cables need to get to the equipment, but the cable plant can sit outside the cooled footprint of the data center. Taking up valuable data center space for patch panels seems wasteful. One data center devoted 25% of its raised-floor space to cable patch panels. All of it could have been in unconditioned space.)

  • Error lights – As you walk around the data center, look to see what error lights are illuminated. One can argue that because servers are monitored electronically, the utility of error lights is lessened. That said, error lights on servers, disk units, communications units, HVAC, Power Distribution Units and the like are just that: errors. The root cause of the error should be eliminated.

  • Leave Behinds – what's left in the data center is often an interesting archeological study. While most documentation is available online, manuals from systems long since retired are often found in the high-priced, air- and humidity-controlled data center environment. Tools from completed projects lying around are a sign thoughtfulness isn't in place for technicians (I'll bet their own tools are where they belong).

  • Security – data centers should be locked, and the doors should be kept closed. Access should be severely limited to individuals with Change or Incident tickets. This helps eliminate the honest mistakes.

While far from an inclusive list, this article is meant to help silence my lifelong critics of my data center obsessions. These are simple things anyone can do to form a point of view on data center disciplines. Obviously, follow-ons with reporting, staff discussions, etc., are appropriate.