
Entries in Data Center Design (6)

Monday
May 21, 2012

Everyone Wants to be the Cool Kid

In this guest post, a good friend toiling in a very private company provides a thought-provoking commentary on current data center design and in-vogue design approaches.  My friend has asked for anonymity, although I can vouch for their professionalism.

When I was a kid, we had very little money.  Designer jeans, a shiny new bike, the latest fad toy: you name it, I did not have it.  So, I know all too well that feeling of wanting desperately to be cool.

I think that is what is going on in the data center space these days.  The cool kids are having all the fun.  Yahoo's chicken coop is cool, as is Google's barge concept to harness energy from ocean waves.  (Data centers are usually in particularly boring places, so what data center operator wouldn't want the opportunity to hang out at sea?  Sign me up.)

You know what else the cool kids are doing?  They are jamming as much compute into a rack as the laws of physics will allow.  So, we should too, right?  Wrong.

Before you get all excited and tell me all about that super-cool-rack of super-cool-cloud-ready-kit you just installed, let me clarify.  I am absolutely not saying any given rack shouldn't be filled to the brim.  In fact, you should put a couple of these together and make sure to walk your prospective clients on by so that they think you are part of the "in crowd."  What I am saying, however, is that most of us are not Amazon, Google, or Yahoo.

Most of us don't have rack-after-rack, row-after-row, and hall-after-hall of exactly the same stuff.  And our stuff probably isn't all running the same app, so we actually do care when some of it goes down.  What I am questioning is whether most companies should fill a large data center from wall to wall with 20+kW footprints.

Let’s play pretend, shall we? 

Let's pretend we are a big-enough company that building our own data center (rather than using a co-lo) makes sense.  This is a pretty big investment, and as the project manager for such an undertaking, I have a lot of things to think about.  Data centers are a long-term investment, which means we are making decisions that are going to be with us for a very long time.  For argument's sake, let's say 20 years.  This means at the 15-year mark, we are still going to try to eke out one more technology lifecycle.

15 years is a really long time in technology.

A bit less than 15 years ago, I was walking in the woods with my dog, sporting a utility belt even Batman would envy.  Clipped to my belt were my BlackBerry, my cell phone, and my two-way pager.  And, in hopes a great shot was to be had, I was also carrying my Nikon FM2 35mm camera.  Today, I carry none of these devices, but my iPhone is never more than three feet from my body and usually in a pocket.

As I walked through the woods that fine afternoon, all three items on my belt started to 'go off' at the same time, just as I was trying to take a picture of my dog playing in a field.  It was not lost on me that this was quite ridiculous and that "someday" greater minds than mine would certainly put an end to this madness.

I am deeply grateful for all of the people up-to-and-including Steve Jobs who brought us the iPhone.  Now my operations associates don’t need to seek chiropractic help each time they are on-call.

Let’s get back to the problem of data centers.

There are three fundamental decisions we need to make:

  1. how resilient our data center needs to be,
  2. how many MW it needs to be, and
  3. how dense we are going to make the whitespace (the space where all of the servers and the other technology get installed).

We are playing pretend that we are an enterprise-class operation, so for our discussion we are going to assume we need a "concurrently maintainable" data center.  This might put us somewhere between a Tier 3 and a Tier 4 based on the ANSI/TIA 942 or Uptime Institute's established checklists.

We will further assume we are going to need at least 1 MW of power to start, and we expect to grow into somewhere near 4 MW over time.

Only one more question to go: power density.

Now, if I were a consultant, maybe building something ultra-dense is part of my objective because I want to be able to talk about it in my portfolio.  However, we will work under the premise I really do have the best interests of the firm at heart.

There is a metric in the Data Center world which is both infinitely valuable and amazingly useless at the same time.  It is called PUE and it stands for Power Usage Effectiveness.

Quite simply, it is an indication of how much power is being “wasted” to operate the data center beyond what the IT already consumes.

In a perfect world, the PUE would be equal to 1.  There would be no waste.  Actually, some would suggest the goal is to have a PUE of less than one.  Huh?

To accomplish this, you would have to capture some of the IT energy and use it to avoid spending energy somewhere else.  For example, you would take the hot air the servers produce and use it to heat nearby homes and businesses thereby saving energy on heating fuel.
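For concreteness, here is a minimal sketch of the calculation itself; the kW figures are hypothetical, chosen only to illustrate the ratio:

    # PUE = total facility power / IT equipment power.
    # The kW figures below are hypothetical, for illustration only.
    def pue(total_facility_kw, it_load_kw):
        """How much total power the facility draws for every kW the IT gear consumes."""
        return total_facility_kw / it_load_kw

    print(pue(1200, 1000))  # 1.2  (a well-run, well-filled site)
    print(pue(900, 400))    # 2.25 (an overbuilt, half-empty site)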

On the flip side, PUE can be 'wicked high.'  Imagine cooling a mansion to 65 degrees but never leaving the room above the garage.  This is what happens when a data center is built too big too fast and there isn't enough technology to fill the space.  In fact, there are stories of companies that had to install heaters in the data center just so the air conditioning equipment wouldn't take in air that was "too cold."

Google and Yahoo tout PUEs in the 1.1-1.3 range.  This is the current ‘gold standard’.  Most reports put the average corporate PUE above 2.  But this is where we start making fruit salad – comparing apples, oranges, and star fruit.

How companies calculate their PUE is always a little unclear, and it is very unclear whether that average is a weighted average.  If not, the average of above 2 is quite suspect, since the "average" data center is little more than an electrified closet.  (Ed. Note: or refrigerator.)

Let’s ignore PUE for now and assume two things about our company: 

  1. our data centers are pretty full (remember PUE gets really bad if there is a lot of wasted capacity) and
  2. we have been doing an OK but not stellar job of managing our existing data centers. 

If we were even measuring it (I bet most companies are not), this might place our current PUE in the range of 1.6-1.8.  That is a difference of about 0.5 from the gold standard.

By now, you are probably asking yourself what PUE and our question of density have to do with each other.  As you meet with various design firms, they will try to sell you on their design based on the assumed resulting PUE.  And, proponents of high-density data center space will tell you this will improve your PUE.

My argument is simply that there are a lot of other ways to go about improving our PUE without resorting to turning our data center into a sardine can.  Not overbuilding capacity, hot/cold aisle containment, simply controlling which floor tiles are open, or running the equipment just a little warmer are a few that immediately come to mind.  Remember, we think we have a 0.5 PUE opportunity.

All of these things can be done with low risk, and I contend that banking on high-density demand in a design that needs to last 20 years is folly.  Yes, there is some stuff on the market which runs hot.  But since we already decided we are a complex enterprise that doesn't only have low-resiliency widgets, we will also have a bunch of stuff that runs much cooler.

For our discussion, let's assume our existing data centers average 4kW per footprint.  This puts us at about 100 watts per square foot: at the high end of most data center space, but not bleeding edge by any means.  I have seen some footprints pushing into double digits of kW, but these are offset by lower-density areas of the data center (patch fields, network gear, open floor space, etc.).

Another component that isn't breaking the power bank is storage.  Storage arrays are somewhere south of 10kW per rack and seem pretty steady.  A smart person once told me we get about twice as much disk space in a rack every two years for about the same amount of power, and that assumes we keep buying spinning disk for the next 20 years.  Some of the latest PC/laptop systems are loaded with flash drives, and these have even started making their way into some data centers.  In a recent Computerworld article, a discussion of a particular flash storage array put the entire rack at 1.4kW.  Granted, these are pricey right now, but such is the joy of Moore's Law.  Again, 20 years is a long time.

We should also consider all of the talk about ‘green IT’ in the compute space.  Vendors “get it”.  There is much discussion today about how much compute power per watt a system provides.  This is great.  If we measure it, we can understand it.  If we understand it, we will manage it.

Intel has talked about chips that will deliver a 20X power savings.  Remember that super cool (hot) rack with 22kW of stuff jammed in?  What if it only took 1.1kW?  And what if that happens 5 years into our 20-year data center life?

The sales pitch may also try to sell you on cheaper construction due to the smaller whitespace footprint, but this is just silly unless you are in downtown Manhattan.  I read something once that put the price of whitespace at $80/sf.  This would mean that building out a MW at 100 watts/sf would cost $800K versus $288K for the sardine can.

This is a bit more than $0.5MM per MW.  To put this in perspective, I would put the total price tag at $35MM/MW including the building, the MEP, racks, cabling, core network, etc.  So we are talking about 1.4% of the cost.  And, since we are going to keep this investment for 20 years, we are only talking about ~$25K per year in "excess" depreciated build cost per MW.

And, let's pretend we buy into the assumption that this is going to cost us some opportunity on our PUE.  This is important because paying the electric company is something that happens every year.  If we put our data center someplace reasonable, a 0.1 PUE difference might be worth around another $40K.  This assumes $0.06 per kWh power, which honestly we should be able to beat.

(1,000 kW DC) * (80% lifetime average utilization) * (0.1 PUE impact) * (365 * 24 hours) * ($0.06 per kWh)

= $42,048
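
To keep ourselves honest, here is a minimal sketch reproducing the arithmetic in this post.  The $80/sf, 20-year life, 80% utilization, and $0.06 per kWh figures are the assumptions stated above; the ~278 watts/sf "sardine can" density is simply the value implied by the $288K figure.

    # Sketch of the build-cost and energy-cost deltas discussed above.
    WHITESPACE_COST_PER_SF = 80   # $/sf of whitespace (assumed above)
    LIFE_YEARS = 20

    def build_cost(mw, watts_per_sf):
        """Whitespace construction cost for a given IT capacity and density."""
        return (mw * 1_000_000 / watts_per_sf) * WHITESPACE_COST_PER_SF

    low_density = build_cost(1, 100)    # ~$800K for 10,000 sf
    sardine_can = build_cost(1, 278)    # ~$288K for ~3,600 sf
    extra_build_per_year = (low_density - sardine_can) / LIFE_YEARS   # ~$26K/yr

    # Annual cost of a 0.1 PUE penalty on a 1 MW site at 80% average utilization
    energy_penalty = 1000 * 0.80 * 0.1 * (365 * 24) * 0.06   # $42,048/yr

    print(round(extra_build_per_year + energy_penalty))   # ~$68K/yr (roughly the $65K per year discussed below)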

So, we are out of pocket as much as $65K per year per MW.  While I understand this is real money, I still say that even if that PUE difference is real, it seems like a small amount to pay for future-proofing our 20-year investment.

Even though they supposedly skipped painting the fuselage to save weight rather than to save money, maybe we could take a lesson from the space shuttle and cut back somewhere else.  Somewhere that doesn't lead us to treating our technology like salty fish.

Just saying.

From Google’s Data Barge Patent

Monday
May 14, 2012

Data Center Patching

We’ve all heard the expression “the network is the computer.”  Many people make a handsome living making sure data center switches and routers from vendors like Cisco, Juniper or others hum along nicely.

And while we’re hearing about the impacts of wireless, wireless data centers seem to be a long way off…leaving data centers filled with miles of wire…cables…and patch cables.

Non-technical managers may simply say “cable is cable” and not fully appreciate the value of a cable plant thoughtfully designed and implemented.

Every cable type (copper or fiber) has designed speed and maximum length characteristics.  It is amazing to me how many times in a large data center a "flaky cable" ends up being a cable run longer than the designed maximum.  In my mind, it should be called a "flaky installation."

A well-executed cabling job can qualify as a work of art.

We use some basic guidelines around patching we think make sense.  These are often applicable in high end data centers, with suitable modifications for smaller shops.

A word on "making cables."  Small shops seem to love to make their own cables, yet they often don't really have the expertise to field-terminate and test a cable properly.  When the true costs of making a cable are included (it is not "free"), the savings from factory-made cords and reduced troubleshooting become clear.

There are numerous documents available qualifying as "prior art" on cable plants.  Here are some high-level guidelines we used on a recent implementation:

General  

  • Plastic zip ties are not permitted for cable management for either copper or fiber patch cords. Hook and loop fasteners (aka, Velcro®) shall be used to dress bundles of cables and to provide strain relief.
  • All patch cords must be sized appropriately for the application, with only a small service loop at each end to facilitate tracing. Large loops of excess cable are not permitted.
  • All cables must be neatly routed and dressed.
  • Only patch cords from reputable, nationally known, industry recognized firms such as Belden, Siemon, Ortronics, TYCO/AMP, Corning, etc. are permitted. “No brand” generic cords are not permitted.

Copper Network Patch Cords

  • All copper patch cords must be “factory” manufactured, terminated, and tested in an appropriate facility. Field-terminated cords are not permitted.
  • All copper patch cords shall be ANSI/TIA Category 6A (Cat 6A) tested and certified.
  • All Cat 6A patch cords shall be F/UTP or STP construction.

Fiber Patch Cords

  • All fiber patch cords must be “factory” manufactured, terminated, and tested in an appropriate facility.
  • Field-terminated cords are not permitted.
  • Multimode fiber patch cords shall be OM3, laser-optimized multimode fiber (LOMMF), supporting 10Gb Ethernet to 300m.
  • Single-mode fiber patch cords shall be OS1.
  • Connector types shall be coordinated with the Client appropriate to the application.

Other Patch Cords

  • Patch cords for DS1 (T-1) circuits shall be Cat 6A.
  • Patch cords for DS0 circuits may be made on site to provide for specific pinouts or connector types. Cords should be tested with a pair tester.
  • Patch cords for DS3 circuits may be made on site with the appropriate coaxial cable and connectors. Cords should be tested for continuity with an ohmmeter or coaxial cable tester if available.

Data Center Cable Naming Standards

  • Two labels at each end of each cable (four labels per cable)
  • The first label is RX-X, where X-X is the rack number and the run number
  • The second label is P.X.X, where X.X identifies the switch port it is plugged into

Data Center Cable Color Standards

  • End user connections - Blue
  • LAN server connections - Orange
  • Management (iLO and KVM) - Green
  • DMZ - Pink
  • Phones - White
  • Internal uplinks - Yellow
  • External uplinks - Grey or Black
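
As an illustration only, here is a minimal sketch of how these naming and color conventions might be encoded; the rack, run, and switch values are made up, and reading the second label as "P.&lt;switch&gt;.&lt;port&gt;" is my assumption:

    # Hypothetical helper applying the labeling and color standards above.
    CABLE_COLORS = {
        "end user": "Blue",
        "lan server": "Orange",
        "management": "Green",        # iLO and KVM
        "dmz": "Pink",
        "phone": "White",
        "internal uplink": "Yellow",
        "external uplink": "Grey or Black",
    }

    def cable_labels(rack, run, switch, port):
        """Return the two labels applied at each end of a cable:
        R<rack>-<run> and P.<switch>.<port> (switch/port reading assumed)."""
        return (f"R{rack}-{run}", f"P.{switch}.{port}")

    print(cable_labels(rack=12, run=4, switch="A1", port=24), CABLE_COLORS["lan server"])
    # -> ('R12-4', 'P.A1.24') Orange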

What standards/practices do you find valuable?

 

Monday
September 12, 2011

Data Centers are Refrigerators

At their simplest level, data centers are refrigerators.  There are walls, a ceiling and a floor, multiple racks, a couple of doors, and anything hot inside gets cooled.

How you lay out the contents of a refrigerator is determined by the manufacturer.  Similarly, how you lay out the contents of a data center is often determined by the architect and/or Mechanical, Electrical and Plumbing (MEP) firm.

Changes to your refrigerator layout are rarely made, and when they do happen you measure success anecdotally (did the milk spoil quickly, did the soda freeze?).

We advise clients to be a bit more analytical when making changes in their air-cooled data centers.  Some organizations have IP-based temperature probes throughout the data center, providing a precise view of conditions.  Often, though, we see less sophisticated organizations making layout changes to extend the data center's life without much more than a "hope" that the changes are positive.

What’s a simple way to measure the impact of changes?

We advocate use of a simple temperature strip attached to the input (cold aisle) side of racks:

This will immediately give a visual indication of inlet temperature, in a simple unobtrusive manner.

Ready to make changes?

A simple Post-it can be used to record temperatures before and after changes.  Record the starting point, and once the room stabilizes (within about an hour), record the ending point.

 

We believe in KISS (Keep It Simple, Stupid).  Metrics are a must, and even a simple approach is preferred over no approach.  While we do believe a number of data center managers can use their bodies as thermometers, a bit more science is generally preferred.

How hot should the data center be in practice?  64-81 °F (18-27 °C) ambient temperature, according to ANSI/TIA standard 942.  At the limits there can be issues (freezing in direct-expansion air conditioning under 68 °F, or increased device fan noise approaching 81 °F).  In a hot aisle/cold aisle orientation, the hot aisle can be significantly warmer (it's called the "hot aisle" for a reason) without issue for the equipment.
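
For those who want a little more than a Post-it, here is a minimal sketch that flags any rack whose inlet reading falls outside that 18-27 °C range; the rack names and temperatures are hypothetical:

    # Flag inlet (cold aisle) readings outside the 18-27 C ambient range cited above.
    RECOMMENDED_C = (18.0, 27.0)

    def out_of_range(readings_c):
        """Return only the racks whose inlet temperature is outside the range."""
        low, high = RECOMMENDED_C
        return {rack: t for rack, t in readings_c.items() if not (low <= t <= high)}

    before = {"R01": 16.5, "R02": 21.0, "R03": 28.3}   # hypothetical readings before a change
    after  = {"R01": 19.0, "R02": 22.5, "R03": 26.1}   # readings an hour after the change

    print("before:", out_of_range(before))   # {'R01': 16.5, 'R03': 28.3}
    print("after:", out_of_range(after))     # {}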

So why do some data center managers keep it cool?  Often, fear.  If a computer room air conditioning unit fails or is taken offline for maintenance, the airflow (distribution) may be insufficient.  Paying attention to the design intent of the space is imperative when making changes and accommodating maintenance.

With appropriate air distribution, data center managers can raise the ambient temperature of the data center and realize lower cooling costs.

Monday
May 9, 2011

An Elevator in the Data Center

As a trusted advisor to CIOs and their staff, we often hear the dirty little secrets. We call it the “close the door moment.” We know when a client lowers their voice and says, “Would you please close the door?” we are about to hear what’s keeping the client awake.

We recently had a client reach out with a unique problem. “Facilities needs to add an elevator to the building, and it must go through the data center. Can you help?”


What the client was asking was how to mitigate risk in the buildout. This is a perfect example of where external services are of benefit.

We are in data centers nearly every day. From 500 square feet to 50,000 square feet and beyond, this is one of our core areas. We assist with the processes, oversight, management structures, systems migrations, etc.

Obviously this client had architectural, structural, and construction resources, but none were familiar with risk mitigation during construction and on into operation.

Clearly adding an elevator shaft to an operating data center presents challenges in construction dust/debris, vibration, EMI of the running elevator, security, cooling, power, network, etc. By bringing to bear our perspectives garnered over many data center expansions, we delivered a risk mitigation plan allowing construction and operation to continue, while also adding some facility improvements to create an overall better data center for this client.

From the client perspective, construction in and around the data center was a very scary thought. Because we help numerous clients, we had the skills and knowledge to help the client on a very efficient basis.

The CIO knew he needed help, and did not hesitate to ask for it early in the construction process.

Monday
August 9, 2010

Stretching the Data Center

Data centers are the unsung heroes of the IT world.

A number of companies pushed off addressing facilities limitations during 2008 & 2009 for budgetary reasons. Now, some facilities are reaching their design limits.


When data centers are designed, the basic questions are how large (square footage), how many watts per square foot (driving power and cooling), and what Tier (defined by organizations like the Uptime Institute and ANSI/TIA 942); see the vastly simplified table:


When these design limits are reached, large infrastructure expenditures or cloud-sourcing may be indicated. The resourceful data center manager needs to look at short- and long-term strategies and alternatives.
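
The arithmetic behind "reaching design limits" is straightforward; here is a minimal sketch, with made-up figures, of how far along a room is against its design density:

    # Design capacity = square footage x design watts per square foot.
    def design_capacity_kw(square_feet, watts_per_sf):
        """Total IT design capacity implied by the whitespace size and density."""
        return square_feet * watts_per_sf / 1000

    def utilization(current_it_kw, square_feet, watts_per_sf):
        """Fraction of the design limit currently in use."""
        return current_it_kw / design_capacity_kw(square_feet, watts_per_sf)

    # Hypothetical example: a 5,000 sf room designed at 100 W/sf carrying 430 kW of IT load
    print(design_capacity_kw(5000, 100))          # 500.0 kW
    print(round(utilization(430, 5000, 100), 2))  # 0.86 -> nearing the design limit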

One alternative is raising the temperature of the data center. While counterintuitive, today’s equipment can operate at significantly higher temperatures. The ANSI/TIA standard calls for air intake temperature as high as 81 degrees F.

Hot aisle/cold aisle is a key way to maximize efficiency. The cold aisle is maintained at 81, and the hot aisle is, well, hot. Temperatures over 100 degrees F are acceptable.


While not recommended, we know of two companies where the server racks were "spun" 180 degrees with the IT equipment running. Professional millwrights were used; they expertly raised the racks, did the spin, and set them back down, all while a data center technician kept strain off the cables. Obviously an extreme approach!

Hot aisle/cold aisle containment is also an option. There are elaborate systems for containment, and simpler ones (such as using the area above the drop ceiling as a return air plenum).


Raised (access) floor systems used for cooling should have all penetrations closed when not used for cooling, and all racks should have blanking panels in place.



http://upsitetechnologies.com/ and http://www.snaketray.com/snaketray_airflowsolutions.html are typical companies offering solutions in this space.

When spot cooling is needed, there are above-rack cooling options and fan options. Here's one we've seen effectively cool a 200 watts/sf load in a data center designed for 50 watts/sf:


From http://www.adaptivcool.com/

We recently were in a facility where the temperature was set to 60 degrees to provide “spot” cooling for an individual rack. This is a very expensive approach for spot cooling.

Another option used to address simple hot spots is increasing the speed of the air handlers. The newest air handlers are variable speed. On some older systems, a change in fan speed may be sufficient to address the airflow needs.

Other companies are taking an altogether different approach. By using rack-mounted blade servers and a virtualized environment, the overall power and space used in a data center can actually be reduced! Here's a case where a technology refresh may be a smarter option than a facilities investment.

Every company has unique challenges, and one short post cannot address every alternative. The smart data center manager takes the time to understand the root cause of their issues and, through a careful, thoughtful, measured approach, can make changes extending the effective lifespan of the data center. And always consult others, including MEP engineering firms or strategic IT consulting firms, for best practices in your specific environment.
 

Sunday
November 29, 2009

Provision a Data Center in 30 days

This is not some cleverly named cloud computing article. This was a real requirement.
(I will leave out the details of how this company got into the situation of needing a data center in 30 days. Clients find themselves in situations from time to time, whether due to growth, capacity, or outages.)

The key to performing data center miracles rests with the network. If the network is near the facility, you’re golden.

This company had two main wide area network vendors (household name companies), and two major ancillary metropolitan area network providers. We felt we would have no issue finding a co-location site within the geographic boundaries some of the technologies required.

In this particular case, we needed a modest "cage" in a larger facility. We came up with our short list fairly quickly and promptly went out and toured facilities. One facility percolated quickly to the top of our list: two of our network providers were there, the facility was within walking distance of another of the company's data centers, and we discovered the facilities manager was an alumnus of a past employer.

This started a massive parallel effort.

The staff was fully brought in on the decision and the business need, and helped plan the transition. Circuits, servers, storage arrays, etc. were all ordered immediately as the contract with the co-lo facility was worked out. Hardware vendors are used to performing miracles; they often install a large amount of equipment in the run-up to their quarter end.

While the other vendors began responding, one communications vendor balked. They had a process, that process had timeframes, and those timeframes were incontrovertible. We met with the vendor, and you could almost see the "A crisis on your part does not create a crisis for me" sign on their foreheads.

We went to the other communications vendors not in the building and made our case. One company, a scrappy start-up charged with selling communications along utility rights of way, acknowledged having services "in the street," but would have to get approvals from the City. (Full disclosure: this was Boston during the Big Dig. Every street was torn up on a tightly coordinated plan. The issue wouldn't be getting the approvals; it would be getting Big Dig approvals.)

One thing scrappy start-ups often have going for them is that they are nimble. This supplier had the street open within a week (at night, no less) and brought new facilities into the co-lo site. It was good for us, and over time they certainly got additional business.

Following the same processes and using vendors we had used before allowed us to accelerate the delivery. The electricians "knew" what we wanted (once we gave them requirements), the cable plant was put in by the usual suspects, the security department deployed full badge readers and security cameras, etc.

By following the same processes, our team was able to deploy on a predictable basis. Certainly some long nights were required, and this was a sacrifice made by the staff. Since they understood the business need and Information Technology imperative, they responded with aplomb.

And the facility was up and running on time.

Processes and engagement allowed the IT area to meet the business need. Processes run amok had one communications vendor wondering what happened.

By focusing on repeatable and timely process, this excellent team delivered!