Managed Service Nightmare - Lessons Learned, Part I


By any measure, Bob Andreini's network was a nightmare. Even though he had outsourced the management of the network, downtime was measured in days. His team regularly mediated finger-pointing between international telecom providers. Even the configuration of his infrastructure left a lot to be desired.

Andreini, the global director of IS and IT at Measurement Specialties Inc., a $200 million manufacturer of sensors with 12 locations around the world, knew something had to change.

His company had grown through acquisition, adding sites in Galway, Ireland; Versailles, France; Dortmund, Germany; and Bevaix, Switzerland, a village of 3,500 outside Neuchatel. But this growth also meant an increasing patchwork of network connections.

Reliability was a challenge: of an IT staff of 35, only ten were devoted to the infrastructure, and even they could give this key task only part-time attention.

Downtime Here Meant Downtime There
A hub-and-spoke infrastructure meant that all communications went through MSI's corporate headquarters in Hampton, Va., some by point-to-point connections, others through VPNs. "If we had trouble in Hampton, we had trouble across the whole network," says Andreini.
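To make that single point of failure concrete, here is a minimal sketch in Python (a toy model only; the site names come from the article, but the link list and logic are illustrative, not MSI's actual configuration). Because no spoke links directly to another spoke, all spoke-to-spoke traffic relays through the hub, so a hub outage cuts every pair of remote sites off from each other.

# A toy model of a hub-and-spoke WAN (site names from the article;
# the link list and logic are illustrative, not MSI's actual setup).
HUB = "Hampton"
SPOKES = {"Galway", "Versailles", "Dortmund", "Bevaix"}
LINKS = {(HUB, spoke) for spoke in SPOKES}  # no direct spoke-to-spoke links

def can_communicate(a, b, failed_sites):
    """True if sites a and b can reach each other given a set of failed sites."""
    if a in failed_sites or b in failed_sites:
        return False
    # Either a direct hub<->spoke link exists, or traffic relays through the hub.
    return (a, b) in LINKS or (b, a) in LINKS or HUB not in failed_sites

print(can_communicate("Galway", "Dortmund", set()))        # True: relayed via Hampton
print(can_communicate("Galway", "Dortmund", {"Hampton"}))  # False: hub outage severs the spokes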

Part of the problem, he says, stemmed from the managed service provider's own growing pains: it didn't always have the best relationships with the telecom carriers in the countries where MSI did business.

"We ended up with higher communications costs, because they couldn't get good resell rates." And because it didn't have good relationships, where there were problems, there was a lot of finger-pointing. "We would eventually have to have our IT people figure out the problem," sighs Andreini. Then the guilty party would admit it was their problem, sometimes as much as two days later.

Unmanaged Services
Some of Andreini's team even had to spend time learning Cisco router configuration because their provider, which was supposed to be handling maintenance, had outsourced it to yet another company.

Solving MSI's network management problems took multiple steps, including signing on with a new managed network services vendor, Virtela of Greenwood Village, Colo. Working with Virtela's team to create a new infrastructure, MSI took a close look at what kind of design would serve the global firm best.

In Part II: how MSI solved its managed service problems.