Blogs/ 03.07.2018

Three Horror Stories about Terrible Software Decisions

At Service-Flow, we’ve been working on integration projects for a long time! And from the dreamy ‘couldn’t have gone better’ projects to the absolutely chaotic nightmares, we have pretty much seen it all.

In this blog we take a look at some of the terrible things that can happen when integration projects don’t go to plan. Partly for the fun of it… to look back and laugh. But also, to help you learn from other people’s mistakes and hopefully avoid a similar fate. So where to start?

The story of the 365 day project… that’s still not finished

Once upon a time, there was a large-ish business that was upgrading and migrating a significant part of its IT infrastructure. Alongside this, it was completely ripping and replacing its CRM and ITSM toolsets, as they were also looking pretty old and tired. They didn’t quite have the resources in-house to manage it themselves, so they brought in an external consulting company to help design the new infrastructure and write a project plan.

A month into working with the consultant, there were some great architecture drawings and some very slick-looking slide decks showing how the project would play out. But one huge area of the whole thing had been missed.

According to the project plan, around 30 days of work had been allocated to integrating all the legacy, existing and new systems. Because “Hey, how hard can it be?!” It’s just configuring a few APIs and writing a few documents to demonstrate how it works, right?

WRONG! One whole year later, there was an army of new servers and backend services in place, a brand-new Salesforce implementation and a flashy new version of ServiceNow at the heart of the Service Desk.

However, nothing talks. No data moves from one system or service to another. People are picking up the phone to chase files they sent for upload to other systems weeks ago, and the consultant who designed the whole thing is nowhere to be seen. The company has been left with lots of new toys, but with no real way to play with them… or see any kind of business value from them.

The story of the integration project that made everyone hate their jobs

Welcome to a big international bank, one you have probably heard of. These guys are BIG, but there is a small team of system administrators based in London who once had the delightful task of updating their own ITSM toolset every week with ticket data from about three other managed service suppliers.

The problem was that the time delay in downloading and uploading the data, and then the added delay of having to normalise mismatched field names, meant that the data was never consistent and never up to date. It was a futile process. So the team were tasked with creating a more ‘real-time’ integration, which would allow a more seamless exchange of data between this set of systems.
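To give a flavour of the weekly chore, the field-name normalisation might have looked something like the sketch below. This is a minimal, hypothetical Python example; the supplier names, field names and in-house schema are all made up for illustration:

```python
# Hypothetical example: each supplier exports the same ticket data
# under different field names, so every import needs its own mapping.
FIELD_MAPS = {
    "supplier_a": {"ticketId": "ticket_id", "prio": "priority", "descr": "description"},
    "supplier_b": {"ID": "ticket_id", "Priority": "priority", "Summary": "description"},
    "supplier_c": {"ref": "ticket_id", "urgency": "priority", "details": "description"},
}

def normalise(supplier: str, record: dict) -> dict:
    """Translate one supplier's ticket record into the in-house schema."""
    mapping = FIELD_MAPS[supplier]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

# One raw record from supplier B...
raw = {"ID": "INC-4711", "Priority": "2", "Summary": "VPN outage"}
print(normalise("supplier_b", raw))
# {'ticket_id': 'INC-4711', 'priority': '2', 'description': 'VPN outage'}
```

Harmless enough for three suppliers, perhaps. But every new supplier, renamed field or changed export format means another mapping to write and maintain by hand, which is exactly where the trouble starts.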

The team knew their way around building a script, and one of them had learned to code in Python too, so it actually seemed like a pretty straightforward task. Why hadn’t they thought of it before? However, four months into playing around with scripts, databases and thousands of lines of code, they eventually got to a place where data would pass around the integration a few times a day and then just stop.

By this time, the IT leadership and the suppliers had built up such an expectation that this would be working seamlessly that the team would rush around every day, trying to repair the bad lines of code and find the data that had been lost during the outages. The truth was that hand-coding such a complex integration was a terrible solution. It was far too fragile, and the systems had far too many differences and variants, so there was rarely a standard exchange between any two systems.

Managing, maintaining and repairing the integration slowly but surely became a full-time and very stressful job, and then, one by one, each of the system administrators left the bank. This left the bank in a tricky position: a crucial integration in place and no one in the company with the knowledge to keep it running. Six months later, the bank paid over £100,000 to replace its ITSM software.

The story of the IT manager who didn’t know how APIs work

Our final short story ends in a truly beautiful mess. You’re going to love it!

Picture this: it’s 2016, the golden dawn of SIAM (Service Integration and Management), and a retail company mostly based in Europe has launched a huge digital transformation project in order to sell more products online. A key part of this project was a wide-scale SIAM implementation.

If you are not familiar with SIAM, in the case of this company it meant hiring a third party to manage all top-level service requests, which were then farmed out to a selection of about 18 different suppliers and internal teams. The third party was then responsible for overseeing the overall fulfilment of each ticket, including the management of SLAs and KPIs across all the different suppliers.

However, this is not an easy thing to set up, and it requires a number of integrations between service desk tools, as trying to manually push ticket information around an ecosystem of that size is almost impossible. With the right technology in place, it can be achieved. However, the IT manager leading this project underestimated the complexity and instructed his team to simply gather the details of the APIs and set up point-to-point connections between their own ITSM tool and each and every one of their suppliers’ tools… already sounds messy, right? It gets worse!

The problem with APIs is that they are simply ‘sockets’ that other applications can plug into. There is no logic that understands the context of the data being sent back and forth. Each API connection is unique too, meaning new code has to be hand-written and maintained for every endpoint.

The IT manager at this retail company didn’t really know any of this; he had simply assumed that APIs could enable a ‘plug and play’ network for all the tools in the SIAM environment. But in reality, all he had created in the end was a ridiculous set of API calls where ticket data didn’t make any sense to the systems receiving it, and close to 50 new API endpoints that no one really understood or knew how to manage!
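The endpoint count in this story is easy to reproduce with some back-of-the-envelope arithmetic. The exact operations per connection are an assumption here (the story only tells us there were roughly 18 suppliers and close to 50 endpoints), but a typical ticket integration needs at least a create, an update and a resolve call per tool:

```python
# Back-of-the-envelope sketch: how point-to-point integrations multiply.
# The three operations per connection are an assumed, illustrative figure.
suppliers = 18
operations_per_connection = 3  # e.g. create, update, resolve a ticket
endpoints = suppliers * operations_per_connection
print(endpoints)  # 54 -- in the same ballpark as the "close to 50" in the story
```

And because each of those endpoints speaks its own dialect, that arithmetic is really counting hand-written translation code to be maintained, not just URLs.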

The SIAM project was a complete failure. Not because the service providers couldn’t live up to their SLAs and KPIs, or because the staff at the various suppliers were unable to work together and collaborate on solving issues, but because the data and communications logged in support tickets and service requests were always wrong, inconsistent and never up to date.

Don’t let your story become this scary

Three horror stories, all with nightmare software integrations at the heart of their characters’ demise! Unfortunately, as dramatised as some of these may seem, they are all based on situations that many IT organisations find themselves facing every day.

Whether it be through the use of poor technology or a misunderstanding of the work required to implement effective integrations, modern businesses are relying more and more on broad ecosystems of services. Getting these projects right first time, and avoiding the horror story, has never been more important.

At Service-Flow, we are committed to ensuring businesses can seamlessly integrate ITSM tools and other applications, both across their own set of services and those of their suppliers. We provide a simple SaaS-based global integration hub, which enables business teams and suppliers to keep their existing tools, share real-time service data and collaborate seamlessly over IT services.

If you would like to learn more, get in touch with our expert team today at flow@service-flow.com.