WeeklyWorker

21.11.2013

IT house of cards

There is a disaster waiting to happen in information technology, warns Amir Parviz Pouyan

In the aftermath of the US government shutdown, another story is making the headlines, drawing criticism from Democrats and Republicans alike: the technical problems of HealthCare.gov, the web portal allowing uninsured US citizens to register for subsidised private healthcare in 36 states. These problems were evident from the day the website was launched, with one woman trying to get help being told “please be patient” an infuriating 40 times.1

HealthCare.gov is a crucial part of Obamacare, which is supposed to bring affordable healthcare to 32 million uninsured Americans, and the administration is desperate to iron out the technical difficulties. Consulting firm Millward Brown Digital reports that “a mere one percent of the 3.7 million people who tried to register on the federal exchange in the first week actually managed to enrol”.2 The US administration will throw good money after bad to solve the website’s issues. However, it could be that, as is the case with many other mass-access sites, these efforts will have little result. Unless we believe in conspiracy theories, on the face of it the world’s most advanced capitalist country appears incapable of launching and managing a website that is supposed to guide its citizens to one of the administration’s most heralded policies.

The reality is simpler. Like most major corporations, the US government has spent billions of dollars on this particular information technology development - not to mention billions invested in other IT projects - yet it remains a victim of decades of appalling computing infrastructure. Forget about the housing bubble, ignore the fact that financial markets cannot continue being profitable if they just print money: the disastrous state of corporate and government IT contracted out to the private sector, demonstrated so blatantly in the failure of HealthCare.gov - as well as in similar IT scandals in the UK, such as Capita’s involvement with the NHS a few years ago - puts them both in the shade.

Thousands of badly planned, poorly managed major IT projects, covering every sector from banking to insurance and pensions, from the travel industry to healthcare, are the Achilles heel of modern capitalism. We have all been in banks where the computers are so slow or ‘down’ that the cashier has to produce hand-written receipts. We have all been at airports where ‘system failure’ has delayed check-in and flight departures. We have all read about crucial patient data being mismanaged or lost in a hospital. However, the real state of corporate IT is far worse.

In this article I will look at what is wrong with modern IT in both the private and the public sector, and at why big business’s reliance on multiple suppliers - at times hundreds of them - selling expensive software and even more expensive, often inefficient and unreliable ‘support’ is creating a spiralling crisis.

The multiple-supplier scenario produces a situation where, as soon as anything goes wrong, as in the case of HealthCare.gov, the many ‘stakeholders’ blame each other for the problems. As ill-informed managers and directors try to find out what went wrong and where, hours are spent on conference calls across global time zones. Often at the end of such calls no-one is any the wiser as to the causes of the problem, and the temporary ‘solution’ adds to the complexity of the next IT downtime. The reality is that in IT everyone knows the price of the service and the cost of ‘support’, but no-one knows the value of the work undertaken to allow the technology to function. That value is created by work done in universities and research institutions at minimal cost, often funded by taxpayers. Company directors pay extortionate prices for fundamental, simple services, without any understanding of how the programme was put together or how it should work.

Vicious circle

Since the 1980s businesses and institutions have come to depend increasingly on computers. The development of distributed multi-user computing, the internet and email has transformed them from the preserve of universities and engineering and scientific laboratories into tools for a much wider, almost universal user community. (Contrary to a common belief held in the US, it was not Microsoft that invented the internet. The worldwide web - the part of it most users encounter - was developed by a British physicist, Tim Berners-Lee, working at the European Organisation for Nuclear Research, Cern.)

With the perennial expansion of computers in day-to-day life, at home and at work, a debate has ensued regarding the ease of use of applications and software packages and the functionality of software. The easier a package is to use, the more complicated the functions behind the scenes, and the more expensive the application and its support. Ironically, it is this drive for simplicity of use that has created the problem: programmers hide the complexity of the function within the package, and further coding is added with each subsequent development. All this is fine until the user faces a major problem, whose resolution may require a level of expertise beyond the capability of the designated ‘support services’.

So let us start from the beginning and look at operating systems and hardware in terms that are understandable to the home or office user. Everyone knows that there is a vicious circle created by, on the one side, the latest version of the commonly used PC operating systems (mainly Microsoft and Apple) and software packages that perform various functions from word processing to email and basic spreadsheets and, on the other, the need for more memory and more hard disk space. A Windows 7 PC requires, as a minimum, a 1GHz processor with 1GB of RAM, while a lightweight Linux operating system runs quite fast on a 700MHz processor with 384MB of RAM. In practice, though, the difference between the performance of the two operating systems is more pronounced. Why? Typically a Microsoft operating system consumes around 60% of a computer’s capability (a combination of processing power, speed and memory). Applications written for this operating system are also major consumers of hardware power, so, as you add the applications necessary for daily functions, you slow down your computer and eventually need to buy a more powerful replacement. But as soon as you have done so, a new operating system and an updated series of packages, requiring even more powerful hardware, appear on the market - the constant need to ‘upgrade software’ to avoid slow functionality demands that you buy new hardware.
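To illustrate the point in concrete terms, here is a minimal sketch comparing a hypothetical ageing PC against the minimum figures quoted above (real requirements vary by edition and distribution; the machine’s specification is invented for the example):

```python
# Minimum hardware figures quoted above; real requirements vary
# by edition and distribution.
minimums = {
    "Windows 7":         {"cpu_mhz": 1000, "ram_mb": 1024},
    "lightweight Linux": {"cpu_mhz": 700,  "ram_mb": 384},
}

machine = {"cpu_mhz": 1000, "ram_mb": 1024}  # a hypothetical ageing PC

for os_name, req in minimums.items():
    meets = (machine["cpu_mhz"] >= req["cpu_mhz"]
             and machine["ram_mb"] >= req["ram_mb"])
    spare_ram = machine["ram_mb"] - req["ram_mb"]
    print(f"{os_name}: {'meets' if meets else 'fails'} the minimum, "
          f"{spare_ram}MB of RAM left over for applications")
```

On this machine the Windows minimum swallows the entire memory, leaving nothing over, while the lighter system leaves 640MB free for applications - the upgrade treadmill in miniature.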

The capitalist drive for profit is the reason for the existence of this vicious circle. There is no doubt that the overwhelming majority of personal computers (over 95%), both at work and at home, rely on Microsoft. Yet you can use quite old hardware with a Linux operating system (most distributions are free, or available at a much lower price than Microsoft Windows) and run equivalent packages much faster, in a more reliable environment than Windows allows. Linux on both PCs and servers can run for months or even years without needing a reboot. Most software programs, utilities and games available on Linux are ‘freeware’ or ‘open source’. They have value, but no price. As far as security is concerned, Linux remains a far more secure environment. And if you look for online support, solving problems encountered in a Linux environment is much easier: help is free of charge and distributed via well-researched forums, whereas online support for Microsoft often takes you to a software company that asks you to pay before you can even pose your question. Usually the answer involves investing even more money to buy a fix for an already problematic application.

So why such low usage of an efficient, secure and reliable operating system? The answer is partly to do with marketing and Microsoft’s bullish attitude, but the reality is that MS was very quick to produce easy-to-use software, as demanded by users and capitalist markets - but with it came the complication of what is often called the ‘back end’ (the actual program running when you use a package, as opposed to what you actually see).

Linux developers - typically computing scientists, physicists and engineers, as well as computer geeks - did not follow this trend. In fact, until recently they resisted making access to this operating system easy and user-friendly. By doing so they were able to maintain something that is hated by the major IT companies: the open-source project.

Programs and packages released as open source are completely free, leading to the project being ‘accused’ of being like communism - if code is a form of (intellectual) property, its free distribution is indeed communistic. In recent times, given the overwhelming advantages of open-source software (OSS), markets have had to adapt to this free software in recognition of the fact that it actually works better.

Corporate IT

As developments in the worldwide web and networking enhanced the use of servers and of distributed and multi-server systems, and as databases became an essential business requirement, information technology became a cornerstone of most corporations. Initially many companies and institutions had their own IT divisions providing in-house development of software. However, the cost of staffing this specialised sector rose, especially in terms of advanced support and server management.

The average computer user in a workplace is often confused as to the level of support he or she needs when something goes wrong. Is it just a simple user error? Is the desktop PC faulty, or is the system down? Such scenarios led to the categorisation of support, ranging from level 1, where anyone with basic IT skills can deal with the user’s query, to level 3 and above, where a specialised understanding of the system and of server support is needed, at times requiring advanced Linux and database skills. As a result of this division of labour, companies need large numbers of level 1 staff at relatively low cost, as opposed to level 3 or 4 practitioners, who are few but expensive. And, as you expand, in-house developers mean higher costs still.
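As a rough illustration of these economics, consider a toy staffing-cost model of the tiers just described (every headcount and salary below is invented for the example - none is drawn from a real company):

```python
# Toy model of tiered IT support staffing costs.
# All headcounts and annual salaries are invented for illustration.
tiers = {
    "level 1 (basic user queries)":           (40,  25_000),
    "level 2 (desktop/application faults)":   (10,  45_000),
    "level 3 (servers, databases, networks)": (3,   80_000),
    "level 4 (architects and developers)":    (1,  120_000),
}

total = 0
for name, (heads, salary) in tiers.items():
    cost = heads * salary
    total += cost
    print(f"{name}: {heads} staff at {salary:,} each = {cost:,}")

print(f"total annual support bill: {total:,}")
# Even at the lowest salary, the numerous level-1 posts dominate
# the bill - hence the pressure to outsource them first.
```

Even with the cheapest staff, the sheer number of level 1 posts dominates the bill - which is precisely where capitalism looked first for savings.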

Capitalism’s solution to this problem was to create huge support companies with cheap call centres - originally in the UK and Europe, but nowadays worldwide. In many ‘third world’ countries the high cost of Microsoft products opened the way for major progress in Linux expertise, India being a prime example. So companies started outsourcing server support to India. Servers are now hosted worldwide or ‘in the cloud’ - connected to users and the support teams via the internet - so in theory development and support can be based anywhere. However, all this adds to the complication, and in some ways the Obamacare saga is a good example of what can go wrong with such solutions. Some universities (mainly institutions where research is less important) are also moving to this model of outsourced support. All this has created its own norms and practices.

The origins of most software lie in open source. That is where you find creativity, intelligence and research. However, the need for profit and market forces soon intervene: hardware, software and support become commodities, and problems like those of the HealthCare.gov website are a common result.

HealthCare.gov itself began life on the massive open-source collaboration platform, GitHub, where its front-end software was developed. In fact, even at the time of the launch, open access to the computer code’s repository was provided. However, as the project developed and entered ‘production’, it was decided to engage a ‘much larger, more complex back end’ involving some 47 contractors. By this stage the website had become typical of current software systems: huge, interconnected entities, databases linking to each other, dynamic websites responding to queries.
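For readers unfamiliar with the jargon, a ‘dynamic website’ is one whose pages are assembled from database queries at the moment of each request, rather than served as fixed files - so a fault in any one linked system surfaces on every page at once. Here is a minimal sketch, using only Python’s standard library and an invented toy schema (it bears no relation to the real HealthCare.gov code):

```python
# Minimal sketch of a dynamic website: each page is built on the
# fly from a database query rather than served as a static file.
# The schema and figures are invented for illustration.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE plans (name TEXT, monthly_cost REAL)")
db.executemany("INSERT INTO plans VALUES (?, ?)",
               [("bronze", 200.0), ("silver", 320.0), ("gold", 450.0)])

class PlanHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page does not exist until this query succeeds: a fault
        # in the database layer therefore breaks every page at once.
        rows = db.execute("SELECT name, monthly_cost FROM plans").fetchall()
        body = "\n".join(f"{name}: ${cost}/month" for name, cost in rows)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PlanHandler).serve_forever()
```

Multiply that single query by hundreds of linked databases and 47 contractors, and the scope for failure becomes obvious.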

The experiment failed. Most probably one or more of the contractors had introduced inconsistencies or errors into the system. In an IT project with so many third-party providers or contractors there is a need for structure, planning and coordination. However, IT managers cannot be expected to understand all the underlying technologies, and failure is common. Although the new world of information technology has created its own ‘processes’ to monitor and record ‘technical development/change’, often this ‘process’ takes more time and effort than the actual development or change. Experts in the production of PowerPoint presentations with pretty graphs but little technical content fool company CEOs and directors.

In the words of one software developer, spreadsheet addicts and lawyers have taken over the IT industry. In any major city, tens of thousands of people are employed as administrators of IT systems, some at managerial level. However, many of them know very little about information technology and they definitely know next to nothing about how networks, databases and servers work. They may have memorised some acronyms, but that does not make them familiar with the technology - even if they started as computer experts, their knowledge may have become redundant and they are unable to keep up with a fast-moving industry. This administrative, bureaucratic layer adds to the total cost.

Outsourcing practices

So what are these time-consuming, expensive processes? Any fundamental change in the provision of IT services is usually overseen by a ‘change advisory board’, or CAB. As I said earlier, there are so many contractors, developers and ‘stakeholders’, as well as users (customers), that any IT change needs to be conveyed to the entire structure.

This is true whether you have in-house computing or outsourced suppliers. The problem in the latter case is that no-one knows much about the other system providers and no-one takes responsibility for overall functionality and performance, so inevitably, whether the change undertaken is routine or an emergency, you can end up involved in endless conference calls. Your network provider may be in the UK, your database provider could be in eastern Europe, your ‘first line support’ company is in India, your ‘cloud support’ in east Asia, your web supplier in the US and your hardware support in western Europe. The result is a cumbersome juggernaut that makes little progress; at times of failure all parties are busy shifting the blame onto the other suppliers, and each will report that there is nothing wrong with its own set-up or application.

Businesses will try to recoup every penny they lose through a website going down or a server not working - ‘downtime’ can be very expensive indeed. But nine out of 10 suppliers get away with shoddy performance, because they can always blame other suppliers for what went wrong. These suppliers will have signed a service-level agreement (SLA), which is a legal contract. In practice, although most such contracts refer to technical terms and conditions, it is often lawyers who determine them, with some input from technical teams. Typically the company will guarantee ‘uptime’ - the time when the server, database or website is online - of between 99.9% and 100%; yet even that missing 0.1% amounts to almost nine hours of downtime a year. However, as anyone who has taken a PC back to the shop under guarantee will know, the guarantee has exceptions and grey areas not spelt out in the terms and conditions. For a non-technical parallel, think of that other wonderful example of competition and the free market: the UK railway system. When there is a delay, the train company blames Network Rail and vice versa.
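To make those percentages concrete, here is a minimal sketch converting an SLA uptime figure into the downtime it actually permits (a worked example only - real contracts carve out exceptions, as noted above):

```python
# Convert an SLA uptime percentage into the downtime it permits.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for sla in (99.0, 99.9, 99.99, 100.0):
    allowed = MINUTES_PER_YEAR * (1 - sla / 100)
    hours, minutes = divmod(round(allowed), 60)
    print(f"{sla}% uptime permits {hours}h {minutes:02d}m of downtime per year")
```

Seen this way, the gap between 99.9% and 99.99% is the difference between a full working day of outage a year and under an hour - exactly the sort of grey area the lawyers argue over.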

So, if your website or database servers are down, or queues are forming at check-in and there is chaos at the airport, then during the conference call urgently arranged to resolve the problem the network providers say they have done their checks and all is well, while the hardware supplier claims it is the software that is the problem. Application company A blames software company B, which in turn has produced wonderful graphs proving that its system is perfect and it is the database managed by company C where the fault lies.

And then came the cloud

Over the last few years the buzzword in corporate IT has been ‘cloud’. One hears directors and managers boasting about moving everything to the cloud, as if this would magically improve the state of their IT. PC Magazine defines cloud computing as “storing and accessing data and programs over the internet instead of the computer’s hard drive. The cloud is just a metaphor for the internet.”3

The reality is that most CEOs, directors and customers of Amazon and Google fail to realise that (a) the problems with IT infrastructure listed above become 10 times worse in a cloud environment, with less flexibility and more automation; (b) a public cloud does not provide sufficient security for financial transactions or for applications where safety is paramount; and (c) a private cloud is expensive, requiring a costly, time-consuming redesign of the entire set-up along modular, parallel lines.

Anti-capitalist hackers occasionally take pleasure in breaking into or bringing down various corporate websites, and the more intelligent staff in IT security firms, aware of corporate IT’s vulnerabilities, wonder why there are so few attacks. In my opinion, it could be that there is no need for the malicious hacking of such sites. You can sit back, fold your arms and wait for this house of cards to collapse. When it does, the impact on finance, banking, insurance and the travel industry will be far more dramatic than the housing bubble ever was.

Notes

1. www.slate.com/articles/news_and_politics/politics/2013/10/obamacare_one_woman_s_unsucessful_quest_to_sign_up_for_obamacare.html.

2. http://mobile.businessweek.com/articles/2013-10-16/open-source-everything-the-moral-of-the-healthcare-dot-gov-debacle#.

3. www.pcmag.com/article2/0,2817,2372163,00.asp.