
May 13, 2017

Comments

Chris

I think as well there is an issue with the way that IT is viewed in large organisations. Do senior managers have the knowledge to allow them to make decisions about prioritising upgrades or making sure that the organisation isn't reliant on outdated systems or applications?

I worked in a very junior admin role in the NHS when I was a student, and even in that role the IT was a shambles. I've since moved into working in IT in large organisations, and many of the systems and services the public rely on in everyday life are built on software and systems written decades ago. In some cases those who can support these systems are retiring, leaving big gaps in support. You can see the results of this in some of the big IT failures of recent years.

What is needed to deal with this is to have leaders in organisations, both government and private sector, who have a background in IT and an appreciation of how to properly support and maintain systems and software. That might (a big might) prevent IT being a prime target for cuts and 'savings'.

bill40

Almost all economic problems are caused by just two factors, which may in fact condense down to one.

1. Treating a nation as a mix of household and business.
2. Shareholder value above all else (Friedman).

It really is as simple as that.

Peter K.

What's the name for the kind of mistake where "cost saving" and cuts to spending/investment are taken as an unambiguous good, regardless of whether they cut muscle along with the fat?

It's similar to how lower consumer prices are always considered an unambiguous good for society, no matter whether the product is made by slave labor or by happy unionized workers who, because they earn a living wage and can support their families, might not scapegoat immigrants and vote Trump.

Mainstream economists have let us down. It's not just the corporate media and Mediamacro.

e

Too few bean counters understand, and too few of those running IT have the clout to override them. On top of that, rampant mistrust and anxiety born of competition-based secrecy undermine efforts to build robust architectures. As with much else, we seem to be increasingly useless.

Dipper

It is hard to make general points from something like this. It is tempting to blame it on underfunding of the NHS, but the breadth of scale and type of organisations hit by this makes such generalisations invalid.

Large-scale IT is an immensely complex, evolved, uncontrolled ecosystem, with lots of incompatible software and hardware and legacy systems dotted all over the place. Upgrading operating systems in such environments can be a risk in itself. Everyone is wise after the event, but it's a rare person who can be responsible for corporate IT for any length of time and maintain their sanity.

e

@ Dipper “legacy systems dotted all over the place”. Let's be generous and say: only for over four decades. Equally generously, let's on this occasion ignore the fact that hindsight is the politician's stock-in-trade.

Stewart S

I don't think that austerity has anything to do with this. It is incompetence, plain and simple. No serious organisation would risk leaving itself open to obvious cyber-security threats or fail to have an adequate disaster recovery solution. We don't know to what extent these risks were understood internally, or whether they were adequately communicated. Or perhaps the IT function did not have sufficient clout within the hierarchy. Leaving yourself open to attack should not have been an option, even if that meant reducing some medical procedures. We can agree that the NHS is underfunded, but who is to say that an extra few billion would have solved the IT problems (until now, perhaps)? It doesn't matter how much you spend on the NHS, the demand is insatiable and there will always be pressure to skimp on basics to meet the targets.

Dipper

If there is a political lesson from this, it is that the conflict of interest in being both purchaser and provider is a toxic one. Increased privatisation and outsourcing are clearly the way to go. The threat of action over a break in service like this would cause the service provider to invest in proper infrastructure, and the NHS would just have to pay the cost.

Dipper

People are barking up the wrong tree here. This is a fork-in-the-road, everything-changes moment. It's huge.

For those who haven't worked in large IT departments: they are extraordinarily difficult and expensive places to operate in. Large organisations have infrastructure that no-one understands, that works but no-one is quite sure how. People have put in processes that take data from point A and place it on drive B, with no record of who uses it. So you cannot take them out, and if you replace a system you are never sure whether you have to build a new feed from that drive. There are systems with no access to the source code, where the vendor has gone bust. Periodically there will be outages, and one of the principal causes of these is (yes, you guessed it) upgrading the operating systems.
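
A minimal sketch of the kind of process meant here, with every path and name invented for illustration:

```python
# A hypothetical "mystery feed": copies files from point A to drive B on a
# schedule, with no record of who consumes the output. Decommission it and
# some downstream report silently dies; replace the source system and you
# must remember to rebuild this feed.
import shutil
from pathlib import Path

SOURCE = Path("/data/feeds/pointA")   # produced by something upstream
DEST = Path("/mnt/driveB/incoming")   # consumed by... nobody knows

def run_feed() -> None:
    DEST.mkdir(parents=True, exist_ok=True)
    for f in SOURCE.glob("*.csv"):
        shutil.copy2(f, DEST / f.name)

if __name__ == "__main__":
    run_feed()
```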

New entrants don't have this problem. They can use cloud services and software-as-a-service, spin up applications on hosted machines, and massively reduce their software costs. But slowly, over time, they acquire legacy systems and processes and confusion about what flows where, get bogged down in upgrades, and all the while there are new entrants coming in with clean slates.

You cannot run large industries on the basis that no company can thrive for longer than five years. At some point the numbers don't add up, so the industries will have to sort out how their software runs. My guess is that in twenty years' time no large organisation will have a data centre, or even a CPU, within the building. Everything will be outsourced: providers like HP, IBM and Microsoft are going to run most of the hardware in the world and manage the data security side and upgrades, and everyone else will just plug in, the NHS included.

gastro george

@Dipper

I actually agree with most of what you say here. Especially the diagnosis side. But this:

"Increased privatisation and outsourcing clearly the way to go."

Might be true for resources (cloud, etc.) but definitely not true for software processes. Take, for example, the New Labour NHS development project: an absolute disaster, because it was all "vision" but just handed over to outsourcers. In the end, if you don't retain knowledge in-house, you have no control. So by all means outsource the development work and the physical resources, but internally you have to keep intellectual control and define standards that are then driven into the suppliers.

Re the private sector: isn't banking a prime example? The regular outages of bank systems during upgrades are a result of previous cost-cutting (leading to old, cobbled-together systems) and now poor management and control.

Re the NHS and security: Rudd was notably trying to shift blame onto the trusts. This is appalling rubbish. In any organisation with mission-critical computer systems, it has to be a board-level person who has overall responsibility for the integrity of the system and enforces that on all users. The technical infrastructure is available to do that, and any *serious* organisation ensures it. This debacle should be a sacking matter for several people.

Dipper

@gastro george - banking IT was my thing. There are lots of legacy problems, lots of past decisions coming back to haunt you, and lots of questionable management, but that doesn't mean that the problems aren't real, hence my second post. It is easy to criticise from the outside, but when you are on the inside, faced with the landscape, it is a really tricky problem with no guarantee of an acceptable solution. New entrants can cut the Gordian knot and go to the cloud, but there is no such opportunity when you are established.

The point about outsourcing is that (in principle) you hold people accountable. If IT systems were provided by a major IT provider, I think there is little doubt that right now Jeremy Hunt would be calling for heads to roll. As it stands now, the person he has to hold responsible is himself.

gastro george

"but that doesn't mean that the problems aren't real"

Oh, I agree. I've been involved in many migrations from legacy systems. They have to be handled with great care. I've seen a large company taken over and told: we work on the MS platform, you have to migrate to it in six months. That was four years ago. It hasn't happened yet.

"The point about outsourcing is that (in principle) you hold people accountable."

I have experience of outsourcing within my own company. IMHO it only works if you can propagate your standards and quality into the outsourcers. How can you rely on it if it doesn't? Look at car manufacturers. They outsource a lot of sub-assembly. It only works because the suppliers become, effectively, part of the company. Any drop in standards or quality has an immediate impact.

"As it stands now the person he has to hold responsible is himself."

Absolutely.

Paul Carlton

If I'm not mistaken, Windows Vista took over from Windows XP as the OEM operating system on new PCs. You don't use ten-year-old PCs in large organisations, so IT in the Health Service must either have been installing an outdated operating system or using it to replace more modern software. Whose fault is that?

Thaomas

BP's skimping on infrastructure is an instance of management failing to maximize shareholder value. However difficult it is to update IT systems, it does not get harder when borrowing rates are low, so "austerity" gets the decision wrong.

Dave Timoney

Austerity has clearly played a part, in terms of a failure to invest in system upgrades, but the larger error is one of commission rather than omission, namely the decision to reject proprietary systems for commercial off-the-shelf software.

This reflected the public sector adoption of supposed private sector best practice ("buy not build") in the 90s and was driven as much by New Labour, after its u-turn on marketisation and embrace of the privatisation of ancillary services, as by the Tories.

At the time, this strategy was justified on cost-benefit grounds: it was cheaper than bespoke development, it used proven technologies, it would commodify IT skill needs and thus lower ongoing costs etc. The "legacy mish-mash" problem outlined by Dipper was also a factor: we have an old mainframe "black box" but nobody knows how it works (ironically, a black box is highly secure precisely because there is no malware that targets it).

Factors that weren't considered (because the case was usually made by business consultants, not techies) included the risks posed by sunsetting (i.e. inertia leading to unsupported systems, such as XP), data isolation (creating auxiliary systems, e.g. spreadsheets, that aren't backed-up over the network), and malware (more is written for Windows not because it is a less secure OS but because it has a massive installed base).

The NHS IT landscape is typical of the public sector but it is also typical of much of the private sector, particularly those businesses where IT strategy has been set by the CFO rather than the CTO. The besetting sin is an obsession with short-term cost control that simply builds up greater costs down the road. The root cause is a naivety over markets and commodification.

gastro george

@Paul Carlton

Business purchasing is not like retail. Machines are often offered with earlier OS versions - for example, the last time I looked on Dell's business site, Win 7 was still on offer.

And in any case, large organisations will often re-install the OS before distribution.

Upgrading to new OS versions in a large organisation is a large-scale operation and requires careful management. For the reasons we have seen, but mainly control of risk.

AndrewD

All, you might find the current thread at Charlie Stross's place of interest - especially the comments. Linky below:
http://www.antipope.org/charlie/blog-static/2017/05/rejection-letter.html

odeboyz

Engineers used to build redundancy into their systems to allow for capacity errors and/or unexpected growth. IT systems rely on bolt-ons, which almost always have migration problems at the interface between the original and post-original systems. These then become cumulative legacy 'Heath Robinson' problems.

cm

F.A.T.E.: "This reflected the public sector adoption of supposed private sector best practice ("buy not build") in the 90s ...

At the time, this strategy was justified on cost-benefit grounds: it was cheaper than bespoke development, it used proven technologies, it would commodify IT skill needs and thus lower ongoing costs etc. ..."

This is undoubtedly true, but I would extend this point to an even bigger picture: the "productivity increases" and "cost savings" of IT that "we" have come to expect can only be realized by the economies of scale of mass produced, commodity monoculture products and ecosystems.

This doesn't necessarily mean the same underlying hardware and operating system architectures. The truly hard problems in IT have to do with interoperation at the data-format level (files as well as database schemas, i.e. the structure in which data is organized) and the protocol level (how data is exchanged between parties: the technical, organizational, legal etc. aspects).
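
A toy sketch of the data-format half of that problem, with every field name and value invented: two systems hold "the same" record in incompatible shapes, so each exchange needs a hand-written translation.

```python
# System A's and System B's incompatible representations of one record.
system_a = {"id": "12345", "dob": "1955-03-01", "name": "SMITH, JOHN"}
system_b = {"PatientID": "12345", "BirthDate": "01/03/1955",
            "Surname": "SMITH", "Forename": "JOHN"}

def a_to_b(rec: dict) -> dict:
    """Translate a System A record into System B's shape."""
    surname, forename = (p.strip() for p in rec["name"].split(","))
    y, m, d = rec["dob"].split("-")
    return {"PatientID": rec["id"], "BirthDate": f"{d}/{m}/{y}",
            "Surname": surname, "Forename": forename}

assert a_to_b(system_a) == system_b  # one of hundreds of such mappings
```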

There have been pushes for "open standards" or "industry standards" but they suffer from two problems:

* In a complex enough environment, it is hard to foresee all the needs that must be addressed, close to impossible to address non-public needs that organizations cannot be compelled to disclose, and truly impossible to anticipate future needs.

* Nobody wants to be a commodity vendor, which creates a motivation to create proprietary "extensions" and other "innovations". Also, nobody wants their legitimate extensions/modifications to be held hostage by outside organizations that are necessarily political.

Both apply on the vendor side as well as the customer side. The results fall into roughly two categories:

* A succession of ever more complex (or contradictory) standards, which leads to ever-escalating complexity in legacy or mixed-use systems that can only be successfully handled by large companies, creating a barrier to entry for newcomers - a major source of consolidation, as newcomers would rather be bought out than become serious rivals.

* A similarly evolving complexity of one or a few de facto "standards", but ones that are not documented even to the level of an open standard, where expertise basically amounts to a black art. This, more so than the former, is what we have here - and it is of course highly resistant to security, or even to flawless operation.

cm

At a smaller level, vendors are reluctant to disclose too much detail, not only because knowledge is power, but also because committing to specific details constrains their own freedom to make changes. Perhaps some details had to be added in a hurry and the vendor doesn't like them itself, but disclosing them would cement them.

This doesn't work very well overall, as it is standard practice in IT that, in the absence of documentation, people are very good at figuring out "what appears to work" and come to rely and depend on it anyway. Or they come up with even worse workarounds.
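
A tiny illustration of that dynamic (everything here hypothetical): a vendor returns an undocumented status string, a client discovers by experiment that errors start with "ERR", and that guess quietly becomes load-bearing.

```python
def vendor_call(ok: bool) -> str:
    # The vendor's undocumented response format; they never promised this
    # shape and are free to change it in the next release.
    return "OK:0000" if ok else "ERR:E042 disk quota exceeded"

def client_succeeded(response: str) -> bool:
    # "What appears to work", found by trial and error: breaks silently
    # the day the vendor changes the prefix.
    return not response.startswith("ERR")

print(client_succeeded(vendor_call(True)))   # True
print(client_succeeded(vendor_call(False)))  # False
```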

gastro george

@cm

This is why, IMHO, standards and knowledge have to be preserved in-house and then propagated to suppliers. It's like any outsourcing: unless the commissioning company defines and enforces standards, they will be defined by the suppliers, and the system rapidly becomes complex and out of control. Look at the experience of car sub-system suppliers when the Japanese arrived in the UK. A real shock, as they enforced propagation of their IT and quality systems. The principles are the same.

gastro george

Regarding the complexity of data and protocols: this is well known, because large companies have to handle data from other companies as well as their own. It's a technical task, but with properly designed systems and good management it is not a problem.

But, as has been noted, this complexity demands a high level of management and care in any upgrade process, involving many elaborate test cycles before the upgrade is allowed to go live.

As we have seen with some banks, the management of this process is not always the best ...

ChrisA

Gigantic organisations lack local knowledge and consequently often make the wrong decisions. We know this, but large organisations also have benefits, including scale, which hopefully outweigh their mistakes. The advantage of the private sector is that companies that don't get this balance right go out of business. The NHS can't, of course, as it is a state enterprise. So it probably won't react in the right way; I would guess it will over-react, costing us all lots of money for no real benefit. For me, privatisation would be the obvious answer: breaking the NHS into smaller competing entities. That way it becomes less fragile; if one hospital feels the right answer is more IT and another doesn't, we can afford to wait and see which is the right approach.

By the way, in the case of this recent cyber attack it wasn't just the NHS that got hit; many other companies, small and large, were hit too. So I think it is hard to call it a straightforwardly stupid decision when so many people made the same one.

reason

"In saying all this, I’m not arguing against restraining government spending. There is of course a case for rooting out waste and making sustainable efficiency gains. Identifying such gains, however, can’t be done by here-today, gone-tomorrow politicians and managers who care only about tough talk and impressing journalists or hitting short-term targets."

I think it is not so much the short-termism here that is the biggest problem (although it is a problem); it is that the targets are grabbed out of thin air rather than being based on thorough analysis. We need to develop (AGAIN) a facts-first, ideology-second ethos.

reason

ChrisA
"The advantage of the private sector is that companies that don't have this balance correct, go out of business."

Why is this an advantage, exactly (maybe it is a disadvantage)? It is only an advantage if the system as a whole learns from the experience, and I don't see that that is necessarily true for the private sector and not for the public sector (especially if consequences are partly a result of luck).

gastro george

@Chris A - "For me, privatisation would be the obvious answer: breaking the NHS into smaller competing entities."

This doesn't solve the IT complexity problem. For example, what if I fall ill in another part of the country: how do they get my records? Also, GPs are already separate entities. How do they and the myriad hospitals/services, all with different IT strategies under your scheme, share data?

Jim

"Genuine, lasting restraint requires empowering public sector workers (who have both knowledge of ground truth and skin in the game to ensure efficient services in the future) "

What 'skin in the game' do public sector workers have? Are they going to be fired for failure? No. Are they going to be made redundant because their decisions have lost the organisation customers and revenue? No. What incentive does the public sector worker have to reduce costs and increase efficiency at all? None; indeed, doing so could well put them out of a job.

Dipper

@ gastro george - the problem here is not necessarily one of complexity, it is one of accountability. If service providers to the NHS faced significant penalties for failing to provide scheduled services, then they would make sure their systems were up to date.

The model of free markets within regulation is one that has been developed over the last thirty years and has generally worked well. It makes discussion of national ownership at best irrelevant and at worst positively damaging. Regulators can be as heavy-handed or as light-touch as is appropriate. From my own experience in banking, heavy regulation can be appropriate; for instance, I would guess the major part of investment in banking over the last few years has been at the direct instruction of government. The government, via the regulators, requires proof of adequate steps to ensure service availability and to handle incidents, and through derivatives regulation it has forced common standards of technology across many banks. Failure to meet these standards results in requirements for detailed plans and regular progress updates (these normally being assessed by the bank's Audit team, who report directly to the regulator).

Folks interested in the way government and business can interact in a regulated environment should be spending a lot of time looking at banking. In particular, people on the left should be looking at how governments can control major components of the national commercial landscape without taking direct control. Needless to say, they are not, but have instead indulged themselves in sloganeering and direct personal attacks.

Dicky

We need a 'counterproductive austerity' top-level anti-pattern. Some suggestions for underlying taxonomical sub-divisions:
Responsibility Shifting: e.g. we are no longer responsible for maintaining roads to a standard, just for fixing problems when they escalate to some risk level (claims? complaints?).
Deferred Risk Shifting: we no longer pay to manage risks that we know about but whose likelihood of occurrence has no defined timescale.

gastro george

@Dipper

I don't think we're disagreeing (at least not much) here. I can imagine banks offer many parallels to the NHS, including the management of legacy systems and the struggle to merge datasets after takeovers/mergers, etc. At the risk of being wrong, I would imagine that the number of data interfaces between banks is smaller than that between components of the NHS, but the principles would be the same.

But do banks outsource their IT intellectual property - in terms of design - to suppliers? That's my main point. I've no problem with having a market in development, operation or resources. It's just that the top-level organisation needs control over intellectual knowledge, standards and design. This can actually open up a market, as open standards are more amenable to multiple suppliers than off-the-shelf black-box systems.

Guano

This kind of event is a low probability, very high impact event. We don't know what the probability actually is, because the time-series isn't long enough to calculate it. We don't know who will get blamed or where the costs will be apportioned. There is a tendency to proceed as if the probability is zero, even though it isn't.

Result = misery.
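
Rough arithmetic makes the point, with all figures invented: across the whole plausible range of probabilities the expected loss dwarfs the mitigation cost, yet "proceed as if the probability is zero" makes the saving look free.

```python
mitigation = 1_000_000      # annual cost of patching/upgrades (assumed)
impact = 500_000_000        # cost of one major outage (assumed)

for p in (0.001, 0.01, 0.05):   # plausible annual probabilities
    print(f"p={p}: expected annual loss {p * impact:>12,.0f} "
          f"vs mitigation {mitigation:,.0f}")
# At p=0.01 the expected loss is five times the mitigation cost;
# at the assumed p=0 it is, conveniently, nil.
```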

Dipper

@ gastro george - in my area of derivatives there was increasing commoditisation, and along with it less advantage in having your own software. Also, suppliers are increasingly turning to hosting systems themselves rather than rolling them out on bank hardware.

The trend will continue. Attention will turn to how banks report the risk on their portfolios, and to why each bank does it differently. I can't see any reason why the world will not end up with all banks' derivatives and fixed-income desks having their positions marked by external bodies (presumably some mash-up of large companies such as HP/MS/IBM/Accenture etc. and bodies such as exchanges and clearing houses), with the standardised risk information made available to the banks to investigate. That also reduces barriers to entry for new, smaller entrants.

Blissex

«The point about outsourcing is that (in principle) you hold people accountable.»

That is such a stupid illusion: a lot of outsourcing deals are structured so that nobody is accountable, and it is very easy to do so. Finger pointing and the merry-go-round are the almost inevitable result, and often the goal, of outsourcing.

Of course it is *possible* to structure an outsourcing deal so that there is effective accountability and correct incentives.

But this requires such a strength of executive will and skill to manage the deal for the best outcome that, if they were available, outsourcing would not be needed to ensure accountability.

Blissex

«This kind of event is a low probability, very high impact event.»

That is a frequent situation, and one of the ways of becoming rich and powerful in an organization is to look very cost-effective by creating a lot of "low probability, very high impact" risks where the risk is not accounted for formally. In finance it is called the "picking up pennies in front of steamrollers" strategy.
At places like Enron, the Greek ministry of finance, TEPCO, RBS, or the Department of Health it is called "everyday business".

My way of describing it is the principle that every non-trivial financial fraud is based on under-depreciation (of plant, risk, ...).
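
A minimal simulation of the steamroller strategy, with invented parameters: gain 1 per period, lose 250 with probability 1/100. Most short track records look profitable even though the true expected value per period is negative.

```python
import random

def track_record(periods: int, p_crash: float = 0.01,
                 gain: float = 1.0, crash: float = -250.0) -> float:
    return sum(crash if random.random() < p_crash else gain
               for _ in range(periods))

random.seed(1)
runs = [track_record(50) for _ in range(10_000)]
print(f"50-period records that look profitable: "
      f"{sum(r > 0 for r in runs) / len(runs):.0%}")        # roughly 60%
print(f"true expected value per period: {0.99 * 1.0 + 0.01 * -250.0:+.2f}")
```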

Blissex

«every non-trivial financial fraud is based on under-depreciation»

And usually, even if not always, vice versa: most cases of under-depreciation are deliberate fraud. It is called the "sharecropper problem".

Outsourcing is a particularly clever way of under-depreciating risk and assets: it gives the illusion of passing risk to the outsourcer, while any wise outsourcer will attach the outsourcing contract to some empty off-balance-sheet vehicle that will just go bankrupt when the risk happens.

Blissex

The delusion that outsourcing brings accountability seems to me based on the usual "have your cake and eat it" illusion: that accountability costs nothing, being entirely at the contractor's cost, and that to get it all the minister has to do is tick the box "full accountability for free included" before signing.

In the real world, explicitly or implicitly, the contractor will tell the minister "this is the price with pretend accountability; with real accountability it is 70% extra", and then the minister will make a choice thinking "by then I will be gone".

Guano

"The delusion that outsourcing bring accountability seems to me based on the usual ... illusion .... that accountability costs nothing."


Yes, quite. There's nothing here that suggests that outsourcing is the way to deal with problem.

https://en.wikipedia.org/wiki/Taleb_distribution

I suppose that somebody from another organisation will point out the risks that have been overlooked by the first organisation, but in practice that doesn't seem to be the way that it works. The international mining industry is adept at putting away its long-term environmental risks in small companies that don't have the capacity to pay up when those risks turn into reality.

Blissex

«somebody from another organisation will point out the risks that have been overlooked by the first organisation, but in practice that doesn't seem to be the way that it works»

Indeed. I recently read this very nice example, from a commenter on another blog, of the English tendency to "worry later":

“I recall working on an (ill-fated, of course) project which had a largely German team from the vendor’s side.
We had the usual disorganised make-it-up-as-you-go-along mentality which caused utter incomprehension and bemusement. As we were the clients and they were the suppliers, they had little leverage which skewed how much say they had, but they made continuous attempts to instil more rigour and efficiency into the design and decision-making process. To no avail, I hasten to add. It was like trying to push the North Poles of two magnets together. The harder one side shoved, the more resistance was generated. I’ve never encountered anything quite like it, before or since.”

Dipper

@Blissex

I've heard lots of stories, and had lots of experience, of different organisations' approaches to implementing large projects. The most extreme was an organisation that simply rammed the system in in a few months and spent the next year picking up the pieces. That was a Dutch organisation.

Lots of projects are "ill-fated". There are organisations that plan and test so much that they cannot get systems in, and projects are constantly being reset and relaunched. In my experience, the one thing that determines how successful a large project will be is the degree of accountability and ownership in the department commissioning the project, and its degree of influence over IT.

