Is austerity counter-productive even in microeconomic terms? I ask because the ransomware attack on the NHS seems to have been due, at least in part, to a lack of spending on IT. The Register says:
The NHS is thought to have been particularly hard hit because of the antiquated nature of its IT infrastructure. A large part of the organization's systems are still using Windows XP, which is no longer supported by Microsoft, and Health Secretary Jeremy Hunt cancelled a pricey support package in 2015 as a cost-saving measure.
That “cost saving” might well turn out to have been no such thing. It’s possible that the cost of repairing the attack, of cancelled operations and of updating IT systems will more than offset the initial saving.
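To make the point concrete, here is a toy calculation. Every figure is hypothetical — the real cost of the cancelled support deal and of the clean-up are not public in this detail — but the shape of the arithmetic is the point:

```python
# Toy comparison of a "cost saving" against the deferred costs it can trigger.
# ALL figures below are hypothetical, for illustration only.

annual_support_saving = 5.5e6   # assumed cost of the cancelled support deal, per year (£)
years_saved = 2                 # years the saving was banked before the attack

nominal_saving = annual_support_saving * years_saved

# Assumed costs incurred when the risk materialises:
incident_cleanup = 20e6         # restoring systems, consultancy, overtime (£)
cancelled_operations = 15e6     # clinical work postponed or redone (£)
emergency_upgrades = 50e6       # the upgrade programme now done in a rush (£)

deferred_cost = incident_cleanup + cancelled_operations + emergency_upgrades

net_effect = nominal_saving - deferred_cost
print(f"Nominal saving: £{nominal_saving / 1e6:.1f}m")
print(f"Deferred cost:  £{deferred_cost / 1e6:.1f}m")
print(f"Net effect:     £{net_effect / 1e6:.1f}m")  # negative: the "saving" was a loss
```

On these made-up numbers the "saving" is swamped many times over; the real figures would differ, but the structure of the mistake — booking the saving while ignoring the contingent cost — is the same.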
This is by no means an isolated example. Cutting prison staff looks like a “saving”, until prisoners either riot or re-offend upon release because of a lack of rehabilitation. “Savings” on flood defences become no such thing when we need to repair the damage done by flooding. Low spending on social care adds to NHS costs by keeping people in hospital. Cuts in psychiatric care add to police spending. A lack of spending on schools isn’t a saving if parents pay a hidden tax by donating money to schools. And so on.
In ways like these, austerity is counter-productive. It either means shifting the burden onto other public services, or increased spending later to pay for the damage done by cuts.
This is an entirely separate point from the Keynesian one, that austerity means weaker economic activity and hence lower tax revenues. Putting the two together, though, merely reinforces the point – that austerity fails, in the long-term, even in its own terms.
Now, I’m not complaining here only of Tory stupidity. A similar thing happens in the private sector. In the early 00s BP’s CEO John Browne cut maintenance spending. That raised profits. But only temporarily. The savings he made were swamped by the costs incurred when the Texas City oil refinery blew up.
Browne had an excuse. He was trying to impress shareholders who were daft enough to pay more attention to quarterly earnings reports than to the ground truth of dangerous refineries.
But who is the government trying to impress? It’s certainly not bond markets, who are happy to finance government borrowing at negative real interest rates. Instead, it’s political journalists with their silly mediamacro obsession with the “nation’s finances” and ignorance of the ground truth of whether austerity is in fact feasible in the long run.
In saying all this, I’m not arguing against restraining government spending. There is of course a case for rooting out waste and making sustainable efficiency gains. Identifying such gains, however, can’t be done by here-today, gone-tomorrow politicians and managers who care only about tough talk and impressing journalists or hitting short-term targets. Genuine, lasting restraint requires empowering public sector workers (who have both knowledge of ground truth and skin in the game to ensure efficient services in the future) to identify genuine waste. Austerity, when combined with managerialism, is just daft.
I think as well there is an issue with the way that IT is viewed in large organisations. Do senior managers have the knowledge to allow them to make decisions about prioritising upgrades or making sure that the organisation isn't reliant on outdated systems or applications?
I worked in a very junior admin role in the NHS when I was a student, and even in that role the IT was a shambles. I've since moved into working in IT in large organisations, and many of the systems and services the public rely on in everyday life are based on software and systems written decades ago. In some cases those who can support these systems are retiring, leaving big gaps in support. You can see the results of this in some of the big IT failures of recent years.
What is needed to deal with this is to have leaders in organisations, both government and private sector, who have a background in IT and an appreciation of how to properly support and maintain systems and software. That might (a big might) prevent IT being a prime target for cuts and 'savings'.
Posted by: Chris | May 13, 2017 at 01:21 PM
Almost all economic problems are caused by just two factors, which may in fact condense down to one.
1. Treating a nation as a mix of household and business.
2. Shareholder value above all else (Friedman).
It really is as simple as that.
Posted by: bill40 | May 13, 2017 at 01:22 PM
What's the name for this kind of mistake, where "cost saving" and cuts to spending/investment are taken as an unambiguous good, no matter whether they cut muscle along with the fat?
It's similar to how lower consumer prices are always considered to be an unambiguously good thing for society no matter if the product is made by slave labor or by unionized happy workers who might not scapegoat immigrants and vote Trump since they are making a living wage and can support their families.
Mainstream economists have let us down. It's not just the corporate media and Mediamacro.
Posted by: Peter K. | May 13, 2017 at 02:55 PM
Too few bean counters understand, and too few of those running IT have the clout to override them. On top of that, rampant mistrust and anxiety born of competition-based secrecy undermine efforts to build robust architectures. As with much else, we seem to be increasingly useless.
Posted by: e | May 13, 2017 at 02:58 PM
It is hard to make general points from something like this. It is tempting to blame it on underfunding of the NHS, but the breadth of scale and type of organisations hit by this makes such generalisations invalid.
Large scale IT is an immensely complex, evolved, uncontrolled ecosystem, with lots of incompatible software and hardware and legacy systems dotted all over the place. Upgrading operating systems in such environments can be a risk in itself. Everyone is wise after the event, but it's a rare person who can be responsible for corporate IT for any length of time and maintain their sanity.
Posted by: Dipper | May 13, 2017 at 04:27 PM
@ Dipper “legacy systems dotted all over the place” Let's be generous and say: only for over four decades. Equally generously, let's on this occasion ignore the fact that hindsight is the politician's stock-in-trade.
Posted by: e | May 13, 2017 at 05:10 PM
I don't think that austerity has anything to do with this. It is just incompetence, plain and simple. No serious organisation would risk leaving itself open to obvious cyber security threats or fail to have an adequate disaster recovery solution. We don't know to what extent these risks were understood internally, or whether they were adequately communicated. Or perhaps the IT function did not have sufficient clout within the hierarchy. Leaving yourself open to attack should not have been an option, even if this meant reducing some medical procedures. We can agree that the NHS is underfunded, but who is to say that an extra few billion would have solved the IT problems (until now, perhaps)? It doesn't matter how much you spend on the NHS: the demand is insatiable and there will always be pressure to skimp on basics to meet the targets.
Posted by: Stewart S | May 13, 2017 at 05:25 PM
If there is a political lesson in this, it is that the conflict of interest of being both purchaser and provider is a toxic one. Increased privatisation and outsourcing clearly the way to go. The threat of action over a break in service like this would cause the service provider to invest in proper infrastructure, and the NHS would just have to pay the cost.
Posted by: Dipper | May 13, 2017 at 05:30 PM
People are barking up the wrong tree here. This is a fork-in-the-road, everything-changes moment. It's huge.
For those who haven't worked in large IT departments: they are extraordinarily difficult and expensive places to operate in. Large organisations have infrastructure that no-one understands, that works but no-one is quite sure how. People have put in processes that take data from point a and place it on drive b, with no record of who uses it. So you cannot take them out, and if you replace a system you are never sure whether you have to build a new feed from this drive. There are systems with no access to the source code, where the vendor has gone bust. Periodically there will be outages, and one of the principal causes of these is - yes, you guessed it - upgrading the operating systems.
New entrants don't have this problem. They can use cloud services, software as a service, spin up applications on hosted machines and massively reduce their software costs. But slowly over time they get legacy systems and processes, confusion about what-flows-where, bogged down in upgrades, and all the time there are new entrants coming in with clean slates.
You cannot run large industries on the basis of no company being able to thrive longer than 5 years. At some point the numbers don't add up, so the industries will have to sort out how their software runs. My guess is that in twenty years' time no large organisation will have a data centre, or even a CPU, within the building. Everything will be outsourced; providers like HP, IBM and Microsoft are going to run most of the hardware in the world and manage the data security side and upgrades, and everyone else will just plug in, the NHS included.
Posted by: Dipper | May 13, 2017 at 07:03 PM
@Dipper
I actually agree with most of what you say here. Especially the diagnosis side. But this:
"Increased privatisation and outsourcing clearly the way to go."
Might be true for resources (cloud, etc.) but definitely not true for software processes. Take, for example, the New Labour NHS dev project. An absolute disaster because it was all "vision" but just handed over to outsourcers. In the end, if you don't retain knowledge in-house, you have no control. So, by all means, outsource the development work and the physical resources, but internally you have to keep intellectual control and define standards that must be driven into the suppliers.
Re the private sector. Isn't banking a prime example? The regular outages of bank systems during upgrades are a result of previous cost-cutting (leading to old, cobbled-together systems) and now of poor management and control.
Re the NHS and security. Rudd was notably trying to shift blame onto the trusts. This is appalling rubbish. In any organisation with mission-critical computer systems, it has to be a board-level person who has overall responsibility for the integrity of the system, and who enforces that on all users. The technical infrastructure is available to do that, and any *serious* organisation ensures that it does. This debacle should be a sacking matter for several people.
Posted by: gastro george | May 13, 2017 at 08:33 PM
@gastro george - banking IT was my thing. There are lots of legacy problems, lots of past decisions coming back to haunt you, lots of questionable management, but that doesn't mean that the problems aren't real, hence my second post. It is easy when you are on the outside to criticise, but when you are on the inside and faced with the landscape it is a really tricky problem with no guarantee of an acceptable solution. New entrants can cut the Gordian Knot and go to the cloud but no such opportunity when you are established.
The point about outsourcing is that (in principle) you hold people accountable. If IT systems were provided by a major IT provider, I think there is little doubt that right now Jeremy Hunt would be calling for heads to roll. As it stands now the person he has to hold responsible is himself.
Posted by: Dipper | May 13, 2017 at 09:26 PM
"but that doesn't mean that the problems aren't real"
Oh, I agree. I've been involved in many migrations from legacy systems. They have to be handled with great care. I've seen a large company taken over and told - we work on MS platform, you have to migrate to this in 6 months. That was 4 years ago. Hasn't happened yet.
"The point about outsourcing is that (in principle) you hold people accountable."
I have experience of outsourcing within my own company. IMHO it only works if you can propagate your standards and quality into the outsourcers. How can you rely on it if it doesn't? Look at car manufacturers. They outsource a lot of sub-assembly. It only works if the suppliers become effectively part of the company. Any drop in standards or quality has an immediate impact.
"As it stands now the person he has to hold responsible is himself."
Absolutely.
Posted by: gastro george | May 13, 2017 at 11:28 PM
If I'm not mistaken, Windows Vista took over from Windows XP as the OEM operating system on new PCs. You don't use ten-year-old PCs in large organisations, so IT in the Health Service have either been installing an outdated operating system or using it to replace more modern software. Whose fault is that?
Posted by: Paul Carlton | May 14, 2017 at 12:12 AM
BP's skimping on infrastructure is an instance of the failure of management to maximize shareholder value. Regardless of how difficult it is or isn't to update IT systems, it does not get harder when borrowing rates are low, so "austerity" gets the decision wrong.
Posted by: Thaomas | May 14, 2017 at 11:18 AM
Austerity has clearly played a part, in terms of a failure to invest in system upgrades, but the larger error is one of commission rather than omission, namely the decision to reject proprietary systems for commercial off-the-shelf software.
This reflected the public sector adoption of supposed private sector best practice ("buy not build") in the 90s and was driven as much by New Labour, after its u-turn on marketisation and embrace of the privatisation of ancillary services, as by the Tories.
At the time, this strategy was justified on cost-benefit grounds: it was cheaper than bespoke development, it used proven technologies, it would commodify IT skill needs and thus lower ongoing costs etc. The "legacy mish-mash" problem outlined by Dipper was also a factor: we have an old mainframe "black box" but nobody knows how it works (ironically, a black box is highly secure precisely because there is no malware that targets it).
Factors that weren't considered (because the case was usually made by business consultants, not techies) included the risks posed by sunsetting (i.e. inertia leading to unsupported systems, such as XP), data isolation (creating auxiliary systems, e.g. spreadsheets, that aren't backed-up over the network), and malware (more is written for Windows not because it is a less secure OS but because it has a massive installed base).
The NHS IT landscape is typical of the public sector but it is also typical of much of the private sector, particularly those businesses where IT strategy has been set by the CFO rather than the CTO. The besetting sin is an obsession with short-term cost control that simply builds up greater costs down the road. The root cause is a naivety over markets and commodification.
Posted by: Dave Timoney | May 14, 2017 at 11:26 AM
@Paul Carlton
Business purchasing is not like retail. Often machines are offered with earlier OS versions - for example the last time I looked on Dell's business site Win 7 was still on offer.
And in any case, large organisations will often re-install the OS before distribution.
Upgrading to new OS versions in a large organisation is a large-scale operation and requires careful management. For the reasons we have seen, but mainly control of risk.
Posted by: gastro george | May 14, 2017 at 04:27 PM
All, you might find the current thread at Charlie Stross's place of interest - especially the comments. Link below:
http://www.antipope.org/charlie/blog-static/2017/05/rejection-letter.html
Posted by: AndrewD | May 14, 2017 at 06:05 PM
Engineers used to build redundancy into their systems to allow for capacity errors and/or unexpected growth. IT systems rely on bolt-ons, which almost always have migration problems at the interface of the original and post-original systems. These then become cumulative legacy 'Heath Robinson' problems.
Posted by: odeboyz | May 14, 2017 at 06:51 PM
F.A.T.E.: "This reflected the public sector adoption of supposed private sector best practice ("buy not build") in the 90s ...
At the time, this strategy was justified on cost-benefit grounds: it was cheaper than bespoke development, it used proven technologies, it would commodify IT skill needs and thus lower ongoing costs etc. ..."
This is undoubtedly true, but I would extend this point to an even bigger picture: the "productivity increases" and "cost savings" of IT that "we" have come to expect can only be realized by the economies of scale of mass produced, commodity monoculture products and ecosystems.
This doesn't necessarily mean the same underlying hardware and operating system architectures. The truly hard problems in IT have to do with interoperation at the data format (files as well at database schemas, i.e. the structure in which data is organized) and protocol levels (how data is exchanged between parties - technical, organizational, legal etc. aspects).
There have been pushes for "open standards" or "industry standards" but they suffer from two problems:
* In a complex enough environment, it is hard to foresee all needs that need to be addressed, close to impossible to address non-public needs that organizations cannot be compelled to disclose, and truly impossible to anticipate future needs.
* Nobody wants to be a commodity vendor, creating a motivation to create proprietary "extensions" and other "innovations". Also, nobody wants their legitimate extensions/modifications to be held hostage by outside organizations that are necessarily political.
Both apply at the vendor as well as customer side. The results fall in roughly two categories:
* A succession of ever more complex (or contradictory) standards, which leads to ever escalating complexity in legacy or mixed-use systems that can only be successfully handled by large companies, creating a barrier to entry for newcomers - a major source of consolidation as newcomers will rather be bought out than become serious rivals.
* A similarly evolving complexity of one or a few de-facto "standards", which however are not documented even at the level of an open standard, and where expertise basically amounts to a black art. This, more so than the former, is what we have here - and it is of course highly resistant to security or even flawless operation.
Posted by: cm | May 14, 2017 at 08:52 PM
At a smaller level, vendors are reluctant to disclose too much detail not only because knowledge is power, but also because committing to specific details constrains their own freedom to make changes. Perhaps some details had to be added in a hurry, the vendor doesn't like them itself, but disclosing them will cement them.
This doesn't work very well overall, as it is standard practice in IT that in the absence of documentation people are very good at figuring out "what appears to work", and come to rely and depend on it anyway. Or they come up with even worse workarounds.
Posted by: cm | May 14, 2017 at 08:57 PM
@cm
This is why, IMHO, standards and knowledge have to be preserved in-house, and then propagated to suppliers. It's like any outsourcing. Unless the commissioning company defines and enforces standards, then they will be defined by the suppliers, and the system rapidly becomes complex and out of control. Look at the experience of car sub-system suppliers when the Japanese arrived in the UK. A real shock as they enforced propagation of their IT and quality systems. The principles are the same.
Posted by: gastro george | May 14, 2017 at 10:25 PM
Regarding the complexity of data and protocols, this is well known, because large companies have to handle data from other companies as well as their own. It's a technical task, but with properly designed systems, and good management, is not a problem.
But, as has been noted, this complexity means a high level of management and care for any upgrade process, involving many elaborate test cycles before the upgrade would be allowed to go live.
As we have seen with some banks, the management of this process is not always the best ...
Posted by: gastro george | May 14, 2017 at 10:34 PM
Gigantic organisations lack local knowledge and consequently often make the wrong decisions. We know this, but large organisations also have benefits, including scale, which hopefully outweigh their mistakes. The advantage of the private sector is that companies that don't have this balance correct, go out of business. The NHS can't, of course, as it is a state enterprise. So probably it won't react in the right way; I would guess it will over-react, costing us all lots of money for no real benefit. To my mind privatisation would be the obvious answer: breaking the NHS into smaller competing entities. This way it becomes less fragile; if one hospital feels the right answer is more IT and another doesn't, we can afford to wait and see which is the right approach.
By the way, in the case of this recent cyber attack it wasn't just the NHS that got hit; many other companies, small and large, also got hit. So I think it is hard to say it was an obviously stupid decision when so many people made the same decision.
Posted by: ChrisA | May 15, 2017 at 10:11 AM
"In saying all this, I’m not arguing against restraining government spending. There is of course a case for rooting out waste and making sustainable efficiency gains. Identifying such gains, however, can’t be done by here-today, gone-tomorrow politicians and managers who care only about tough talk and impressing journalists or hitting short-term targets."
I think it is not so much the short-termism here that is the biggest problem (although it is a problem); it is that the targets are grabbed out of thin air rather than being the product of thorough analysis. We need to develop (AGAIN) a facts-first, ideology-second ethos.
Posted by: reason | May 15, 2017 at 10:48 AM
ChrisA
"The advantage of the private sector is that companies that don't have this balance correct, go out of business."
Why is this an advantage exactly (maybe it is a disadvantage)? It is only an advantage if the system as a whole learns from the experience, and I don't see that that is necessarily true for the private sector and not for the public sector (especially if consequences are partly a result of luck).
Posted by: reason | May 15, 2017 at 10:51 AM
@Chris A - "Myself privitisation would be the obvious answer, breaking the NHS into smaller competing entities."
This doesn't stop the IT complexity problem. For example, what if I fall ill in another part of the country - how do they get my records? Also, GPs are already separate entities. How do they and myriad hospitals/services, all with different IT strategies under your scheme, share data?
Posted by: gastro george | May 15, 2017 at 11:12 AM
"Genuine, lasting restraint requires empowering public sector workers (who have both knowledge of ground truth and skin in the game to ensure efficient services in the future) "
What 'skin in the game' do public sector workers have? Are they going to be fired for failure? No. Are they going to be made redundant because their decisions have lost the organisation customers and revenue? No. What incentive does the public sector worker have to reduce costs and increase efficiency at all? None; indeed, to do so could well put them out of a job.
Posted by: Jim | May 15, 2017 at 01:57 PM
@ gastro george - the problem here is not necessarily one of complexity, it is one of accountability. If service providers to the NHS experienced significant penalties due to failure to provide scheduled services then they would make sure their systems were up to date.
The model of free markets within regulation is one that has been developed over the last thirty years and has generally worked well. It makes discussion of national ownership at best irrelevant and at worst positively damaging. Regulators can be as heavy-handed or as light-touch as is appropriate. From my own experience in banking, heavy regulation can be appropriate; for instance, I would guess the major part of investment in banking over the last few years has been at the direct instruction of government. The government, via the regulators, requires proof of adequate steps to ensure service availability and to handle incidents, and through derivatives regulation has forced common standards of technology across many banks. Failure to meet these standards results in requirements for detailed plans and regular progress updates (these normally being assessed by the bank's Audit team, who report directly to the regulator).
Folks interested in the way government and business can interact in a regulated environment should be spending a lot of time looking at banking. Particularly people on the left should be looking how governments can control major components of the national commercial landscape without taking direct control. Needless to say they are not, but have instead indulged themselves in sloganeering and direct personal attacks.
Posted by: Dipper | May 15, 2017 at 02:05 PM
We need a 'counterproductive austerity' top-level anti-pattern. Some suggestions for underlying taxonomical sub-divisions?
Responsibility Shifting: e.g. we are no longer responsible for maintaining roads to a standard: just fixing problems when they escalate to some risk level (claims? complaints?)
Deferred Risk Shifting: we no longer pay to manage risk that we know about but has no defined timescale of occurrence likelihood
Posted by: Dicky | May 15, 2017 at 04:19 PM
@Dipper
I don't think we're disagreeing (at least much) here. I can imagine banks offer many parallels to the NHS, including management of legacy systems and the struggle to merge datasets after takeovers/mergers, etc. At the risk of being wrong, I would imagine that the number of data interfaces between banks is less than that between components of the NHS, but the principles would be the same.
But do banks outsource their IT intellectual property - in terms of design - to suppliers? That's my main point. I've no problem having a market in dev, running or resources. Just that the top-level organisation needs control over intellectual knowledge, standards and design. This can actually open up a market, as open standards are more amenable to multiple suppliers than off-the-shelf black-box systems.
Posted by: gastro george | May 15, 2017 at 04:37 PM
This kind of event is a low probability, very high impact event. We don't know what the probability actually is, because the time-series isn't long enough to calculate it. We don't know who will get blamed or where the costs will be apportioned. There is a tendency to proceed as if the probability is zero, even though it isn't.
Result = misery.
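The point can be put in expected-value terms. With entirely made-up numbers: even when the annual probability of the event is small and uncertain, the expected loss can easily exceed the cost of mitigation, so treating the probability as zero is not a neutral default:

```python
# Expected-loss sketch for a low-probability, very high impact event.
# All numbers are hypothetical, for illustration only.

impact = 100e6           # assumed cost if the event happens (£)
mitigation_cost = 2e6    # assumed annual cost of patching/support (£)

# The true annual probability is unknown, so scan a plausible range.
for p in (0.01, 0.05, 0.10):
    expected_loss = p * impact
    verdict = "mitigate" if expected_loss > mitigation_cost else "accept risk"
    print(f"p={p:.2f}: expected loss £{expected_loss / 1e6:.0f}m -> {verdict}")

# Setting p = 0 makes "accept risk" look free, which is exactly the error.
```

On these assumptions the break-even probability is only 2% a year; anyone who implicitly books the probability as zero will always find the mitigation "too expensive".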
Posted by: Guano | May 15, 2017 at 04:38 PM
@ gastro george - in my area of derivatives there was increasing commoditisation, and along with it less of an advantage of having your own software. Also suppliers are increasingly turning to hosting systems rather than rolling them out on bank hardware.
The trend will continue. Attention will turn to how banks report the risk on their portfolios, and why each bank does it differently. I can't see any reason why the world will not end up with all banks derivatives and fixed income desks having their positions marked by external bodies (presumably some mash-up of large companies such as HP/MS/IBM/Accenture etc and bodies such as exchanges and clearing houses) and their standardised risk information made available to them for them to investigate. That also reduces barriers to entry for new smaller entrants.
Posted by: Dipper | May 15, 2017 at 06:06 PM
«The point about outsourcing is that (in principle) you hold people accountable.»
That is such a stupid illusion: a lot of outsourcing deals are structured so that nobody is accountable, and it is very easy to do so. Finger pointing and the merry-go-round are the almost inevitable result, and often the goal, of outsourcing.
Of course it is *possible* to structure an outsourcing deal so that there is effective accountability and correct incentives.
But this requires such strength of executive will and skill to manage the deal for the best outcome that, if they were available, outsourcing would not be needed to ensure accountability.
Posted by: Blissex | May 16, 2017 at 12:26 PM
«This kind of event is a low probability, very high impact event.»
That is a frequent situation, and one of the ways of becoming rich and powerful in an organization is to look very cost-effective by creating a lot of "low probability, very high impact" risks where the risk is not accounted for formally. In finance it is called the "picking up pennies in front of steamrollers" strategy.
At places like Enron, the Greek ministry of finance, TEPCO, RBS, or the Department of Health it is called "everyday business".
My way of describing it is the principle that every non-trivial financial fraud is based on under-depreciation (of plant, risk, ...).
Posted by: Blissex | May 16, 2017 at 12:35 PM
«every non-trivial financial fraud is based on under-depreciation»
And usually, even if not always, vice versa: most cases of under-depreciation are deliberate fraud. It is called the "sharecropper problem".
Outsourcing is a particularly clever way of under-depreciating risk and assets: it gives the illusion of passing risk to the outsourcer, while any wise outsourcer will attach the outsourcing contract to some empty off-balance-sheet vehicle that will just go bankrupt when the risk happens.
Posted by: Blissex | May 16, 2017 at 12:49 PM
The delusion that outsourcing brings accountability seems to me based on the usual "have your cake and eat it" illusion: that accountability costs nothing, being entirely at the contractor's cost, and that to get it all the minister has to do is tick the box "full accountability for free included" before signing.
In the real world, explicitly or implicitly, the contractor will tell the minister "this is the price with pretend accountability; with real accountability it is 70% extra", and then the minister will make a choice thinking "you will be, I will be gone".
Posted by: Blissex | May 16, 2017 at 03:19 PM
"The delusion that outsourcing brings accountability seems to me based on the usual ... illusion .... that accountability costs nothing."
Yes, quite. There's nothing here that suggests that outsourcing is the way to deal with the problem.
https://en.wikipedia.org/wiki/Taleb_distribution
I suppose that somebody from another organisation will point out the risks that have been overlooked by the first organisation, but in practice that doesn't seem to be the way that it works. The international mining industry is adept at putting away its long-term environmental risks in small companies that don't have the capacity to pay up when those risks turn into reality.
Posted by: Guano | May 16, 2017 at 04:17 PM
«somebody from another organisation will point out the risks that have been overlooked by the first organisation, but in practice that doesn't seem to be the way that it works»
Indeed, I recently read this very nice example, from a commenter on another blog, of the English tendency to "worry later":
“I recall working on an (ill-fated, of course) project which had a largely German team from the vendor’s side.
We had the usual disorganised make-it-up-as-you-go-along mentality which caused utter incomprehension and bemusement. As we were the clients and they were the suppliers, they had little leverage which skewed how much say they had, but they made continuous attempts to instil more rigour and efficiency into the design and decision-making process. To no avail, I hasten to add. It was like trying to push the North Poles of two magnets together. The harder one side shoved, the more resistance was generated. I’ve never encountered anything quite like it, before or since.”
Posted by: Blissex | May 16, 2017 at 09:59 PM
@Blissex
I've heard lots of stories, and have had lots of experience, of different organisations' approaches to implementing large projects. The most extreme was an organisation that simply rammed the system in over a few months and spent the next year picking up the pieces. That was a Dutch organisation.
Lots of projects are "ill-fated". There are organisations that plan and test so much they cannot get systems in and projects are constantly being reset and relaunched. In my experience the one thing that determines how successful a large project will be is the degree of accountability and ownership of the department commissioning the project and their degree of influence over IT.
Posted by: Dipper | May 17, 2017 at 07:54 AM