Could AI replace managers and even politicians? In their now-famous paper (pdf) describing how half of American jobs could be replaced by computers, Carl Frey and Michael Osborne say no: they estimate that chief executives, line managers and HR managers are among the 10% of occupations least likely to be computerized.
From the perspective of neoclassical economics, this is weird. That perspective treats the job of bosses as maximizing profits, given a production function and the prices of inputs. This is a constrained optimization problem, which a computer can easily solve.
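To see how easily, here is a minimal sketch of that textbook problem – maximizing profit under an assumed Cobb-Douglas technology, with made-up prices and parameters rather than anything a real firm faces:

    from scipy.optimize import minimize

    # Illustrative assumptions: output price, cost of capital, wage,
    # and a Cobb-Douglas technology with decreasing returns to scale.
    p, r, w = 10.0, 4.0, 6.0
    A, alpha, beta = 1.0, 0.3, 0.6

    def negative_profit(x):
        K, L = x
        output = A * K ** alpha * L ** beta
        return -(p * output - r * K - w * L)  # minimize the negative

    result = minimize(negative_profit, x0=[1.0, 1.0],
                      bounds=[(1e-6, None), (1e-6, None)])
    K_star, L_star = result.x
    print(f"Capital {K_star:.2f}, labour {L_star:.2f}, "
          f"profit {-result.fun:.2f}")

A generic off-the-shelf optimizer finds the answer in milliseconds; on the neoclassical view, that is the whole job.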
Similarly, if you believe, Sunstein-stylee, that policy-making is a technocratic function of choosing optimal fiscal or monetary policy or the right choice architecture, the job can be delegated to computers, which have the benefit of being immune – in principle – to cognitive biases.
Which poses the question: what is it about bosses and politicians that makes their jobs so unlikely to be replaced by tech? There are, I suspect, four things.
One is that the choice of what technology to adopt is not merely a matter of objective efficiency. It is also about power. In his excellent new book The Technology Trap, Frey describes how pre-industrial governments often forbade the use of labour-saving techniques, fearing they would cause a backlash and disrupt established hierarchies. And Joel Mokyr has written:
Resistance to technological change is not limited to labour unions and Olsonian lobbies defending their turf and skills against the inexorable obsolescence that new techniques will bring about. In a centralized bureaucracy there is a built-in tendency for conservatism. Sometimes the motives of technophobes are purely conservative in the standard sense of the word. This is equally true for corporate and government bureaucracies, and cases in which corporations, presumably trying to maximize profits, resisted innovations are legend. (The Gifts of Athena, p238)
A second problem is that knowledge is not necessarily codifiable. AI works where all possible options can in principle be listed. AlphaZero, for example, learned chess and Go by being programmed with the laws of the games and then playing millions of games against itself. In other contexts, such an approach doesn’t apply, and not just because what we are trying to achieve is sometimes not as simple or articulable as winning a game.
In many contexts there are unknown knowns (the category Donald Rumsfeld’s famous taxonomy omitted) – things we don’t know that we know. Tacit knowledge – hunches, gut feelings, things we have forgotten that we knew – matters. One reason why management is not merely a constrained optimization task is that even with given technology, good managers can tweak the production possibility curve outwards (pdf) by incremental improvements based upon hunches. Maybe brute-force algorithms can replicate all these possible hunches and find the optimal one. But this is most easily done where the environment can be completely described – which is true of chess but not the real world.
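The scope of that caveat is easy to see in miniature. Here is a sketch – using tic-tac-toe rather than chess, so the search is tractable – of how brute force exhausts a completely described environment; nothing like it is available when the state space can’t even be written down:

    from functools import lru_cache

    # Tic-tac-toe is completely described: nine cells, eight winning lines.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        """Outcome under perfect play: +1 X wins, 0 draw, -1 O wins."""
        w = winner(board)
        if w is not None:
            return 1 if w == "X" else -1
        if "." not in board:
            return 0
        nxt = "O" if player == "X" else "X"
        scores = [value(board[:i] + player + board[i + 1:], nxt)
                  for i, cell in enumerate(board) if cell == "."]
        return max(scores) if player == "X" else min(scores)

    print(value("." * 9, "X"))  # 0: every line of play enumerated, a draw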
There’s something else. An algorithm is, by definition, a set of rules. Sometimes, though, we succeed by breaking rules. For Israel Kirzner, this is in fact the very essence of entrepreneurship:
The Schumpeterian entrepreneur does not passively operate in a given world, he creates a world different from the one he finds. He introduces hitherto undreamt of products, he pioneers hitherto unthought of methods of production, he opens up a new market in hitherto undiscovered territory.
Entrepreneurship, he says, “is the process of discovering new knowledge and possibilities that no one has either previously imagined or noticed.”
Herein lies one reason why AI can’t replace politicians. The success of Trump, Farage and Johnson has come from breaking conventions, be it the idea that leaders should conform to a particular ideal of good character, or that policies be evidence-based and compatible with the interests of business and the median voter. As Will Davies says:
Come November this year, Farage, Johnson and their allies may well have achieved a far greater disruption of the political and economic status quo than Thatcher or Blair ever managed, with a smaller popular mandate and far less effort. They don’t need think tanks, policy breakfasts, the CBI or party discipline. They don’t even need ideas. All they have to do, in pursuit of their goal, is to carry on being themselves.
There’s one final thing. Even if algorithms weren’t racist, they’d be only indifferent managers, for another reason. Sometimes we want the human touch – the arm round the shoulder, the jolly-up, or the understanding that we’re having an off day.
It’s for a similar reason that AI will, I suspect, never write great music. Yes, it can produce passable if derivative tunes. But it has not yet given us truly original songs or insightful lyrics, nor what great music gives us, the sense of one soul speaking to another.
The same thing applies in journalism. AI can, after a fashion, write news stories: these follow articulable rules. But it cannot (yet) produce marketable opinion columns: readers want the prejudices, biases and errors that only humans can provide.
What they also want are stories, even – or especially – if they are nonsense. We’ve good evidence – corroborated by the fact that so many advisers recommended Woodford’s funds – that financial advice is poor. And yet robo-advisors have not expanded as much as they should have. A big reason for this is that people don’t want to believe that simple rules (such as buying cheap tracker funds) work. They want narratives and great investors they can trust. Only humans offer these.
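The simple rule really is simple. A back-of-the-envelope sketch of the fee arithmetic, with illustrative numbers rather than anyone’s actual returns:

    # Illustrative assumptions: 6% gross market return, 0.1% tracker fee,
    # 1.5% active-management fee, 30-year horizon.
    gross, tracker_fee, active_fee, years = 0.06, 0.001, 0.015, 30

    tracker = (1 + gross - tracker_fee) ** years
    active = (1 + gross - active_fee) ** years
    print(f"£1 in the tracker after {years} years: £{tracker:.2f}")
    print(f"£1 in the active fund:                £{active:.2f}")
    # The active manager must beat the market every year just to draw
    # level - before any Woodford-style blow-up.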
And this is why AI cannot replace politicians. We prefer bad decisions taken by humans to good ones taken by machines. As Ben Jackson says:
Much of our politics today revolves around the perception that decisions are being taken elsewhere, whether in Westminster, Brussels or Washington. Passing off the work of decision-making to the ultimate aloof elite, a computer, is not a serious way of confronting this issue. Sometimes it’s important to decide things for ourselves, and to feel like we’re deciding, even if we often go astray.
What I’m driving at here is not merely a version of Moravec’s paradox. Nor is it a story about computers, especially as we might be on the verge of an era of super-powerful ones. Instead, my point is about the nature of power and decision-making. These are not merely technocratic algorithmic processes but are instead essentially and inherently human. Which is what conventional neoclassical economics has traditionally under-estimated.
The first reason you offer is power. You could have stopped there. The other three are essentially exceptions and marginal situations that wouldn't necessarily impede the rollout of AI (most business decisions aren't based on innovation, nor do they require a compelling story).
A hunch, or breaking the rules, might possibly produce stellar results, but most businesses will be aiming simply to be better than average - to avoid the drop rather than win the league, so to speak. That means bureaucratic risk-averseness (that broken rule might kill the business) may actually encourage the use of AI.
As regards the "fallible entertainers" point, that may well be true for politicians and commentators, but it is less likely to be the case for CEOs, as Woodford shows, and it largely applies only at the very top of the professional tree. Tory MPs who witnessed Boris Johnson's shortcomings as Foreign Secretary at first hand aren't being entirely irrational in thinking he should be PM. They believe the job demands personality and will, rather than mastery of a brief.
NB: The c-suite role that is most suited to a constrained optimisation approach, and in which hunches and breaking the rules are most frowned upon, is the CFO. This is quite a simple "algorithm", which can be implemented via a spreadsheet (1980s vintage) rather than a GAN, so it ought to be more vulnerable to AI than other roles, yet there has been no meaningful attempt to automate it.
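To illustrate - with invented figures and an arbitrary 5% threshold - the sort of rule that spreadsheet-vintage "algorithm" runs on:

    # Invented budget and actuals; flag any line more than 5% off plan.
    budget = {"sales": 1000, "cogs": 600, "overheads": 250}
    actual = {"sales": 940, "cogs": 610, "overheads": 245}

    for line, planned in budget.items():
        variance = (actual[line] - planned) / planned
        flag = "INVESTIGATE" if abs(variance) > 0.05 else "ok"
        print(f"{line:10s} {variance:+.1%}  {flag}")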
Posted by: Dave Timoney | June 26, 2019 at 02:06 PM
Whereas I'm in Charlie Stross' camp. He argues that "slow AI" has been running corporations for probably centuries now; it's just that we haven't necessarily noticed, because of the speed. Banks, for example, have used pretty much the same criteria for making loans ever since they started - it's just that today the calculation can be made in nanoseconds instead of taking days and reams of paper.
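To make the point concrete, a sketch of the sort of "slow AI" a loan officer has always executed (the thresholds are invented, not any real bank's criteria):

    # The rule is the same whether a clerk works through it in days
    # or a computer evaluates it in nanoseconds.
    def approve_loan(income, debt, loan, collateral):
        debt_service_ratio = (debt + 0.1 * loan) / income
        loan_to_value = loan / collateral
        return debt_service_ratio < 0.4 and loan_to_value < 0.8

    print(approve_loan(income=80_000, debt=5_000,
                       loan=200_000, collateral=300_000))  # True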
In other words, we were all actually replaced long ago, we just haven't noticed yet.
(This is very different to the "small business" model where priorities are not remotely the same and decisions about assets may actually take into account the idea that e.g. people are not the same as widgets... And you can sometimes even spot when there is a tipping point and the small business becomes a corporate slow AI.)
Posted by: Scurra | June 27, 2019 at 02:52 PM
The point I would make here is that what "AI" - as the phrase is currently used, meaning deep-learning neural networks - gives you is either the detection of correlations between arbitrary input data and a user-defined target data set, or else the detection of autocorrelations (clusters) within an arbitrary data set.
Applying this to strategy, this will tend to advise you to double down on whatever you're doing that is successful and cut whatever is not successful. In financial terms, this would be a momentum-investing strategy - buy things because they are going up. Notoriously, this works very well until it doesn't and then you lose your shirt (ie it has Taleb fragility).
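A toy backtest shows the shape of the failure. The price series below is synthetic - a steady rise then a crash - and the strategy is the naive momentum rule described above:

    # Synthetic prices: 36 months rising 2%/month, then a sharp crash.
    prices = [100 * 1.02 ** t for t in range(36)]
    prices += [prices[-1] * 0.6 ** t for t in range(1, 7)]

    cash, units, path = 1000.0, 0.0, []
    for prev, price in zip(prices, prices[1:]):
        if price > prev and cash > 0:
            units, cash = cash / price, 0.0   # buy because it went up
        elif price < prev and units > 0:
            cash, units = units * price, 0.0  # dump on the first down move
        path.append(cash + units * price)

    print(f"Peak wealth {max(path):.0f}, final wealth {path[-1]:.0f}")
    # It compounds nicely for three years, then gives much of it back
    # in the one month before the signal can react.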
In an industrial example, an ML model advising the management of Motorola in the 2000s would have strongly recommended producing lots and lots of RAZR V3 phones with more marketing options and signing up more contract manufacturers, and cutting its investment in R&D. The signal "more RAZRs" was very strongly correlated with "more sales". This is precisely what Moto actually did right up until it failed horribly, went bankrupt, and ended up being sold piecemeal to its competitors.
Another really important issue here is that one example of a non-obvious, strong, but spurious and harmful correlation is the one between "being one of the assets involved in a bubble" and "absolute return". An ML model doing asset allocation in 2006 would very likely have identified that the best CDOs were ones based on mortgages from Florida, Ohio, Las Vegas, and Ireland, with the results we all know.
Posted by: Alex | June 28, 2019 at 08:28 AM
Psychologist Robin Hogarth has done some interesting work on decision making as inference in different learning environments. In 'kind' environments, previous experience is a good guide to the future - including rules-based games. In 'wicked' environments, there are often mismatches between what experience would suggest and reality - including complex, contested systems.
Machines really struggle in wicked learning environments, but so do humans. However, decisions have to be made, and politics/power is about who gets to make them.
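A toy version of the distinction (all numbers made up): a rule learned from past experience predicts well while the regime holds, and badly once it shifts:

    import random
    random.seed(0)

    def experience(mu, n=500):
        return [random.gauss(mu, 1.0) for _ in range(n)]

    train = experience(mu=2.0)              # what the past taught us
    estimate = sum(train) / len(train)      # the learned rule

    def mse(data, guess):
        return sum((x - guess) ** 2 for x in data) / len(data)

    print(f"Kind (same regime):      {mse(experience(mu=2.0), estimate):.2f}")
    print(f"Wicked (regime shifted): {mse(experience(mu=-1.0), estimate):.2f}")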
Perhaps the difference between machines and humans is the ability to capture the private benefits of being a decision maker, most notably through culture, "leadership", "narratives" etc.
Posted by: Ephron | June 28, 2019 at 09:18 AM
As someone who has spent years working in Business Intelligence, automating the reporting and analysis that support decision making - sometimes making roles redundant in the process - it's always struck me as ironic that the interchangeable, rotating executives I've worked for never see the benefit in automating their own repetitive, inefficient and highly predictable activities, even when they have a voracious appetite for doing it everywhere else in the organisation. 'Managerialist' approaches should be ideal for automation, because they are based on generic approaches, not tacit understanding.
Posted by: MJW | June 28, 2019 at 09:28 AM
There's a lot of handwavy assertion here.
"Algorithms are a set of rules and entrepreneurs succeed by breaking rules". That's really, really vague.
-- Not all rules can be profitably broken. No entrepreneur as far as I know has got rich by trying to break Ohm's law.
-- Not all entrepreneurs "break rules". Business school favourite Julian Richer - what rules did he break? What tremendous, unthinkable innovation did he produce? He runs a lot of shops selling stereos. He runs them _really well_, he is a good manager and a good leader. But he hasn't had some dazzling counterintuitive innovation; he got a shop and filled it full of stereos, which he sold to people for money. Then he got another shop.
Your assertion that AI music somehow lacks that ineffable soul-to-soul connection that we get from great music doesn't stand up to scrutiny either. I guarantee you that you will be unable to tell the difference, even with your soul on full alert, between music produced by JS Bach, the greatest composer who ever lived, and music produced by an AI emulating JS Bach.
This kind of thing
"An ML model doing asset allocation in 2006 would very likely have identified that the best CDOs were ones based on mortgages from Florida, Ohio, Las Vegas, and Ireland, with the results we all know"
is a bit dubious too. Loan-level analysis of these MBSs showed very clearly that they were not a good investment - it's just that almost no one bothered to do it. This kind of grinding tedious work is exactly what a software system would be good at...
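Something like this, loan by loan, across a whole pool (records and thresholds invented for illustration):

    # Invented loan records; flag the classic red signals.
    pool = [
        {"ltv": 0.95, "doc": "none", "state": "FL", "teaser": True},
        {"ltv": 0.70, "doc": "full", "state": "TX", "teaser": False},
        {"ltv": 1.00, "doc": "none", "state": "NV", "teaser": True},
    ]

    def risky(loan):
        return loan["ltv"] > 0.9 or (loan["doc"] == "none" and loan["teaser"])

    share = sum(risky(l) for l in pool) / len(pool)
    print(f"{share:.0%} of the pool is high-risk")  # the number no one computed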
Posted by: ajay | June 28, 2019 at 09:52 AM
Non-Ohmic regimes of conductivity are a live area of solid-state research, which does make that particular choice of example a bit unfortunate.
Posted by: DDOwen | June 30, 2019 at 10:43 AM