
June 26, 2019


Dave Timoney

The first reason you offer is power. You could have stopped there. The other three are essentially exceptions and marginal situations that wouldn't necessarily impede the rollout of AI (most business decisions aren't based on innovation, nor do they require a compelling story).

A hunch, or breaking the rules, might possibly produce stellar results, but most businesses will be aiming simply to be better than average - to avoid the drop rather than win the league, so to speak. That means bureaucratic risk-averseness (that broken rule might kill the business) may actually encourage the use of AI.

As regards the "fallible entertainers" point, that may well be true for politicians and commentators, but it is less likely to be the case for CEOs, as Woodford shows, and it largely applies only at the very top of the professional tree. Tory MPs who witnessed Boris Johnson's shortcomings as Foreign Secretary at first hand aren't being entirely irrational in thinking he should be PM. They believe the job demands personality and will, rather than mastery of a brief.

NB: The c-suite role that is most suitable to a constrained optimisation approach, and in which hunches and breaking the rules are most frowned upon, is the CFO. This is quite a simple "algorithm", which can be implemented via a spreadsheet (1980s vintage) rather than a GAN, so it ought to be more vulnerable to AI than other roles, yet there has been no meaningful attempt to automate it.
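To make that concrete, here is a toy sketch of the CFO's "algorithm" as a constrained optimisation - allocating a budget across divisions under a spend cap, exactly the sort of thing a 1980s spreadsheet can express. The division names, return rates, and caps are all invented for illustration.

```python
# Hypothetical sketch: the "CFO algorithm" as constrained optimisation.
# All figures are invented. With one budget constraint and per-division
# caps, funding divisions greedily by return per pound is optimal.
def allocate(budget, divisions):
    """Fund divisions in descending order of expected return per pound,
    up to each division's cap, until the budget runs out."""
    allocation = {}
    remaining = budget
    for name, rate, cap in sorted(divisions, key=lambda d: -d[1]):
        spend = min(cap, remaining)
        allocation[name] = spend
        remaining -= spend
    return allocation

divisions = [
    ("retail",    0.08, 60),  # (name, expected return per pound, spend cap)
    ("wholesale", 0.05, 60),
    ("digital",   0.12, 60),
]
plan = allocate(100, divisions)
print(plan)  # {'digital': 60, 'retail': 40, 'wholesale': 0}
```

No hunches, no broken rules - just a deterministic ranking, which is the point.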


Whereas I'm in Charlie Stross's camp. He argues that "slow AI" has been running corporations for centuries now; we just haven't necessarily noticed, because what has changed is the speed. Banks, for example, have used pretty much the same criteria for making loans ever since they started - it's just that today the calculation can be made in nanoseconds instead of taking days and reams of paper.
In other words, we were all actually replaced long ago, we just haven't noticed yet.

(This is very different to the "small business" model where priorities are not remotely the same and decisions about assets may actually take into account the idea that e.g. people are not the same as widgets... And you can sometimes even spot when there is a tipping point and the small business becomes a corporate slow AI.)


The point I would make here is that "AI", as the phrase is currently used, means deep-learning neural networks, and what these give you is either the detection of correlations between arbitrary input data and a user-defined target data set, or else the detection of autocorrelations (clusters) within an arbitrary data set.
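To make the distinction concrete, here is a toy sketch of both modes - scoring input features by correlation with a target, and crudely clustering an unlabelled data set. All data and names are invented.

```python
# Toy sketch of the two modes described above. All data is invented.
from statistics import mean

def pearson(xs, ys):
    """Correlation between a feature and a user-defined target (mode 1)."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

features = {
    "ad_spend":  [1, 2, 3, 4, 5],
    "headcount": [5, 3, 4, 2, 1],
}
target = [2, 4, 5, 8, 10]
scores = {name: pearson(xs, target) for name, xs in features.items()}

def two_means(xs, iters=10):
    """Crude 1-D two-cluster detection within an unlabelled set (mode 2)."""
    a, b = min(xs), max(xs)  # initial centroids
    for _ in range(iters):
        left = [x for x in xs if abs(x - a) <= abs(x - b)]
        right = [x for x in xs if abs(x - a) > abs(x - b)]
        a, b = mean(left), mean(right)
    return sorted(left), sorted(right)

print(scores)                                        # ad_spend correlates, headcount anti-correlates
print(two_means([1.0, 1.2, 0.9, 9.8, 10.1, 10.3]))   # two clear clusters
```

Real systems do this over millions of dimensions with learned features, but the shape of the output is the same: correlations and clusters, nothing more.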

Applied to strategy, this will tend to advise you to double down on whatever you're doing that is successful and cut whatever is not. In financial terms, this is a momentum-investing strategy - buy things because they are going up. Notoriously, this works very well until it doesn't, and then you lose your shirt (i.e. it has Taleb fragility).
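A toy sketch of that momentum rule, with invented prices chosen to show the failure mode - steady gains while the trend holds, then one crash that more than wipes them out:

```python
# Toy momentum rule: hold the asset whenever last period's return was
# positive. Prices are invented to illustrate the fragility.
def momentum_pnl(prices):
    pnl = 0.0
    for i in range(1, len(prices) - 1):
        if prices[i] > prices[i - 1]:       # "it's going up" -> hold next period
            pnl += prices[i + 1] - prices[i]
    return pnl

uptrend = [100, 102, 104, 106, 108, 110]
bubble  = [100, 102, 104, 106, 108, 60]     # same run-up, then the bust
print(momentum_pnl(uptrend))  # 8.0  - steady profits while the trend lasts
print(momentum_pnl(bubble))   # -42.0 - one crash erases them all
```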

In an industrial example, an ML model advising the management of Motorola in the 2000s would have strongly recommended producing lots and lots of RAZR V3 phones with more marketing options and signing up more contract manufacturers, and cutting its investment in R&D. The signal "more RAZRs" was very strongly correlated with "more sales". This is precisely what Moto actually did right up until it failed horribly, went bankrupt, and ended up being sold piecemeal to its competitors.

Another really important issue here is that one example of a non-obvious, strong, but spurious and harmful correlation is the one between "being one of the assets involved in a bubble" and "absolute return". An ML model doing asset allocation in 2006 would very likely have identified that the best CDOs were ones based on mortgages from Florida, Ohio, Las Vegas, and Ireland, with the results we all know.


Psychologist Robin Hogarth has done some interesting work on decision-making as inference in different learning environments. In 'kind' environments, previous experience is a good guide to the future - including rule-based games. In 'wild' environments, there are often mismatches between what experience would suggest and reality - including complex, contested systems.

Machines really struggle in wild learning environments, but so do humans. However, decisions have to be made and politics/power is about who gets to make them.

Perhaps the difference between machines and humans is the ability to capture the private benefits of being a decision maker, most notably through culture, "leadership", "narratives" etc.


As someone who has spent years working in Business Intelligence, automating the reporting and analysis activities that support decision-making (sometimes making roles redundant in the process), it has always struck me as ironic that the interchangeable, rotating executives I've worked for never see the benefit in automating their own repetitive, inefficient and highly predictable activities, even though they have a voracious appetite for doing it everywhere else in the organisation. 'Managerialist' approaches should be ideal for automation, because they are based on generic methods, not tacit understanding.


There's a lot of handwavy assertion here.

"Algorithms are a set of rules and entrepreneurs succeed by breaking rules." That's really, really vague.

-- Not all rules can be profitably broken. No entrepreneur as far as I know has got rich by trying to break Ohm's law.
-- Not all entrepreneurs "break rules". Business school favourite Julian Richer - what rules did he break? What tremendous, unthinkable innovation did he produce? He runs a lot of shops selling stereos. He runs them _really well_, he is a good manager and a good leader. But he hasn't had some dazzling counterintuitive innovation; he got a shop and filled it full of stereos, which he sold to people for money. Then he got another shop.

Your assertion that AI music somehow lacks that ineffable soul-to-soul connection that we get from great music doesn't stand up to scrutiny either. I guarantee you that you will be unable to tell the difference, even with your soul on full alert, between music produced by JS Bach, the greatest composer who ever lived, and music produced by an AI emulating JS Bach.

This kind of thing
"An ML model doing asset allocation in 2006 would very likely have identified that the best CDOs were ones based on mortgages from Florida, Ohio, Las Vegas, and Ireland, with the results we all know"

is a bit dubious too. Loan-level analysis of these MBSs showed very clearly that they were not a good investment - it's just that almost no one bothered to do it. This kind of grinding tedious work is exactly what a software system would be good at...
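For illustration, here is a toy version of such a loan-level screen. The field names and thresholds are invented, but this is exactly the sort of grinding, tedious rule-checking that software handles easily and that almost nobody bothered to run in 2006.

```python
# Hypothetical loan-level screen. Fields and thresholds are invented.
def screen(loans, max_ltv=0.9, max_dti=0.45, require_docs=True):
    """Flag loans whose underwriting looks unsound: high loan-to-value,
    high debt-to-income, or missing income documentation."""
    flagged = []
    for loan in loans:
        if (loan["ltv"] > max_ltv
                or loan["dti"] > max_dti
                or (require_docs and not loan["full_docs"])):
            flagged.append(loan["id"])
    return flagged

pool = [
    {"id": "A1", "ltv": 0.80, "dti": 0.30, "full_docs": True},
    {"id": "A2", "ltv": 1.00, "dti": 0.55, "full_docs": False},  # classic 2006 vintage
    {"id": "A3", "ltv": 0.95, "dti": 0.40, "full_docs": True},
]
print(screen(pool))  # ['A2', 'A3']
```

Running this over a few million loans is trivial for a machine and soul-destroying for an analyst, which rather supports the point.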


Non-Ohmic regimes of conductivity are a live area of solid-state research, which does make that particular choice of example a bit unfortunate.
