… which way I ought to go from here?

Dave Snowden – Cognitive Edge

This is close to the beginning of what is billed as a series of indefinite length on agility and Agility, which we are promised will at times be polemical and curmudgeonly, and which is tangentially illustrated with references to Alice (the one in Wonderland, not the cryptographic exemplar). The first post in the series set some context; this second one focuses on the question of whether short-cycle software production techniques translate to business strategy. In particular, the argument is that scrum-based approaches to agile work best when the problem space is reasonably well understood, and that this will be true to different extents at different stages of an overall development cycle.

Dave Snowden is best known as the originator of the Cynefin framework, which is probably enough to guarantee that this series will be thought-provoking. He positions scrum within the Cynefin complex domain as a powerful approach – but not the only or uniquely appropriate one. It will be well worth watching his arguments develop.

Eight things I’ve learnt from designing services for colleagues

Steve Borthwick – DWP Digital

Civil servants are users too. Indeed, as Steph Gray more radically claims, civil servants are people too. And as users, and even more so as people, they have needs. Some of those needs are for purely internal systems and processes, others are as users of systems supporting customer services.

In the second category, the needs of the internal user are superficially similar to the needs of the external user – to gather and record the information necessary to move the service forward. That for a time led to a school of thought that the service for internal and external users should be identical, to the greatest possible extent. But as this post recognises, there is a critical difference between somebody who completes a transaction once a year or once in a blue moon and somebody who completes that same transaction many times a day.

That shouldn’t be an excuse for complexity and confusion: just because people on the payroll can learn to fight their way through doesn’t mean it’s a good idea to make them. But it is one good reason for thinking about internal user needs in their own right – and this excellent post provides seven more reasons why that’s a good thing to do.

Meanwhile, the cartoon here remains timeless – it prompted a blog post almost exactly ten years ago arguing that there is a vital difference between supporting expert users (good) and requiring users to be expert (bad). We need to keep that difference clearly in sight.

YouTube, the Great Radicalizer

Zeynep Tufekci – New York Times

This article has been getting extensive and well-deserved coverage over the last few days. Essentially, it demonstrates that the YouTube recommendation engine tends to lead to more extreme material, more or less whatever your starting point. In short, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

The reason for including it here is not the specific algorithm or the specific behaviour it generates. It is that this is a very clear example of a wider phenomenon. It’s a pretty safe assumption that the observed behaviour is not the result of a cabal of fringe conspirators deep in the secret basements of Google setting out a trail to recruit people into extremist groups or attitudes. The much more obvious motivation is that Google is trying to tempt people into spending as long as possible watching YouTube videos, because that is how it can put the most advertising in front of the most eyeballs.
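To see how that mechanism might work, here is a minimal toy simulation – my own sketch, not a description of YouTube’s actual system, whose details are not public, and with an invented assumption that slightly more extreme material holds attention slightly longer. A recommender that greedily maximises predicted watch time then drifts towards the extremes without anyone having designed it to do so.

```python
# Toy sketch of an engagement-maximising recommender (illustrative only;
# the watch-time model and "extremity" scale are invented assumptions).
import random

def expected_watch_time(extremity: float) -> float:
    # Assumption: more extreme content holds attention a little longer,
    # plus some noise.
    return 1.0 + 0.5 * extremity + random.gauss(0, 0.05)

def recommend_next(current: float) -> float:
    # Candidate items cluster around what the viewer just watched.
    candidates = [min(1.0, max(0.0, current + random.uniform(-0.1, 0.2)))
                  for _ in range(20)]
    # Greedy choice: pick whichever candidate maximises predicted watch time.
    return max(candidates, key=expected_watch_time)

extremity = 0.1  # a fairly mainstream starting point
for _ in range(15):
    extremity = recommend_next(extremity)
print(f"extremity after 15 recommendations: {extremity:.2f}")  # drifts towards 1.0
```

Nothing in the sketch optimises for extremity; it emerges entirely as a side effect of optimising for watch time – which is precisely the point of the article.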

In other words, algorithmic tools can have radically unintended consequences. That is made worse in this case because the unintended consequences are not a sign of the intended goal not being achieved; on the contrary, they are the very means by which that goal is being achieved. So it is not just that YouTube has strong incentives not to fix the problem; the problem may not even be obvious to them in the first place.

This is a clear example. But we need to keep asking the same questions about other systems: what are the second order effects, will we recognise them when we see them, and will we be ready to – and able to – address them?

Strategic thinking with blog posts and stickers

Giles Turnbull

Strategic thinking is best done by thinking out loud, on your blog, over a long period of time.

As someone clocking in with over a thousand blog posts of various shapes and sizes since 2005, that feels like a box well and truly ticked. Whether all of that adds up to something which might be called strategic thinking is a rather different question – but that may be because all those blog posts have not yet generated a single sticker.

There’s an important point being made here. Even in a more traditional approach to strategy development, the final document is never the thing which carries the real value: it is the process of development, and the engagement and debate that it entails, which makes the difference. The test of a good strategy is that it helps solve problems, so as the problems change, so should the strategy. Whether that makes blog posts and stickers a sufficient approach to strategy development is a slightly different question. There might be a blog post in that.

Evidence-based policymaking: is there room for science in politics?

Jennifer Guay – Apolitical

To describe something as ‘policy-based evidence making’ is to be deliberately rude at two levels: first because it implies the use of evidence to conceal rather than to illuminate, and secondly because it implies a failure to recognise that evidence should drive policy (and thus, though often less explicitly, politics).

Evidence-based policy, on the other hand, is a thing of virtue for which we should all be striving. That much is obvious to right-thinking people. In recent times, the generality of that thought has been reinforced by very specific approaches: if you haven’t tested your approach through randomised controlled trials, how can you know that your policy making has reached the necessary level of objective rigour?

This post is a thoughtful critique of that position. At one level, the argument is that RCTs tell you less than they might at first appear to. At another level, that fact is a symptom of a wider problem: human life is messy and multivariate, and optimising a current approach may at best get you to a local maximum. That is of course why the social sciences are so much harder than the so-called hard sciences, but that is probably a battle for another day.
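The local-maximum point can be made concrete with a toy illustration – mine, not the article’s, and the landscape and numbers are invented. An optimiser that only ever accepts small incremental improvements to the current approach settles on the nearest peak, however modest, and never discovers the much better hill next door.

```python
# Toy illustration of incremental optimisation stuck at a local maximum
# (my example, not the article's; the landscape is invented).

def outcome(x: float) -> float:
    # A landscape with a modest peak near x=1 and a much higher one near x=4.
    return -(x - 1) ** 2 + 1 if x < 2.5 else -(x - 4) ** 2 + 5

def hill_climb(x: float, step: float = 0.1, iterations: int = 100) -> float:
    for _ in range(iterations):
        # Only consider small incremental tweaks to the current approach.
        best = max((x - step, x, x + step), key=outcome)
        if best == x:
            break  # no neighbouring tweak helps: stuck on the nearest peak
        x = best
    return x

print(hill_climb(0.0))  # settles near x=1 (outcome 1), never finds x=4 (outcome 5)
```

Escaping the modest peak requires a deliberate move to somewhere worse before things get better – which is exactly what incremental, evaluation-driven refinement is designed not to do.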

Digital government: reasons to be cheerful

Janet Hughes

This is an energetic and challenging presentation on the state of digital government – or rather, of digital government in the UK. It’s available in various formats; the critical thing is to make sure you read the notes as well as look at the slides.

The first part of the argument is that digital government has got to a critical mass of inexorability. That doesn’t mean that progress hasn’t sometimes been slow and painful, it doesn’t mean that individual programmes or even organisations will survive, and it doesn’t mean that today’s forecasts about the future of government will be any more accurate in their detail than those of twenty years ago. It does, though, mean that the questions then and now were basically the right ones, even if it has been – and is – a struggle to work towards good answers.

The second part of the argument introduces a neat taxonomy of the stages of maturity of digital government, with the argument that the UK is now somewhere between the integrate and reboot phases. That’s clearly the direction of travel, but it’s perhaps more debatable how much of government even now is at that point of inflexion. The present, like the future, remains unevenly distributed.

Forget policy — start with people

Beatrice Karol Burks – Designing Good Things

This is a short polemic against the idea of policy, and by extension against the (self) importance of those who make it. It clearly and strongly makes an important point – but in doing so misses something important about policy and politics.

It is certainly true that starting with people and their needs is a good way of approaching problems. But it doesn’t follow that anything called policy is necessarily vacuous or redundant. Policy making, and indeed politics, is all about making choices, and those choices would still be there even if the options to be considered were better grounded.

None of that makes the practical suggestions in this post wrong. But if we forget policy, we forget something important.

A roadmap for AI: 10 ways governments will change (and what they risk getting wrong)

Geoff Mulgan – NESTA

This is a great summary of where AI stands in the hype cycle. Its focus is the application to government, but most of it is more generally relevant. It’s really helpful in drawing out what ought to be the obvious point that AI is not one thing and that it therefore doesn’t have a single state of development maturity.

The last of the list of ten is perhaps the most interesting. Using AI to apply more or less current rules in more or less current contexts and systems is one thing (and is a powerful driver of change in its own right). But the longer-term opportunity is to change the nature of the game. That could be a black-box dystopia, but it could instead be a chance to break away from incremental change and find more radical opportunities to change the system. That depends, as this post rightly concludes, on not getting distracted by the technology as a goal in its own right, but focusing instead on what better government might look like.

Management vs managerialism

Chris Dillow – Stumbling and Mumbling

And along comes another one, on similar lines to the previous post on strategies, this time decrying managerialism. Management is good; managerialism tends towards an unjustified and unbounded faith in management as a generic skill, towards imposing direction and targets from above – and towards abstract concepts of strategy and vision. As ever, Chris Dillow hits his targets with gusto.

Another way of putting that is that there is good management and bad management, and that there is not enough of the former and too much of the latter. That sounds trivial, but it’s actually rather important: is there a Gresham’s law of management where bad displaces good, and if there is, what would it take to break it?

Why strategy directors shouldn’t write strategies

Simon Parker – Medium

This post is fighting talk to a blog with the title and background of this one. Having a strategy – or at least having a document called a strategy – is an indication of institutional failure: once you get to the stage of having to pay people to describe the organisation to itself and to work out how the pieces fit together, something is already going badly wrong.

At its worst, strategy becomes about attempts to engineer reality to fit a top down narrative through the medium of graphs. … So don’t write strategies. At best they give institutions the time they need to mobilise against the change you want to create

Instead, strategists should go and do something more useful, more concrete, with a much better chance of making real improvements happen.

And yet. The answer to the co-ordination problem can’t in the short term (and the short term is likely to be pretty long) be to fragment organisations to the point where co-ordination is not needed. Even if that were practically and politically feasible, it might just redraw the boundaries of Coasian space, leaving the underlying co-ordination problem unchanged, at the cost of sustained distraction from the real purpose. It’s not obvious how small an organisation has to be (or even whether smallness is the key factor) to avoid needing something you might want to call a strategy.

So perhaps the distinction is not that organisations shouldn’t need a strategy, but that the need shouldn’t degenerate into the endless production of strategies as a self-perpetuating industry. That takes me back to Sophie Dennis’s approach, and in particular to her definition of strategy:

Strategy is a coherent plan to achieve a goal that will lead to significant positive change

That’s something which should have real value – without there needing to be a graph in sight. I’d be pretty confident that Simon has got one of those.

UK police are using AI to make custodial decisions – but it could be discriminating against the poor

Matt Burgess – Wired

In abstract, AI is a transformational technology. It may bring perfect and rigorous decision analysis, sweeping away human foibles. Or it may displace human sensitivity and judgement – and indeed the humans themselves – and usher in an era of opaque and arbitrary decision making.

This article, which focuses on the introduction of AI at Durham Constabulary, is a good antidote to those caricature extremes. Reality is, as ever, messier than that. Predictability and accountability are not straightforward. Humans tend to revert, perhaps unwisely, to confidence in their own judgements. It is not clear that some kinds of data are appropriately used in prediction models at all (though the black boxes of human brains are equally problematic). In short, the application of AI to policing decisions isn’t simple and clear cut; it is instead a confused and uncertain set of policy problems. That shouldn’t be surprising.

Pivoting ‘the book’ from individuals to systems

Pia Waugh – Pipka

It’s a sound generalisation that people do the best they can within the limits of the systems they find themselves in. That best may include pushing at those limits, but even if it does, that doesn’t make them any less real. Two things follow from that. The first is that it is pointless blaming individuals for operating within the constraints of the system. The second is that if you want to change the system, you have to change the system.

That’s not to say that people are powerless or that we can all resign personal and moral accountability. On the contrary, the systems are themselves human constructs and can only be challenged and changed by the humans who are actors within them. That’s where this post comes in: it is in effect a prospectus for a not yet written book. What different systems do changes in social, economic and technological contexts demand, and where are the contradictions which need to be resolved? The book, when it comes, promises to be fascinating; the post is well worth reading in its own right in the meantime.

Why can’t we make the trains run on time?

Paul Clarke – honestlyreal

On a morning where transport is disrupted across the UK by snow and cold winds, it’s worth returning to this post from a few years ago, which explains why small amounts of snow here are so much more disruptive than the much larger amounts which are easily managed elsewhere. In short, the marginal cost of being ready for severe weather, when there isn’t very much of it, isn’t justified by the benefits of another day or two a year of smooth operations. That is a very sensible trade-off – the existence of which is immediately forgotten when the bad weather arrives.
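The underlying arithmetic is easy to sketch. The figures below are invented purely for illustration – the post doesn’t give any – but they show how readiness that costs a fixed amount every year, and pays off only on the rare days the snow actually falls, can fail a simple expected-value test.

```python
# Back-of-the-envelope trade-off for severe-weather readiness.
# All figures are invented for illustration; the post gives no numbers.

readiness_cost_per_year = 50_000_000       # heaters, de-icing kit, standby crews
disruption_cost_per_snow_day = 10_000_000  # cost of a day of cancelled trains
expected_snow_days_per_year = 2            # a typical UK lowland winter

expected_disruption = expected_snow_days_per_year * disruption_cost_per_snow_day
print(f"readiness: £{readiness_cost_per_year:,} "
      f"vs expected disruption: £{expected_disruption:,}")
# With these numbers, paying £50m a year to avoid an expected £20m of
# disruption is a bad deal - until the day the snow actually arrives.
```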

It’s a trade-off with much wider application than snow-covered railway tracks. Once you start looking, it can be seen in almost every area of public policy, culminating in the macro view that everybody (it is asserted) wants both lower taxes and better services. Being more efficient is the way of closing the gap that is simultaneously both clearly the right thing to do and an excellent way of ducking the question; but at best it shifts the parameters without fundamentally changing the nature of the problem. Hypothecation is a related sleight of hand – let’s have more money, but only for virtuous things. In the end, though, public policy is about making choices. And letting the trains freeze up from time to time is a better one than it appears in the moment to the people whose trains have failed to come.

How AI will transform the Digital Workplace (and how it already is)

Sharon O’Dea – Intranetizen

AI is often written about in terms of sweeping changes resulting in the wholesale automation of tasks and jobs. But as this post sets out, there is also a lower-key version, where forms of AI appear as feature enhancements (and thus may not be apparent at all). Perhaps self-generating to-do lists are the real future – though whether that will be experienced as liberation or enslavement is very much a matter of taste. Either way, AI won’t be experienced as robots breaking into the building to take our jobs; instead, tasks will melt away, enhanced in ways which never quite feel revolutionary.

Pointing at the Wrong Villain: Cass Sunstein and Echo Chambers

David Weinberger – Los Angeles Review of Books

At one level, this is an entertainingly polite but damning book review. At another, it is a case study in how profound expertise in one academic domain does not automatically translate into the distillation of wisdom in another. But beyond both of those, the real value of this piece is in drawing out the point that in the realm of ideas, as with so many others, the internet is a place where new things are happening, not just the old things being done a bit better. We need to get better not just at knowing things, but at how to know things. How, in this new world, do we take advantage of its strengths to come at knowledge in different ways?

I had got to the end of reading this before noticing that it was by David Weinberger. That would have been endorsement enough – he has been sharing deep insights about how all this works for many years and is always a name to look out for.

Ending The Myth Of Collaboration

Paul Taylor

Another good provocation from Paul Taylor, arguing this time that solitary thinking is a better source of creative breakthroughs than collaborative activities. It’s not all or nothing – there is a useful distinction drawn between problems where collaboration is valuable (complex, strategic, needing engagement) and those where it isn’t (deep, radical, disruptive, urgent).

But almost more important than that is the observation that few organisations actually value purposeful thinking in the first place – or at least, they don’t create the conditions in which such thinking can readily take place.

10 Principles for Public Sector use of Algorithmic Decision Making

Eddie Copeland – NESTA

This is a really interesting attempt to set out a set of regulatory principles for the use of algorithms in the public sector. It brings what can easily be quite an abstract debate down to earth: we can talk about open data and open decision making, but what actually needs to be open (and to whom) to make that real?

The suggested principles mostly look like a sensible starting point for debate. Two of them though seem a little problematic, one trivially, the other much more significantly. The trivial one is principle 9, that public sector organisations should insure against errors, which isn’t really a principle at all, though the provision of compensation might be. The important one is principle 5, “Citizens must be informed when their treatment has been informed wholly or in part by an algorithm”. On the face of it, that’s innocuous and reasonable. Arguably though, it’s the equivalent of having a man with a red flag walking in front of a car. Government decisions are already either based on algorithms (often called “laws” or “regulations”) or they are based on human judgements, likely to be more opaque than any computer algorithm. Citizens should absolutely be entitled to an explanation and a justification for any decision affecting them – but the means by which the decision at issue was made should have no bearing on that right.

The Parts of Customer Service That Should Never Be Automated

Ryan Buell – Harvard Business Review

This is a useful summary of the limitations of automation in service design. Only humans can be genuinely emotional, humans are still preferred for resolving problems, and automation doesn’t always remove human work; it can just shift it from provider to customer. So far, so good. But this has the feel of an article which could have been written at almost any time in the last decade or more, and it does not touch at all on whether these attributes are absolute, situational or (for example) generational. People who design services are always at risk of over-representing their personal preferences, which are often to automate and streamline. Conversely though, there is no doubt that what is widely seen as normal changes over time, and there is no very obvious reason to think that the balance of preferences has become more stable than it was in the past.