Beyond even the bonus points for talking about laws being ‘intertwingled’, this is an important and interesting post at the intersection of law, policy and automation. It neatly illustrates why the goal of machine-interpretable legislation, such as the recent work by the New Zealand government, is a much harder challenge than it first appears – law can have tacit external interpretation rules, which means that the highly structured interpretation which is normal, and indeed necessary, for software just doesn’t work. Which is why legal systems have judges and programming languages generally don’t – and why the New Zealand project is so interesting.
The rather dry title of this post belies the importance and interest of its content. Lots of people have spotted that laws are systems of rules, computer code is systems of rules, and that somehow these two facts should illuminate each other. Quite how that should happen is much less clear. Ideas have ranged from developing systems to turn law into code to adapting software testing tools to check legislative compliance. This post records an experiment with a different approach again, exploring the possibility of creating legislative rules in a way which is designed to make them machine consumable. That’s an approach with some really interesting possibilities, but also some very deep challenges. As John Sheridan has put it, law is deeply intertwingled: the meaning of legislation is only partly conveyed by the words of a specific measure, which means that transcoding the literal letter of the law will never be enough. And beyond that again, the process of delivering and experiencing a service based on a particular set of legal rules will include a whole set of rules and norms which are not themselves captured in law.
That makes it sensible to start, as the work by the New Zealand government reported here has done, with exploratory thinking, rather than jumping too quickly to assumptions about the best approach. The recommendations for areas to investigate further set out in their full report are an excellent set of questions, which will be of interest to governments round the world.
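The gap between the letter of a rule and its interpretation is easy to see in even a toy example. Here is a minimal sketch of a hypothetical eligibility rule, invented for illustration (it is not drawn from any actual legislation or from the New Zealand work):

```python
from datetime import date

# A hypothetical rule, invented for illustration: a benefit is payable to
# applicants aged 65 or over who are "ordinarily resident". Encoding the
# literal words is the easy part; the hard part is that a term like
# "ordinarily resident" carries case law and tacit interpretation which
# no boolean parameter can capture - the point the post is making.
def eligible(birth_date: date, ordinarily_resident: bool, on: date) -> bool:
    # Age in completed years on the assessment date.
    age = on.year - birth_date.year - (
        (on.month, on.day) < (birth_date.month, birth_date.day)
    )
    return age >= 65 and ordinarily_resident

print(eligible(date(1950, 6, 1), True, date(2018, 2, 1)))   # True: aged 67
print(eligible(date(1960, 6, 1), True, date(2018, 2, 1)))   # False: aged 57
```

The code is unambiguous, but only because the ambiguity has been pushed into its inputs: someone – in practice a caseworker, a tribunal or a judge – still has to decide what `ordinarily_resident` means in a given case.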
This is a good post on the very practical difficulties in establishing secure digital identity, in this case for the purpose of voting in elections. It’s included here mainly as a timely but inadvertent illustration of the point in the previous post that even technology fixes are harder than they look. Implementing some form of online voting wouldn’t be too difficult; implementing a secure and trustworthy electoral system would be very hard indeed.
Digital identity (like digital voting) sounds as though it ought to be a problem with a reasonably straightforward solution, but one which looks a lot more complicated when it comes to actually doing it. Like everything with the word ‘digital’ attached to it, that’s partly a problem of technical implementation. But also like everything with the word ‘digital’ attached to it, particularly in the public and political space, it’s a problem with many social aspects too.
This post makes a brave attempt at offering a solution to some of the technical challenges. But the reason why the introduction of identity cards has been highly politically contentious in the UK, but not in other countries, has a lot to do with history and politics and very little to do with technology. So better technology may indeed be better, but that doesn’t in itself constitute a new approach to identity. Even if the better technology is in fact better (and as Paul Clarke spotted, ‘attestation’ is doing a lot more work as a word than it first appears), there are some much wider issues (some flagged by Peter Wells) which would also need to be addressed as part of an overall approach.
This is close to the beginning of what is billed as a series of indefinite length on agility and Agility, which we are promised will at times be polemical and curmudgeonly, and which is tangentially illustrated with references to Alice (the one in Wonderland, not the cryptographic exemplar). The first post in the series set some context; this second one focuses on the question of whether short-cycle software production techniques translate to business strategy. In particular, the argument is that scrum-based approaches to agile work best when the problem space is reasonably well understood, and that this will be the case to different extents at different stages of an overall development cycle.
Dave Snowden is best known as the originator of the Cynefin framework, which is probably enough to guarantee that this series will be thought provoking. He positions scrum approaches within the Cynefin complex domain and as a powerful approach – but not the only or uniquely appropriate one. It will be well worth watching his arguments develop.
Another good provocation from Paul Taylor, arguing this time that solitary thinking is a better source of creative breakthroughs than collaborative activities. It’s not all or nothing – there is a useful distinction drawn between problems where collaboration is valuable (complex, strategic, needing engagement) and those where it isn’t (deep, radical, disruptive, urgent).
But almost more important than that is the observation that few organisations actually value purposeful thinking in the first place – or at least, they don’t create the conditions in which such thinking can readily take place.
If you had to write down a list of innovation methods and techniques, how many could you come up with? However long your list, it’s a fair bet that it won’t have as much on it as this landscape of innovation approaches (also available as a more legible PDF to cut out and keep).
Methods are grouped into four overlapping ‘spaces’. There’s room for debate about what best fits where and there is a broad range from mainstream to eclectic – but that in itself is a good start in challenging assumptions about methods which appear natural and obvious and indeed about the kind of innovation being sought.
How many design innovation toolkits are there? The answer seems to be that there are more than you might think possible. Over a hundred are brought together on this page, which makes it an extraordinarily rich collection. There are lots of interesting-looking things here, some well known, others more obscure – though it’s hard not to come away with the thought that the world’s need for innovation toolkits has by now been over-abundantly met.
The bigger the underlying change, the bigger the second (and higher) order effects. Those effects often get overlooked in looking at the impact of change (and in trying to understand why expected impacts haven’t happened). Benedict Evans has always been good at spotting and exploring the more distant consequences of technology-driven change, for example in his recent piece on ten-year futures. ‘Cascading collapse’ is a good way of putting it: if the long-heralded but slow to materialise collapse of physical retail is beginning to appear, what consequences flow from that?
Today HMRC announced that 92.5% of this year’s tax returns were submitted online. That too has been a slow but inexorable growth, taking twenty years to go from expensive sideshow to near complete dominance. There is more to do to reflect on the cascading collapses that that and other changes will wreak not just on government, but through government to society and the economy more widely.
Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.
They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but operating at a scale and pace which any number of humans could not imitate.
This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.
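The distinction between the two curves can be made concrete with a toy logistic function. The midpoints and steepness below are invented for illustration, not taken from the talk; the point is simply that the same technology can sit far along one S-curve while barely started on the other:

```python
import math

# A toy logistic S-curve: fraction of maturity (0 to 1) reached by a given
# year. The parameters are illustrative assumptions, not data from the talk.
def s_curve(year: float, midpoint: float, steepness: float) -> float:
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

# A hypothetical technology whose *development* curve peaked earlier and
# rose faster than its *deployment* curve.
development = s_curve(2018, midpoint=2010, steepness=0.5)  # well along
deployment = s_curve(2018, midpoint=2025, steepness=0.3)   # much earlier
print(round(development, 2), round(deployment, 2))  # prints: 0.98 0.11
```

On these made-up numbers, the technology is 98% of the way along its development curve but only 11% of the way along its deployment curve – which is the shape of the argument the talk makes about machine learning.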
There are some who argue that the only test of progress is delivery and that the only thing which can be iterated is a live service. That is a horribly misguided approach. There is no point in producing a good answer to a bad question, and lots to be gained from investing time and energy in understanding the question before attempting to answer it. Even for pretty simple problems, badly formed initial questions can generate an endless – and expensive – chain of solutions which would never have needed to exist if that first question had been a better one. Characteristically, Paul Taylor asks some better questions about asking better questions.
Sometimes the best way of thinking about something completely familiar is to treat it as wholly alien. If you had to explain a smartphone to somebody recently arrived from the 1990s, how would you describe what it is and, even more importantly, what it does?
In a way, that’s what this article is doing, painstakingly describing both the very familiar, and the aspects of its circumstances we prefer not to know – cheap phones have a high human and environmental price. An arresting starting point is to consider what people routinely carried around with them in 2005, and how much of that is now subsumed in a single ubiquitous device.
That’s fascinating in its own right, but it’s also an essential perspective for any kind of strategic thinking about government (or any other) services, for reasons pithily explained by Benedict Evans:
Periodic reminder: maybe 100 million people use any kind of pro PC app. 3 billion people have a smart phone, and that will rise to 5 billion people in the next few years https://t.co/NUtiAoOfS6
— Benedict Evans (@BenedictEvans) November 18, 2017
Anything that you can't do on mobile/tablet and can do on a PC is something that 90%+ of people couldn't actually do on a PC either.
— Benedict Evans (@BenedictEvans) July 14, 2017
Smartphones are technological marvels. But they are also powerful instruments of sociological change. Understanding them as both is fundamental to understanding them at all.
This wide ranging and fast moving report hits the Strategic Reading jackpot. It provides a bravura tour of more of the topics covered here than is plausible in a single document, ticking almost every category box along the way. It moves at considerable speed, but without sacrificing coherence or clarity. That sets the context for a set of radical recommendations to government, based on the premise established at the outset that incremental change is a route to mediocrity, that ‘status quo plus’ is a grave mistake.
Not many people could pull that off with such aplomb. The pace and fluency sweep the reader along through the recommendations, which range from the almost obvious to the distinctly unexpected. There is a debate to be had about whether they are the best (or the right) ways forward, but it’s a debate well worth having, for which this is an excellent provocation.
This is an artful piece – the first impression is of a slightly unstructured stream of consciousness, but underneath the beguilingly casual style, some great insights are pulled out, as if effortlessly. Halfway down, we are promised ‘three big ideas’, and the fulfilment does not disappoint. The one which struck home most strongly is that we design institutions not to change (or, going further still, the purpose of institutions is not to change). There is value in that – stability and persistence bring real benefits – but it’s then less surprising that those same institutions struggle to adapt to rapidly changing environments. A hint of an answer comes with the next idea: if everything is the product of a design choice, albeit sometimes an unspoken and unacknowledged one, then it is within the power of designers to make things differently.
The problem with good policies, badly implemented, is not primarily the bad implementation; it is that the bad implementation strongly suggests they weren’t good policies to start with. That’s the proposition advanced by this post (one which is interesting to read in parallel with The Blunders of our Government).
There are few examples of good but badly implemented policies because, in this approach, policy making is not – or not just – the grand sweep of a speech, but is the grinding detail of working through real world implications. Failure of implementation is therefore a strong indicator of a bad policy – akin, perhaps, to the idea that if you can’t explain a complicated thing simply, you probably don’t understand it.
How would you organise to impede transformational modernisation? You might set your face against all things digital, you might add as much stultifying process as you could find, you might just do things the way they have always been done.
This post explores how best not to do digital transformation, which turns out to be rather an interesting way of thinking about what it takes to do it successfully. There is a risk, though, of its becoming a form of confirmation bias: of course all those old ways were bad; of course the new ways are good. The risk is not that that is untrue, it is that it is not the whole truth. So perhaps there is another, harder, exercise to do after this one: assuming that the people who came before were neither malign nor idiots, why are things the way they are? Which aspects of the current way of doing things have genuinely outlived their usefulness, and which are there for a reason? That’s not an argument for just keeping things as they were, but it may be an argument for making sure that we don’t throw away solutions without being clear what problem they belong to.
Rules are made to be broken. That’s an idea with considerable support from those on the receiving end of rules, rather less so from those who set them. Rules are the very essence of the Weberian bureaucracy which infuses governments and there are good reasons – fairness, clarity, consistency – why that is so. But that also means that bureaucratic organisations are designed to frustrate evolution and thus innovation – which is perhaps one reason why bureaucracies rarely communicate a sense of being on the cutting edge of innovation. And while bureaucracy is often used as a pejorative synonym for government, in this sense almost all organisations of any size are bureaucracies. Becoming adaptable and responsive isn’t just about breaking rules, it’s about adopting the expectation that rules are made to be broken.
What we get wrong about technology boils down to two things. The first is that simple, cheap and pervasive – and often near-invisible – technologies have more transformational power than things which are more obviously new and shiny; affordability beats complexity. The second mistake is to think that the impact of a new technology is driven by its technical availability, when the key date is its transition to economic and social availability, with lags which are sometimes very short but which can be very long indeed. This essay draws on examples from the invention of printing onwards to make the point that we might need to look in less obvious places for the technologies which will drive the next round of change.
All of that’s another way of putting the thought pithily expressed by Roy Amara:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
The people with the shiny ideas and the shiny kit can see that change is essential and just know that their ideas are right as well as shiny. Unaccountably, the less shiny people with the unfashionable kit fail immediately to see the inherent rightness of the cause. This post has the superficial form of a rant, but it is a rant based on some important observations and a question without an easy answer: how do transformation teams understand and address the user needs of those whose fate is to be transformed?
Technology is never neutral. What gets developed, how it gets developed, and how it gets used are all driven by social, economic and political factors. People who build services are never neutral either and can certainly never be normal users of their own services. This article looks behind the internet of things to reflect on how completely frictionless transactions move power from consumer to provider, how what is normal for designers of such services is very different from what is normal for many of those who will find themselves using them, and how technology – and the data it moves and organises – is always about power.