Data and AI Ethics Futures

Provocation: Redesigning Artificial Intelligence – From Australia Out

Ellen Broad

Ellen not only always has interesting things to say, she is also unusually effective in finding interesting ways of saying them. This latest piece defies categorisation. It is an essay about AI. It is a reflection on extreme utilitarianism. It is a call to action on the hidden costs of social harmony. It is about edge cases where the edges are sharp and cause harm to those whose lives place them there. It is a call to bring the messiness of cybernetics and systems to the delusional clarity of dehumanised AI. It is a discussion of issues not discussed. It is a challenge to do better.

We are more aware of the threads that bind us together. We have had a glimpse of the fragility of the foundations on which our lives of easy comfort are built, and of the way that comfort is bought with the discomfort of others. And so in this space there is room to imagine some place else.

And as well as all those things, it is an audio-visual experience, with a soundscape which drifts beyond music and imagery which is not quite illustration. The tone is neither soothing nor haranguing. But in its matter of factness there is great power.

Data and AI Policy and analysis

Data as institutional memory

Adam Locker – Medium

There’s more to this deceptively self-deprecating piece than meets the eye. Fragmented data cannot support integrated services, still less integrated organisations. Deep understanding and effective management of data are therefore not a minor issue for techie obsessives, but are fundamental to organisational success.

As so often, the diagnosis is simple (which of course doesn’t stop it being hard); acting on that diagnosis is complicated, and even harder. This post brings the two together through an account of making it work in one part of government.

Data and AI

Introducing the Government Data Graveyard: the numbers we’ve stopped measuring

Anna Powell-Smith – Missing Numbers

Counting things is boring and costs time and money which could be better spent on something less boring.

Having counted things – particularly having counted them for a long time – provides incalculable value in understanding what has been done and what might be done.

If you need a fifty year data series, you need collection to have started fifty years ago – starting now will do you no good at all (though it might cause your memory to be blessed half a century on).

It is of course easy to tell what we wish our predecessors of fifty (or ten, or five) years ago had done, rather less easy for them to know that then (or for us to know that for our successors now). That is in a way just a specialised version of a much more general problem of long term public investment, where there is a respectable argument that in undervaluing long term benefits, we end up with fewer long term assets being created than would be optimal – which applies as much, and perhaps more obviously, to investment in physical infrastructure.

None of that is quite what this post is about – it’s based on a simple observation not only that some data which used to be collected is no longer collected, but that data on what data is no longer collected is itself not collected. Maybe that’s fine – nobody could seriously argue that data once having been collected must always be collected for ever more. But maybe the decisions on what to stop and what to continue have been driven more by short term expediency than long term value.

Data and AI

The Hidden Costs of Automated Thinking

Jonathan Zittrain – New Yorker

Being understood is not a precondition to being useful. The history of medicine is a history over centuries and millennia of spotting efficacy without the least understanding of the mechanisms by which that efficacy is achieved. Nor is that limited to pre-scientific times. There are still drugs which are prescribed because they work, without any understanding of how they work.

Zittrain calls that intellectual debt (by unspoken analogy with technical debt), where theoretical understanding lags behind pragmatic effectiveness. The problem is not that it exists, as the medical examples show; it is that machine learning takes it to a new level, so that our understanding of the links between cause and effect comes to have more to do with association than with explanation. For any single problem, that may be no bad thing: it can be more important for the connections to be accurate than to be understood. But the cumulative effect of the mounting intellectual debt has the potential to be rather less benign.

Data and AI

The Other Half of the Truth: Staying human in an algorithmic world

Sandra Wachter – OECD Forum

The problem of AI bias, once ignored, then a minority concern, is now well into the stage of being popularised and more widely recognised. It’s a debate to which Sandra Wachter has herself been a thoughtful contributor. The fact that AI can replicate and reinforce human biases is of course critically important, but it risks obscuring the fact that the seed of the bias is unautomated human behaviour. So AI which is no more biased than humans should be seen as a pretty minimal target, rather than the ceiling of aspiration.

This post is a manifesto for doing better, for rejecting the idea that the new ways need only be no worse than the old. It’s not about specific solutions, but it is an important framing of the question.

Data and AI Democracy Ethics

Rethink government with AI

Helen Margetts and Cosmina Dorobantu – Nature

Much of what is written about the use of new and emerging technologies in government fails the faster horse test. It is based on the tacit assumption that technology can be different, but that the structure of the problems, services and organisations it is applied to remains fundamentally the same.

This article avoids that trap and is clear that the opportunities (and risks) from AI in particular look rather different – and of course that they are about policy and organisations, not just about technology. But arguably even this is just scratching the surface of the longer term potential. Individualisation of services, identification of patterns, and modelling of alternative realities all introduce new power and potential to governments and public services. Beyond that, though, it becomes possible to discern the questions that those developments will prompt in turn. The institutions of government and representative democracy are shaped by the information and communications technologies of past centuries and the more those technologies change, the greater the challenge to the institutional structures they support. That’s beyond the scope of this article, but it does start to show why those bigger questions will need to be answered.

Data and AI Government and politics

Introducing Missing Numbers: a blog on the data the government should collect, but doesn’t

Anna Powell-Smith – Missing Numbers

Sometimes what is missing can be as telling as what is present. The availability of data drives what can be said, what can be understood and what can be proposed. So the absence of data can all too easily lead to an absence of attention – and of course, even where there is attention, to an absence of well informed debate and decision making. So there is something important and powerful about looking for the gaps and trying to fill them. This new blog is trying to do exactly that and will be well worth keeping an eye on.

Data and AI

Killing many birds with one stone

Alan Mitchell – Mydex

Debates about personal data have a tendency to be more circular than they are productive. There is – it appears – a tension between individual privacy and control and the power unlocked by mass data collection and analysis. But because the current balance (or imbalance) between the two is largely an emergent property of the system, there is no reason to think that things have to be the way they are just because that is the way they are.

Given, though, that we are where we are, there are two basic approaches to doing something about it. One is essentially to accept the current system but to put controls of various kinds over it to ameliorate the most negative features – GDPR is the most prominent recent example, which also illustrates that different political systems will put the balance point in very different places. The other approach is to look more fundamentally at the underlying model and ask what different pattern of benefits might come from a more radically different approach. That’s what this post does, systematically coming up with what will look to many like a more attractive set of answers.

Mydex has been building practical systems based on these principles for a good while, so the post is based on solid experience. But therein lies the problem. Getting off the current path onto a different one is in part a technical and architectural challenge, but it is even more a social, political and economic one. As ever, the hard bit is not describing a better future, but working out how to get there from here.

Data and AI Innovation

Arguments against the autonomous vehicle utopia

Alexis Madrigal – The Atlantic

It should by now be beyond obvious that technology is never just about the technology, but somehow the hype is always with us. This article is a useful counter, listing and briefly explaining seven reasons why autonomous vehicles may not happen and may not be an altogether good thing if they do.

It’s worth reading not so much – perhaps not even mainly – for its specific insights as for its method: thinking about the sociology and economics of technology may give more useful insights than thinking just about the technology itself.

Data and AI

A New Approach to Understanding How Machines Think

Been Kim – Quanta Magazine

The problem of the AI black box has been around for as long as AI itself: if we can’t trace how a decision has been made, how can we be confident that it has been made fairly and appropriately? There are arguments – for example by Ed Felten – that the apparent problem is not real, that such decisions are intrinsically no more or less explicable than decisions reached any other way. But that doesn’t seem to be an altogether satisfactory approach in a world where AI can mirror and even amplify the biases endemic in the data it draws on.

This interview describes a very different approach to the problem: building a tool which retrofits interpretability to a model which may not have been designed to be fully transparent. At one level this looks really promising: ‘is factor x significant in determining the output of the model?’ is a useful question to be able to answer. But of course real world problems throw up more complicated questions than that, and there must be a risk of infinite recursion: our model of how the first model reaches a conclusion itself becomes so complicated that we need a model to explain its conclusions…

But whether or not that is a real risk, there are some useful insights here into identifying materiality in assessing a model’s accuracy and utility.
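
By way of illustration only – a minimal sketch using generic permutation importance on a standard scikit-learn dataset, not necessarily the specific tool described in the interview – one crude way of asking ‘does factor x matter to this model’s output?’ is to shuffle that factor and see how far the model’s performance falls:

    # Illustrative sketch only: generic permutation importance, not the tool discussed above.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the model's accuracy drops:
    # a large drop suggests that feature is significant to the model's output.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

That answers the narrow question of whether a factor matters to this particular model; it says nothing about why it matters, which is where the recursion worry starts.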

Data and AI

Your robot needs a passport

David Birch – Wired

David Birch is one of a pretty small group of people who write sense about money and identity – and he is pretty much unique in doing so with wit and lightness of touch. This short article draws out the connection between identity and attribution. We will increasingly need to know and trust the attributes of robots and systems, and we will increasingly be interested in what attributes people assert about themselves – and at the intersection of those needs there will be a particularly precious attribute:

In time, IS_A_PERSON will be the most valuable credential of all.

Data and AI

Help us start a data revolution for government

Kit Collingwood and Robin Linacre – Data in government

There is lots being written – a small subset of it captured on Strategic Reading – about data and its implications as a driver of new ways of doing things and new things which can be done. There’s a lot written about the strategic (and ethical and legal…) issues and of course there is a vast technical literature. What there seems to be less of is more practical approaches to making data useful and used. That’s a gap which this post starts to fill. It’s not only full of good sense in its own right, it’s also a pointer to an approach which it would be good to see more of: given a strategic opportunity or goal, what are the practical things which need to be done to enhance the probability of success? Strategising is the easy bit of strategy; getting things done to move towards the goal is a great deal harder.

Data and AI

Why Data Is Never Raw

Nick Barrowman – New Atlantis 

There is increasing – if belated – recognition that analysis and inference built on data is vulnerable to bias of many different kinds and levels of significance. But there is a lingering unspoken hope that data itself is somehow still pure: a fact is, after all, a fact. Except that of course it isn’t, and as this post neatly argues, while raw data may sound less underhand than cooked data, its apparent virtue can be illusory:

In the ordinary use of the term “raw data,” “raw” signifies that no processing was performed following data collection, but the term obscures the various forms of processing that necessarily occur before data collection.

Data and AI Future of work

Tired of the same old clichés about the future of work? You’re not alone

Benedict Dellot – RSA

There is no shortage of material on the future of work in general, or on its displacement by automation in particular, but much of it has a strong skew to the technocratically simplistic (though posts chosen for sharing here are selected in part with the aim of avoiding that trap).

There has been a steady stream of material from the RSA which takes a more subtle approach, of which this is the latest. It takes the form of a set of short essays from a variety of perspectives, the foreword to which is also the accompanying blog post. The questions they address arise from automation, but go far beyond the first order effects. What are the implications of the emergence of a global market for online casual labour? Does automation drive exploitation or provide the foundations for a leisured society? Given that automation will continue to destroy jobs (as it always has), will they get replaced in new areas of activity (as they always have – so far)?

Buried in the first essay is an arresting description of why imminent exponential change is hard to spot, even if things have been changing exponentially:

because each step in an exponential process is equal to the sum of all the previous steps, it always looks like you are at the beginning, no matter how long it has been going on.

And that in many ways is the encapsulation of the uncertainty around this whole set of questions. There is a technological rate of change, driven by Moore’s law and its descendants, and there is a socio-economic rate of change, influenced by but distinct from the technological rate of change. It is in their respective rates and the relationship between them that much controversy lies.
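
To see the arithmetic behind the quoted observation – a minimal sketch, assuming a simple doubling process rather than any particular real-world trend – a few lines of Python are enough:

    # Illustrative sketch only: in a doubling sequence 1, 2, 4, 8, ...
    # each new step equals the sum of every previous step, plus one.
    steps = [2 ** n for n in range(20)]
    for n in range(1, len(steps)):
        assert steps[n] == sum(steps[:n]) + 1
    print("Every step roughly equals the whole of the history that preceded it.")

However long the process has been running, the latest step dwarfs its entire history, which is why, from the inside, it always feels like the beginning.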

Data and AI

Is this AI? We drew you a flowchart to work it out

Karen Hao – MIT Technology Review

What is artificial intelligence? It’s a beguilingly simple question, but one which lacks a beguilingly simple answer. There’s more than one way to approach the question, of course – Chris Yiu provides mass exemplification, for example (his list had 204 entries when first linked from here in January, but has now grown to 501). Terence Eden more whimsically dives down through the etymology, while Fabio Ciucci provides a pragmatic approach based on the underlying technology.

This short post takes a different approach again – diagnose whether what you are looking at is AI by means of a simple flowchart. It’s a nice idea, despite inviting some quibbling about some of the detail (“looking for patterns in massive amounts of data” doesn’t sound like a complete account of “reasoning” to me). And it’s probably going to need a bigger piece of paper soon.

Data and AI

Show Me Your Data and I’ll Tell You Who You Are

Sandra Wachter – Oxford Internet Institute

The ethical and legal issues around even relatively straightforward objectively factual personal data are complicated enough. But they seem simple beside the further complexity brought in by inferences derived from that data. Inferences are not new, of course: human beings have been drawing inferences about each other since long before they had the assistance of machines. But as in other areas, big data makes a big difference.

Inferences are tricky for several reasons. The ownership of an inference is clearly something different from ownership of the information from which the inference is drawn (even supposing that it is meaningful to talk about ownership in this context at all). An inference is often a propensity, which can be wrong without being falsifiable – ‘people who do x tend to like y‘ may remain true even if I do x and don’t like y. And all that gets even more tricky over time – ‘people who do x tend to become y in later life’ can’t even be denied or contradicted at the individual level.

This lecture explores those questions and more, examining them at the intersection of law, technology and ethics – and then asks what rights we, as individuals, should have about the inferences which are made about us.

The same arguments are also explored in a blog post written by Wachter with her collaborator Brent Mittelstadt and in very much more detail in an academic paper, also written with Mittelstadt.

Data and AI

How solid is Tim’s plan to redecentralize the web?

Irina Bolychevsky – Medium

As a corollary to the comment here a few weeks back on Tim Berners-Lee’s ideas for shifting the power balance of the web away from data-exploiting conglomerates and back towards individuals, this post is a good clear-headed account of why his goal – however laudable – may be hard to achieve in practice.

What makes it striking and powerful is that it is not written from the perspective of somebody critical of the approach. On the contrary, it is by a long-standing advocate of redecentralising the internet, but one who has a hard-headed appreciation of what would be involved. It is a good critique, for example addressing the need to recognise that data does not perfectly map to individuals (and therefore what data counts as mine is nowhere near as straightforward as might be thought) and that for many purposes the attributes of the data, including the authority with which it is asserted, can be as important as the data itself.

One response to that and other problems could be to give up on the ambition for change in this area, and leave control (and thus power) with the incumbents. Instead, the post takes the more radical approach of challenging current assumptions about data ownership and control at a deeper level, arguing that governments should be providing the common, open infrastructure which would allow very different models of data control to emerge and flourish.

Data and AI Government and politics

Real-time government

Richard Pope – Platform Land

New writing from Richard Pope is always something to look out for: he has been thinking about and doing the intersection of digital and government more creatively and for longer than most. This post is about the myriad ways in which government is not real time – you can’t track the progress of your benefit claim in anything like the way in which you can track your Amazon delivery. And conversely, at any given moment, Amazon has a very clear picture of who its active customers are and what they are doing, in a way which is rather less true of operators of government services.

He is absolutely right to make the point that many services would be improved if they operated – or at least conveyed information – in real time, and he is just as right that converted (rather than transformed) paper processes and overnight batch updates account for some of that. So it shouldn’t detract from his central point to note that some of his examples are slightly odd ones, which may come from an uncharacteristic confusion between real time and event triggered. There is a notification to potential school leavers of their new national insurance number – but since children’s sixteenth birthdays are highly predictable, that notification doesn’t need to be real time in the sense meant here. It was very useful to be told that my passport was about to expire – but since they were helpfully giving me three months’ notice, the day and the hour of the message was pretty immaterial.

Of course there are government services which should operate on less leisurely cycles than that, and of course those services should be as fast and as transparent as they reasonably can be. But perhaps the real power of real-time government is from the other side, less in shortening the cycle times of service delivery and much more in shortening the cycle times of service improvement.

Data and AI

The Truth About Algorithms

Cathy O’Neil – RSA

This is a brilliant two and a half minute animation, explaining what algorithms are, what they are not, and why they are inherently not neutral.

Data and AI

10 questions to answer before using AI in public sector algorithmic decision making

Eddie Copeland – NESTA

A few months ago, Eddie Copeland shared 10 Principles for Public Sector use of Algorithmic Decision Making. They later apparently morphed into twenty questions to address, and now the twenty have been slimmed down to ten. They are all good questions, but one very important one seems to be missing – how can decisions based on the algorithm be challenged? (and what, therefore, do people affected by a decision need to understand about how it was reached?)