Personalized content has been the goal of many in the technical communication and content strategy communities for a long time now. And we encounter personalized content every day. Google “purple left handed widgets” and you will see ads for purple left handed widgets all over the web for months afterward. Visit Amazon and every page you see will push products based on your previous purchases. Visit Facebook …
Well, and there’s the rub, as Mark Zuckerberg is summoned before Congress for a good and thorough roasting. Because what Cambridge Analytica did was personalized content, pure and simple, and no one is happy about it.
As Jacob Metcalf points out in “Facebook may stop the data leaks, but it’s too late: Cambridge Analytica’s models live on” in the MIT Technology Review, the issue is not simply that Cambridge Analytica had access to data it should not have had (access which has since been cut off), but that they used that data to develop models of persuasion that can be used to customize content in the future, and which can be further refined using any other datasets they can get their hands on.
Concerns have been growing for a long time about the degree of influence that the ability to target ads at individuals can have on their shopping behavior. When the same techniques are used to target their voting behavior, the alarm bells really start to go off. The concerns about the Cambridge Analytica case are not simply that they had access to data they shouldn’t have had, but that they engaged in a process of message manipulation that many consider unethical on its face. The reason for wanting to restrict the access people have to data about us is precisely that we fear they will use that data unethically. The key ethical question is what the owner of the data will do with it.
But there is really only one thing a content company like Facebook can do with it: show customized content, either by building and refining models from it or by using information on individuals to tailor content to those individuals.
Service and manufacturing companies could perhaps use such data to change the products and services they offer. Law enforcement agencies could use it to track potential or actual criminal behavior. Governments could use it to track dissidents. But organizations in the content business, which includes manufacturing and service companies who market their products, governments who communicate with their citizens, politicians who run for office, and even law enforcement agencies who issue bulletins to the public and cooperate with prosecutors to try the accused, use such data simply to personalize content.
The ethics of personal data collection, therefore, are the ethics of personalized content. And since the models derived from personal data can live on and be refined even after access to the data has been cut off, the ethics of personalized content go well beyond access to particular data sets.
And so the question is, is personalizing content ethical?
Certainly it is possible to personalize content for benign purposes, to personalize it in such a way that all the benefit goes to the consumer and none of it to the vendor. (I’m not sure what would motivate that, or how you would prove it, but as a thought experiment we can at least imagine it.) But professional ethics require more than the possibility of doing good. Professional ethics are about avoiding both the appearance and the occasion of doing evil.
This is the difference between morality and professional ethics. Morality is about personally avoiding sin in individual cases. Professional ethics are about conducting your professional affairs in such a way as to avoid the imputation of sin, and as much as possible to remove the temptation to sin from the practitioner. Moral teaching bids us do good deeds by stealth; professional ethics requires us to act with full transparency, to expose all our deeds to public scrutiny. They also require us to refrain entirely from activities where the possibility of sin is so great, or the commission of sin is so hard to detect, that neither we nor the public can be confident that we have acted ethically.
Personalized content may well be one of those areas so fraught with the possibility of sin that professional ethics should require us to forswear it altogether.
The moral hazard of personalized content
What Cambridge Analytica did was use structured data to drive personalized content to influence customer behavior. They are not shy about it. Their homepage proclaims:
Cambridge Analytica uses data to change audience behavior. Visit our Commercial or Political divisions to see how we can help you.
There is a huge moral hazard in any such endeavor. However much you may think you are helping your customers get the information they need, it is ultimately your vision of what they need that is driving the model. It is the advancement of your own goals that you are seeking to achieve.
All communication is like this, of course. All content seeks to influence people. As I have written before, the purpose of all communication is to change behavior, and if you want to communicate well you need to have a firm idea of whose behavior you want to change and what you can say that will produce the change you are looking for.
The issue is, are there methods that it is unethical to use to achieve the behavioral change you are looking for?
We could take an all’s-fair-in-love-and-war attitude to this. The public, we could reason, knows that the things they read are intended to influence them. Schools take pains to teach their students this under the banner of critical thinking. The ethical presumption, therefore, is that their knowledge that we are trying to influence them is a sufficient prophylactic against covert hypnosis: the reader recognizes the attempt and is able to make a mature judgement about it.
Of course, we have always known that that is not strictly true, that the power to persuade is real power. But it is also power that has limits. Though it has always been able, under certain circumstances, to move civilized people to commit the vilest atrocities (as in Nazi Germany, for example), it has always been limited by people’s innate moral sense and by the power of persuasion wielded in opposition.
In other words, in the history of communication to date, there has always been a gap that the propagandist could not cross. They could not individually address the particular hopes, fears, and prejudices of each individual because they did not have access to the data or the means to customize the message. They had to issue a general message based on an appeal to general sentiment, and that always leaves open some room for the critical faculties of the recipient to operate, and for opposing arguments to find a way in. The propagandist might get to our doorstep, but they could not get into our heads, and therein lay a saving measure of freedom.
But the combination of neurological science and big data opens up the possibility of the means of persuasion becoming a whole lot more powerful. If neurological science tells the propagandist exactly where the buttons are and big data lets them identify exactly how to push them in each person individually, the propagandist, like the vampire, can cross the threshold and enter the individual mind, and the gap that provides our last measure of freedom is gone. Even if the effect is not permanent (Cambridge Analytica did get found out, after all) it allows the propagandist to wield enormous influence, particularly if they time it right before a critical event such as an election.
In other words, if our engines of persuasion become so sophisticated, so targeted, so attuned to the particulars of our neurological makeup, that the degree of critical thinking that we can reasonably hope to develop in the citizenry is no prophylactic against it, then we, as professional communicators, have lost our moral cover. Buyer beware cannot be our excuse if we have removed any possibility of wariness from the buyer.
A method that cannot be detected or countered in the time and with the tools available to the person on whom it is used, therefore, cannot be considered an ethical method, even when used for a moral purpose. If nothing else, it fails the basic ethical requirement of transparency. The temptation to sin is too great and the detection of sin is too difficult for such a method to ever be considered ethical.
Are we actually there yet? A big lie does not necessarily need big data. By no reasonable measure was the US election of 2016 a calamity on the scale of the German election of 1932. It may well be that the chaotic democracy of social media is actually an antidote to manipulation more powerful than the forms of manipulation that social media can presently achieve.
But let’s suppose that the technology driving personalized content is not mature enough yet to strip the recipient of their freedom, and therefore strip the author of their ethical cover. The point, surely, is to mature it to the point where it is sophisticated enough to do just that. And if we are going down that road, is it a valid ethical argument to say that everything is fine because we have not got there yet? Surely the pursuit of unethical means is itself unethical.
Personalized content driven by sophisticated predictive behavioral models and extensive data on individuals and groups is potentially a tool of persuasion against which no reasonable defence is possible, and as much as we may proclaim the innocence of our intentions, our intentions cannot be purer than our hearts, and we are all apt to grossly overestimate the purity of our hearts.
This is the reason we have ethics in a profession. It is not to let us go right up to the line, but rather to hold us back from even approaching the line, knowing that if we get too near to the line we are inevitably going to step over it. Not only is a person with fiduciary responsibility required not to have a conflict of interest, they are required to avoid even the appearance of a conflict of interest. The only way to be sure we don’t cross the line is to stop ourselves well short of it.
And because ethics is at least in part about public perception of your methods, how the public feels about things is very much an ethical consideration, and it is pretty clear that the public has grave concerns about personalized content, concerns which the Cambridge Analytica case has only made more grave. If there is widespread public consensus that the practice is unethical, chances are it actually is unethical, if for no other reason than that demonstrating that you are acting ethically is itself an ethical obligation.
But the really scary thought is this: if we get really good at this, the public’s objections will vanish, not because the public has decided for itself that it likes this degree of personalization, but because we will have used personalized content to convince them that they do. In such a world, there is clearly no transparency at all, and if there is no transparency, there is no ethics. The ultimate ethical objection is that if we go too far down this road, all ethical objections will be snuffed out. Not answered; obliterated.
And so I ask, where should professional communicators draw the ethical line on personalized content? Wherever we draw it, it has to be consistent with transparency. One way to draw that line is to say that it is unethical to do data-driven personalized content at all. If we don’t draw the line there, where do we draw it?