Is personalized content unethical?

By Mark Baker | 2018/04/10

Personalized content has been the goal of many in the technical communication and content strategy communities for a long time now. And we encounter personalized content every day. Google “purple left-handed widgets” and you will see ads for purple left-handed widgets all over the web for months afterward. Visit Amazon and every page you see will push products based on your previous purchases. Visit Facebook …

Well, and there’s the rub, as Mark Zuckerberg is summoned before Congress for a good and thorough roasting. Because what Cambridge Analytica did was personalized content, pure and simple, and no one is happy about it.

As Jacob Metcalf points out in “Facebook may stop the data leaks, but it’s too late: Cambridge Analytica’s models live on” in the MIT Technology Review, the issue is not simply that Cambridge Analytica had access to data it should not have had (access that has now been removed), but that they used that data to develop models of persuasion that can be used to customize content in the future, and which can be further refined using any other datasets they can get their hands on.

Concerns have been growing for a long time about the degree of influence that the ability to target ads at individuals can have on their shopping behavior. When the same techniques are used to target their voting behavior, the alarm bells really start to go off. The concerns about the Cambridge Analytica case are not simply that they had access to data they shouldn’t have, but that they engaged in a process of message manipulation that many consider unethical on its face. The reason for wanting to restrict the access people have to data about us is precisely that we fear they will use that data unethically. The key ethical question is what the owner of the data will do with it.

But there is really only one thing a content company like Facebook can do with it: show customized content, either by building and refining models from it or by using information on individuals to tailor content to those individuals.

Service and manufacturing companies could perhaps use such data to change the products and services they offer. Law enforcement agencies could use it to track potential or actual criminal behavior. Governments could use it to track dissidents. But organizations in the content business, which includes manufacturing and service companies who market their products, governments who communicate with their citizens, politicians who run for office, and even law enforcement agencies who issue bulletins to the public and cooperate with prosecutors to try the accused, use such data simply to personalize content.

The ethics of personal data collection, therefore, are the ethics of personalized content. And since the models derived from personal data can live on and be refined even after access to the data has been cut off, the ethics of personalized content go well beyond access to particular data sets.

And so the question is, is personalizing content ethical?

Certainly it is possible to personalize content for benign purposes, to personalize it in such a way that all the benefit goes to the consumer and none of it to the vendor. (I’m not sure what would motivate that, or how you would prove it, but as a thought experiment we can at least imagine it.) But professional ethics require more than the possibility of doing good. Professional ethics are about avoiding both the appearance and the occasion of doing evil.

This is the difference between morality and professional ethics. Morality is about personally avoiding sin in individual cases. Professional ethics are about conducting your professional affairs in such a way as to avoid the imputation of sin, and as much as possible to remove the temptation to sin from the practitioner. Moral teaching bids us do good deeds by stealth; professional ethics requires us to act with full transparency, to expose all our deeds to public scrutiny. They also require us to refrain entirely from activities where the possibility of sin is so great, or the commission of sin is so hard to detect, that neither we nor the public can be confident that we have acted ethically.

Personalized content may well be one of those areas so fraught with the possibility of sin that professional ethics should require us to forswear it altogether.

The moral hazard of personalized content

What Cambridge Analytica did was use structured data to drive personalized content to influence customer behavior. They are not shy about it. Their homepage proclaims:

Cambridge Analytica uses data to change audience behavior. Visit our Commercial or Political divisions to see how we can help you.

There is a huge moral hazard in any such endeavor. However much you may think you are helping your customers get the information they need, it is ultimately your vision of what they need that is driving the model. It is the advancement of your own goals that you are seeking to achieve.

All communication is like this, of course. All content seeks to influence people. As I have written before, the purpose of all communication is to change behavior, and if you want to communicate well you need to have a firm idea of whose behavior you want to change and what you can say that will produce the change you are looking for.

The issue is, are there methods that it is unethical to use to achieve the behavioral change you are looking for?

We could take an all’s-fair-in-love-and-war attitude to this. The public, we could reason, knows that the things they read are intended to influence them. Schools take pains to teach their students this under the banner of critical thinking. The ethical presumption, therefore, is that their knowledge that we are trying to influence them is sufficient prophylactic against covert hypnosis: the reader recognizes the attempt and is able to make a mature judgement about it.

Of course, we have always known that that is not strictly true, that the power to persuade is real power. But it is also power that has limits. Though it has always been able, under certain circumstances, to move civilized people to commit the vilest atrocities (as in Nazi Germany, for example), it has always been limited by people’s innate moral sense and by the power of persuasion wielded in opposition.

In other words, in the history of communication to date, there has always been a gap that the propagandist could not cross. They could not individually address the particular hopes, fears, and prejudices of each individual because they did not have access to the data or the means to customize the message. They had to issue a general message based on an appeal to general sentiment, and that always leaves open some room for the critical faculties of the recipient to operate, and for opposing arguments to find a way in. The propagandist might get to our doorstep, but they could not get into our heads, and therein lay a saving measure of freedom.

But the combination of neurological science and big data opens up the possibility of the means of persuasion becoming a whole lot more powerful. If neurological science tells the propagandist exactly where the buttons are and big data lets them identify exactly how to push them in each person individually, the propagandist, like the vampire, can cross the threshold and enter the individual mind, and the gap that provides our last measure of freedom is gone.  Even if the effect is not permanent (Cambridge Analytica did get found out, after all) it allows the propagandist to wield enormous influence, particularly if they time it right before a critical event such as an election.

In other words, if our engines of persuasion become so sophisticated, so targeted, so attuned to the particulars of our neurological makeup, that the degree of critical thinking that we can reasonably hope to develop in the citizenry is no prophylactic against it, then we, as professional communicators, have lost our moral cover. Buyer beware cannot be our excuse if we have removed any possibility of wariness from the buyer.

A method that cannot be detected or countered in the time and with the tools available to the person on whom it is used, therefore, cannot be considered an ethical method, even when used for a moral purpose. If nothing else, it fails the basic ethical requirement of transparency. The temptation to sin is too great and the detection of sin is too difficult for such a method to ever be considered ethical.

Are we actually there yet? A big lie does not necessarily need big data. By no reasonable measure was the US election of 2016 a calamity on the scale of the German election of 1932. It may well be that the chaotic democracy of social media is actually an antidote to manipulation more powerful than the forms of manipulation that social media can presently achieve.

But let’s suppose that the technology driving personalized content is not mature enough yet to strip the recipient of their freedom, and therefore strip the author of their ethical cover. The point, surely, is to mature it to the point where it is sophisticated enough to do just that. And if we are going down that road, is it a valid ethical argument to say that everything is fine because we have not got there yet? Surely the pursuit of unethical means is itself unethical.

Personalized content driven by sophisticated predictive behavioral models and extensive data on individuals and groups is potentially a tool of persuasion against which no reasonable defence is possible, and as much as we may proclaim the innocence of our intentions, our intentions cannot be purer than our hearts, and we are all apt to grossly overestimate the purity of our hearts.

This is the reason we have ethics in a profession. It is not to let us go right up to the line, but rather to hold us back from even approaching the line, knowing that if we get too near to the line we are inevitably going to step over it. Not only is a person with fiduciary responsibility required not to have a conflict of interest, they are to avoid even the appearance of conflict of interest. The only way to be sure we don’t cross the line is to stop ourselves well short of it.

And because ethics is at least in part about public perception of your methods, how the public feels about things is very much an ethical consideration, and it is pretty clear that the public has grave concerns about personalized content, concerns which the Cambridge Analytica case has only made more grave. If there is widespread public consensus that the practice is unethical, chances are it actually is unethical, if for no other reason than that demonstrating that you are acting ethically is itself an ethical obligation.

But the really scary thought is this: if we get really good at this, the public’s objections will vanish, not because the public has decided for itself that it likes this degree of personalization, but because we will have used personalized content to convince them that they do. In such a world, there is clearly no transparency at all, and if there is no transparency, there is no ethics.  The ultimate ethical objection is that if we go too far down this road, all ethical objections will be snuffed out. Not answered; obliterated.

And so I ask, where should professional communicators draw the ethical line on personalized content? Wherever we draw it, it has to be consistent with transparency. One way to draw that line is to say that it is unethical to do data-driven personalized content at all. If we don’t draw the line there, where do we draw it?

11 thoughts on “Is personalized content unethical?”

  1. David Worsick

    What bothers me is that you don’t need to control everybody: just the majority. I’ve read about the pogroms and massacres humans have committed in the past and it’s usually a few emotional triggers that launch a mob into becoming evil. Hitler’s popularity grew steadily even before his appointment as Chancellor because of his propaganda and his book. If you can get into the minds of a large portion of the population just using such primitive tools, what can you do nowadays? People already tend to filter their world view, reading only about opinions they already share. That doesn’t require much of a push at a tipping point, and history proves that the sensible, reflective portion of a group can always be overwhelmed.

    1. Mark Baker Post author

      Thanks for the comment, David.

      Agreed, under the right conditions it does not take much to set the mob off into a frenzy. If we are going to define a professional ethic around methods of communication we can’t rule out shouting on street corners just because shouting on street corners can, under the right circumstances, spark a revolution.

      Rather, I think, we have to ask ourselves: are there techniques so insidious that they could light a slow fuse of revolution at a time when people shouting on street corners would just cause people to cross the street and hurry by?

      There is a serious ethical difference between a communication technique that can spark an existing powder keg and one that can turn a peaceful and prosperous society into a powder keg of resentment without anyone noticing what was going on until it was too late.

  2. Scott Abel

    The short answer: No.

    Ethical decisions should take into consideration the aftermath of the actions planned. Ethical decisions are designed to do no harm, involve transparency, take place with willing participants, and are implemented with informed consent. The act of providing personalized content is not unethical itself.

    1. Mark Baker Post author

      Thanks for the comment, Scott. Agreed, an individual act of personalization may not be unethical in itself, or, at least, it may not be immoral in itself. But, as you say, to be ethical, a method must be transparent, and it is difficult to see how personalizing content for individuals is to be made transparent in the general case. Yes, you can often detect it, but to the extent you can detect it, it is less effective. A more effective personalization would not be so obvious, and in not being obvious, would not be transparent, and therefore would be ethically suspect.

      If everyone in my neighbourhood receives the same message and I run out into the street in anger and find that none of my neighbours share my outrage, maybe I go back home and drink a glass of wine to cool off. But if everyone in my neighbourhood receives a different message, each carefully crafted by a personalized content algorithm to make them run out into the street in anger, then suddenly the whole neighbourhood is in flames.

      And if that is the danger, the question has to be, is such a technique ethical?

      As I mentioned in my reply to Chris, this is not about whether the intent is a good one. The same technique might be used to get the whole neighbourhood out to give blood or donate to the food bank. The question is, is there sufficient transparency in the technique, is there sufficient room left for the recipient to think rationally for themselves? Because if not, the technique is unethical even if it is used in a good cause.

      As CS Lewis said of slavery, the objection is not that no man is fit to be a slave, but that no man is fit to be a master. Some techniques should be regarded as unethical not because their use is always wrong but because they give the individual wielding them more power than they are worthy to exercise.

  3. Chris Despopoulos

    Two thoughts.
    First, it’s possible to have ethical, moral, and good personalized content. I’m thinking of tracking a user’s path through a product and recommending content. Or assembling technical content to match the user’s role.

    Second, for the other kind of content… The cat is out of the bag. We have to educate ourselves and arm ourselves. And not everybody will do that. Speaking of Hitler, one thing that was successful for him was the visual design of his rallies. They were awe inspiring by comparison to anything people had seen before. And he duped all the rubes. Well fasten your seat belts, the rubes are taking the wheel again. Sorry… I’m not too optimistic.

    1. Mark Baker Post author

      Thanks for the comment, Chris.

      I agree it is possible to have moral (your intentions are pure) and good (the subject benefits) personalized content. It is not at all the case that all personalization has bad results.

      But ethics is about something more. It is about avoiding the perception that something shady might be going on, even if it isn’t. Thus a broker cannot make himself a loan from his client’s funds, even to pay for his grandmother’s operation, and even if he repays the client with handsome interest. His intentions were pure and the client benefitted from the handsome interest payment, but the action is still unethical because the appearance of conflict of interest and the potential for abuse is so great.

      The fact that the cat is out of the bag is the best part of this whole story. If we had not caught them at it, the potential dangers would be much greater. But to me this is precisely what raises the ethical question. We caught them at it this time. But if they get better at it, we may not catch them at it the next time. That is why I think it is reasonable to ask if the only reliable defence against this possibility is to rule that personalized content is unethical on principle.

      If there is another less restrictive but equally clear ethical guideline that would have the same effect, by all means let’s adopt it and save the beneficial uses of customized content. But if so, what is it?

  4. David Worsick

    I listened to a webcast from the Canadian Broadcasting Corporation where an expert (whose name I forget) had noticed that there was a large portion of undecided American voters. Analysts thought this was due to a wavering group in the middle, but further study showed that these undecided voters weren’t sure how to vote, but they were very sure who they wouldn’t vote for. They weren’t in the middle, they were at the ends of the political spectrum. Guess what the political spectrum looked like in some countries in the 1920s. Extreme red (communism) to the far left and extreme black (fascism) to the far right. It’s intimidating when you hate more than you love.

    1. Mark Baker Post author

      Thanks for the comment, David. I’m not sure that things are as dire today as they were in the 1920s, but it is certainly the case that people seem increasingly entrenched in their own moral certainty and thus both contemptuous and dismissive of the other side. (“Basket of deplorables”, anyone?)

      Personalized content did not create this situation. My sense of it is that a growing smugness on the left triggered a countervailing bombast on the right. But personalized content does not seem like the ideal instrument to combat it with. Truth and reason don’t need to be personalized. And perhaps that is part of the argument for regarding personalized content as unethical: truth and reason don’t need to be personalized. (This is by no means to say that one message should suffice for all. We do genuinely need to tell the same story in different ways for different audiences. But individualized content is not about bringing people to a shared conclusion, but about the individual advantage of either the sender or the recipient.)

  5. Edwin Skau

    It’s surprising that people who were spasming all over big data didn’t see this coming. If data isn’t intended for analysis, inference, and execution, then what is its purpose? If it is, then isn’t customization the first thing on the list?
    I believe, however, that the question of ethics lies in the surreptitious collection of data, and lack of disclosure, nay, intentional concealment.

    1. Mark Baker Post author

      Thanks for the comment, Edwin.

      A restriction on data collection itself would be an even more stringent ethical limitation. It would prevent other uses of data, such as rationalizing product offerings or optimizing public transit schedules.

      Attacking data collection seems like the more obvious target because people somehow have this proprietary feeling about information about themselves, as if, say, my height and my eye color and the fact that I buy ice cream even in January are not public facts that anyone is free to observe and record, but private things that others should discreetly turn their eyes away from.

      I sense that some may find my proposition that customized content is unethical to be extreme, but would still call for severe restrictions on data collection. But restricting data collection is a much more comprehensive idea that would rule out personalized content along with a whole bunch of other things.

      I’m not saying we shouldn’t restrict data collection in this way. There may be equally cogent ethical objections to all the other uses of such data. But those, I feel, have to come from the people working in those fields.

      But it does seem to me that there are unique ethical concerns when it comes to communication. Communication influences behavior, and if we believe in freedom and autonomy for the individual, then we have an ethical obligation to ask if certain means of persuasion are too subtle to be consistent with the freedom and autonomy of their recipients.

      After all, there are already means of persuasion, such as torture or brainwashing, that we deem always unethical, regardless of the behavioral change they are designed to produce.

  6. Mark Giffin

    Excellent food for thought, Mark. You might be interested in the book The Hidden Persuaders by Vance Packard, published in 1957. He talks about similar things in the earlier days of psychological advertising:

    https://en.wikipedia.org/wiki/Vance_Packard

    And here are some lyrics from the Minutemen, an old punk-era Los Angeles band:

    Let the products sell themselves
    F**k advertising, commercial psychology
    Psychological methods to sell should be destroyed
