For years, professionals across many domains have warned against deploying Artificial Intelligence without ethical guidelines. Some in industry argue that it is too early to write ethics rules for a technology that does not fully exist yet, while others in academia counter that once the technology exists, it will be too late. The term Responsible AI emerged over the past decade to highlight the importance of ethics when building artificial intelligence and to help eliminate algorithmic bias.
Why it matters
Even though Artificial Intelligence is still far from what is portrayed in sci-fi movies, it would be wrong to assume that AI has no impact on our daily lives. Some of that impact, such as Netflix movie recommendations, is fairly trivial, while some of it can be life-changing, such as being fired from your job. In her 2016 book Weapons of Math Destruction and in her appearance on the DataFramed podcast, Cathy O’Neil has detailed several instances in which algorithms have codified and perpetuated inequalities. O’Neil warns that machine learning approaches simply learn the patterns of society as it is today. Because leadership positions are currently filled mostly by men, an algorithm will conclude that a man with the same qualifications as a woman is the better fit for an open leadership role. Similarly, because lower-income people typically take out fewer loans, they will be rated less favorably for a loan than higher-income applicants. O’Neil cautions that blindly deploying algorithms trained on past data perpetuates current injustices. It is for this reason that she founded a consulting company, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), to audit the assumptions a model makes before it goes into production.
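To make the mechanism concrete, here is a minimal sketch of how a model trained on biased historical hiring decisions reproduces that bias. The data is synthetic and every number is invented for illustration; this is not ORCAA's methodology, just a toy demonstration of the pattern O’Neil describes:

```python
# Illustrative sketch: a model trained on historically biased outcomes
# reproduces that bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 1: qualification score (same distribution for both groups)
qualification = rng.normal(0, 1, n)
# Feature 2: gender encoded as 0 = woman, 1 = man
is_man = rng.integers(0, 2, n)

# Historical promotion decisions: driven by qualification, but with a
# built-in advantage for men, mimicking a biased past.
logit = 1.5 * qualification + 1.0 * is_man - 0.5
promoted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([qualification, is_man])
model = LogisticRegression().fit(X, promoted)

# Two candidates with identical qualifications, differing only in gender:
woman = [[1.0, 0]]
man = [[1.0, 1]]
print("P(promote | woman):", model.predict_proba(woman)[0, 1])
print("P(promote | man):  ", model.predict_proba(man)[0, 1])
# The man scores higher purely because the training labels were biased.
```

Nothing in the training step is "wrong" in a technical sense; the model faithfully learns the historical pattern. That is exactly the problem.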
O’Neil is one of the pioneers of Responsible AI, a movement that aims to consider the effects of an algorithm before it is put into production. Thankfully, the movement has caught on, prompting companies such as Accenture and Google to invest in the field.
How Responsible AI applies to marketing
It is easy to assume that Responsible AI considerations do not apply to marketing, since marketing merely helps users decide which product to buy rather than, say, determining their financial future. That assumption would be wrong: marketing can have a very large impact on different communities.
Let’s consider the example of a tobacco company looking to build a model that would increase tobacco sales by identifying new customers. According to the CDC, smokers tend to be less-educated people living in the Midwest who are divorced, separated, or widowed. An algorithm trained to find the people most likely to smoke would label anyone who fits these criteria as a potential customer. Such a model violates the first Responsible AI principle, fairness: it preys on people going through an emotionally difficult period in their lives who may not have the information they need to make an informed decision about smoking. While this is an extreme case, and tobacco marketing is thankfully highly regulated, it shows clearly that a marketing model can affect individuals' lives in a disproportionate and unfair way.
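One concrete way to catch this before launch is a disparate-impact audit: compare the model's targeting rate across demographic groups. Below is a minimal, hypothetical sketch; the data, column names, and outputs are invented, and the "80% rule" used here is a common rule of thumb from employment law, not the only possible fairness test:

```python
# Illustrative fairness audit: compare the model's targeting rate across
# demographic groups using the "80% rule" disparate-impact ratio.
# The group labels and model outputs below are hypothetical.
import pandas as pd

# Hypothetical model outputs: 1 = flagged as a potential customer
scored = pd.DataFrame({
    "marital_status": ["divorced", "married", "divorced", "widowed",
                       "married", "separated", "married", "divorced"],
    "targeted":       [1, 0, 1, 1, 1, 1, 0, 1],
})

# Targeting rate per group
rates = scored.groupby("marital_status")["targeted"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate / highest group rate.
# Values below 0.8 are a common red flag for disproportionate treatment.
ratio = rates.min() / rates.max()
print(f"Disparate-impact ratio: {ratio:.2f}")
```

A ratio well below 0.8, as in this toy data, would signal that the model concentrates its targeting on vulnerable groups and should be reviewed before deployment.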
Therefore, it is everyone's responsibility to make sure that any model created, whether in marketing or any other field, follows the Responsible AI principles of (1) fairness, (2) transparency/explainability, (3) privacy/security, and (4) accountability.
What are some other examples where Responsible AI principles would be crucial in the marketing field?