Lessons from Rite Aid AI Technology Ban

“Lessons in life are repeated until learned” - Unknown

For those not familiar with this news, earlier this week the FTC banned Rite Aid from using its AI-based facial recognition technology. As per the press release on the FTC’s website, titled “Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards”:

“FTC says Rite Aid technology falsely tagged consumers, particularly women and people of color, as shoplifters; Ban will last five years”

There are many lessons to be learned from this debacle. I highlighted one aspect of the failure in this week’s episode of “Edge AI Bytes”, but the lessons are plentiful. This article reviews several of them through the lens of people, processes, and technology.

Technology

At the core of this fiasco was poor data. That poor data, coupled with inadequate model training and algorithm design, led to a model that was essentially broken. For a solution intended to be used this way, extensive validation and testing, followed by a long parallel run before production, are necessary. None of this was done.

I am not sure who the two vendors that helped Rite Aid design this solution were, but the press release reads like multiple rookie mistakes were made. This fiasco exemplifies the fact that designing practical and feasible solutions goes beyond following a “cookbook.” Most data science curricula include one project that teaches you to build an image recognition app; the real world presents a whole different set of challenges. Let us start with data. Remember how I said that once you have built a clean data foundation, you are 75% there? Some data-related blunders highlighted in the press release are:

Biased data: “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in plurality-Black and Asian communities than in plurality-White communities”

An essential, standard practice in data science is to look for such biases before you even start building the model. Fortunately, such biases can be detected and mitigated so that an algorithm trained on the data does not inherit them.
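To make this concrete, below is a minimal sketch, in Python, of the kind of disaggregated error audit that would have surfaced the disparity the FTC describes. The column names, demographic groupings, sample values, and tolerance are illustrative assumptions, not Rite Aid’s actual data or schema:

```python
import pandas as pd

# Hypothetical reviewed alert log: one row per match alert, labeled after
# human review, with the store's plurality demographic group attached.
# All names and values here are illustrative.
alerts = pd.DataFrame({
    "store_group": ["plurality_black", "plurality_black", "plurality_white",
                    "plurality_white", "plurality_asian", "plurality_asian"],
    "is_false_positive": [1, 1, 0, 0, 1, 0],
})

# Disaggregated error analysis: false positive rate per community group.
fpr_by_group = alerts.groupby("store_group")["is_false_positive"].mean()
print(fpr_by_group)

# Flag a disparity if the worst group's rate exceeds the best group's
# by more than a chosen tolerance (the 5% figure is illustrative).
TOLERANCE = 0.05
disparity = fpr_by_group.max() - fpr_by_group.min()
if disparity > TOLERANCE:
    print(f"Bias alert: {disparity:.0%} false-positive-rate gap across groups")
```

An audit like this, run before training and again against the deployed system, turns “look for bias” from a slogan into a concrete gate.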

There is a reason so many companies now claim to help you build AI solutions: the assumption is that if you follow a certain series of steps, you end up with a working model. Seasoned and creative data scientists would naturally pay attention to these nuances. This is also a lesson in choosing the right vendor. Everyone is talking about AI, but very few can do it right.

Poor data quality: “The use of low-quality images in connection with its facial recognition technology, increasing the likelihood of false-positive match alerts”

If the problem statement for the model had been clearly documented, this project would have been abandoned as soon as the data quality proved unsatisfactory. A certain error level is acceptable if I am using facial recognition to observe customer shopping patterns in a smart store. For a use like the one described in the press release, you need a significantly higher level of accuracy, and that means very high data quality.
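For illustration, a basic quality gate on the images feeding such a matcher is cheap to build. Here is a minimal sketch using OpenCV’s variance-of-Laplacian blur measure; the resolution and sharpness thresholds are illustrative assumptions and would need calibration against the matcher’s measured error rates:

```python
import cv2

MIN_SIDE = 224       # illustrative minimum resolution for a usable face image
MIN_SHARPNESS = 100  # illustrative variance-of-Laplacian blur threshold

def image_is_usable(path: str) -> bool:
    """Reject images too small or too blurry to enroll or match reliably."""
    img = cv2.imread(path)
    if img is None:
        return False  # unreadable file
    h, w = img.shape[:2]
    if min(h, w) < MIN_SIDE:
        return False  # low-resolution images inflate false matches
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= MIN_SHARPNESS
```

A gate this simple, applied at enrollment time, directly attacks the “low-quality images” failure the FTC calls out.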

Now, let us review learning opportunities in algorithm design. Even the “cookbook” approach was not followed properly in this case.

Disregarding Data Science Ops 101 - Model Documentation, Validation, and Testing: “Did not test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying it, including failing to seek any information from either vendor it used to provide the facial recognition technology about the extent to which the technology had been tested for accuracy.”

Many of us fail to realize that building innovative models also means leveraging some art: the art of research and creativity beyond the science of the algorithm. As highlighted before, an initial analysis of model usage and feasibility should have alerted Rite Aid that this model was a no-go for this specific use.

Then, even if the data science professionals were forced to work with poor data, they had plenty of opportunities to flag the model’s inadequacy for this specific usage during validation and testing. This is an ethical issue as well. I would bet that even a rookie data science professional could have seen the fallacy of using this model for such a sensitive application. They were probably coerced by the vendor to ship the product, and that decision has come back to bite both the vendor and Rite Aid. It seems we underestimate the value of courage in all professions and overrate compliance. A courageous data scientist could have spared both the vendor and the client this pain and embarrassment.
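For a sense of how small an ask that validation was, here is a minimal sketch of a pre-deployment gate. The `model.match` interface, the test-pair format, and the acceptable false positive rate are hypothetical stand-ins; the point is that the check is a page of code, not a research program:

```python
# Minimal sketch of a pre-deployment validation gate. The matcher interface
# and threshold below are illustrative assumptions, not any vendor's API.

MAX_ACCEPTABLE_FPR = 0.001  # illustrative: sensitive uses demand a tight bound

def validate_before_deploy(model, test_pairs):
    """test_pairs: list of (image_a, image_b, same_person: bool) tuples,
    labeled by human reviewers. Blocks release if the measured false
    positive rate exceeds the documented acceptable bound."""
    false_positives = 0
    negatives = 0
    for img_a, img_b, same_person in test_pairs:
        predicted_match = model.match(img_a, img_b)
        if not same_person:
            negatives += 1
            if predicted_match:
                false_positives += 1
    if negatives == 0:
        raise ValueError("Test set must contain non-matching pairs")
    fpr = false_positives / negatives
    print(f"False positive rate: {fpr:.4%} over {negatives} negative pairs")
    assert fpr <= MAX_ACCEPTABLE_FPR, "Model fails the deployment gate"
```

If the model cannot clear a threshold appropriate to the sensitivity of the use, the release stops there, and the report becomes the documentation the FTC found missing.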

Disregarding Data Science Ops 101 - Pre-production Testing and Post-deployment Monitoring: “[Failed to] regularly monitor or test the accuracy of the technology after it was deployed, including by failing to implement or enforce any procedure for tracking the rate of false positive matches or actions that were taken based on those false positive matches.”

As highlighted earlier, this model should have been tested extensively pre-production, against the same live data feeds it would face in this type of usage, and that may mean months. As a data science professional, you take pride in your product. Given the sensitive nature of the usage, you would have made sure that an extensive pre-production run, and ongoing accuracy monitoring after go-live, were included in the SOW.
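And once in production, the tracking procedure the FTC found missing can be as simple as a rolling-window counter over reviewed outcomes. Here is a minimal sketch, assuming employees can flag a “bad match” as the press release describes; the window size and alert threshold are illustrative:

```python
from collections import deque

class FalsePositiveMonitor:
    """Track reviewed match outcomes over a rolling window and raise an
    alert when the false positive rate drifts above an acceptable bound.
    Window size and threshold below are illustrative assumptions."""

    def __init__(self, window: int = 500, max_fpr: float = 0.01):
        self.outcomes = deque(maxlen=window)  # True = confirmed false positive
        self.max_fpr = max_fpr

    def record(self, was_false_positive: bool) -> None:
        """Call once per employee-reviewed match alert."""
        self.outcomes.append(was_false_positive)
        if len(self.outcomes) < self.outcomes.maxlen:
            return  # wait for a full window before judging the rate
        fpr = sum(self.outcomes) / len(self.outcomes)
        if fpr > self.max_fpr:
            self.alert(fpr)

    def alert(self, fpr: float) -> None:
        # In practice: page the owning team and suspend alerts pending review.
        print(f"ALERT: rolling false positive rate {fpr:.1%} exceeds limit")
```

Feeding every employee-reported “bad match” into a tracker like this is exactly the enforcement procedure the complaint says never existed.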

Processes

Some process aspects have already been covered in bits and pieces in the technology section. Some of the key ones are:

Vendor selection process: To me, at the core of this fiasco are shady “AI” vendors. AI is not everyone’s cup of tea, regardless of whatever primary technology-related core competency a vendor may have; it is much more than misleading marketing and sales pitches. Hence, a clear process-related gap is the lack of a robust vendor selection process for advanced technology initiatives. This is also why I insist that the “relationship” networking approach to vendor selection is perilous in today’s rapidly evolving technology landscape. As the FTC highlights:

The complaint alleges the company conducted many security assessments of service providers orally, and that it failed to obtain or possess backup documentation of such assessments, including for service providers Rite Aid deemed to be “high risk.”

Data and information security procedures: There were also evident failures in this domain, several of which have already been listed. For an initiative like this, a common-sense approach would have captured every such possibility in a risk review document early in the project. An example of a failure highlighted by the FTC:

“Failing to adequately implement a comprehensive information security program.”

Customer management process failures: From using customer images without consent to improper interactions with customers based on the results of the “AI” tool, process failures abounded:

• Customers were not notified that their biometric information was enrolled in a database connected to a biometric security or surveillance system.
• Customer complaints about actions taken against consumers based on an automated biometric security or surveillance system were not addressed.
• Customers were not notified that facial recognition or other biometric surveillance technology was in use in its stores.

People

Last, but most important: the right people would have stopped this in its tracks very early on, whether that meant the right set of data scientists or the “Rite” set of Rite Aid employees (pun intended!). Even after this disastrous solution was implemented, the employees could have averted the fallout. The FTC highlights that Rite Aid failed to:

Adequately train employees tasked with operating facial recognition technology in its stores and flag that the technology could generate false positives. Even after Rite Aid switched to a technology that enabled employees to report a “bad match” and required employees to use it, the company did not take action to ensure employees followed this policy.

This is a classic example of why I keep on highlighting that a successful solution is a combination of people, processes, and technology.

