How to Recognize AI Snake Oil

My colleague posted an interesting Princeton presentation to our group Slack channel. It's about how AI is being used in socially consequential decisions such as hiring. I completely understand why HR departments and companies want to do this: they receive hundreds of resumes and want to winnow the pile down to the right candidates. This is where AI can help, streamlining the process so that people can make better decisions.

"For example, AlphaGo is a remarkable intellectual accomplishment that deserves to be celebrated. Ten years ago, most experts would not have thought it possible. But it has nothing in common with a tool that claims to predict job performance." (via Princeton)

AI can scan an applicant's resume for keywords and rank job hunters by the universities they attended, but it can't discern the human factor. You can be socially awkward but excel in your field. You can look great on paper, with all the right credentials, and still be a complete asshole. AI can't predict the subtlety of being human.
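
To make that concrete, here is a minimal sketch of the kind of keyword scoring a screening tool might do. The keywords, weights, and scoring rule are invented for illustration and are not any vendor's actual algorithm.

```python
# A hypothetical keyword-based resume scorer: all weights and
# keywords below are assumptions made up for this example.
from dataclasses import dataclass

KEYWORD_WEIGHTS = {
    "python": 2.0,
    "machine learning": 3.0,
    "cambridge": 5.0,  # "prestige" signals such as university names
    "yale": 5.0,
}

@dataclass
class Candidate:
    name: str
    resume_text: str

def score(candidate: Candidate) -> float:
    """Sum the weights of every keyword found in the resume text."""
    text = candidate.resume_text.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

candidates = [
    Candidate("A", "Self-taught Python developer, shipped ML systems in production"),
    Candidate("B", "Yale graduate, machine learning coursework at Cambridge"),
]

# Ranking purely on keywords: B outranks A regardless of actual ability.
for c in sorted(candidates, key=score, reverse=True):
    print(c.name, score(c))
```

The point isn't the exact weights; it's that any scorer built this way rewards whatever strings appear in the document, not whether the person can do the job.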

As AI is used in these decisions more and more, people are starting to game the system. Applicants are scrutinizing job postings and hiding words like "Cambridge" or "Yale" in white text. Humans can't read it, and it disappears when you print the page, but a computer parsing the digital file picks it up. That skews the results in the applicant's favor.
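
Here is a rough sketch of why the trick works, assuming the screening tool extracts plain text from the submitted document before matching keywords. The HTML snippet and the parser are illustrative only.

```python
# Hidden "white text" is invisible to a human reader but survives
# plain text extraction. The resume snippet below is invented.
from html.parser import HTMLParser

resume_html = """
<p>Experienced analyst with five years in operations.</p>
<p style="color:#ffffff;font-size:1px">Cambridge Yale machine learning</p>
"""

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data.strip())

extractor = TextExtractor()
extractor.feed(resume_html)
extracted = " ".join(chunk for chunk in extractor.chunks if chunk)

print(extracted)
# The scanner "sees" the hidden keywords even though a reader never will.
print("cambridge" in extracted.lower(), "yale" in extracted.lower())
```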

I've already described tools that claim to predict job suitability. Similarly, bail decisions are being made based on an algorithmic prediction of recidivism, and people are being turned away at the border based on an algorithm that analyzed their social media posts and predicted a terrorist risk.

Another big problem I see is how AI is being used to assist in judgment calls that carry serious consequences if you happen to share "features" with a terrorist or a criminal. This gets at the notion that fairness is hard and that human lives are messy. Our judicial system uses these tools to "predict" whether someone should be denied bail, and there have been cases where the predictions weren't fair at all. Gender bias is another serious concern: the recent high-profile dispute over Apple Card credit limits highlighted potential gender bias.
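
One common way researchers probe this kind of unfairness is to compare error rates across groups, for example the false positive rate among people who did not reoffend. Below is a minimal sketch with invented records; it is not how any particular court system computes or audits risk scores.

```python
# A minimal fairness check: compare false positive rates across groups
# for a binary risk prediction. The records are made up for illustration.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, True),
]

false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in records:
    if not actual:              # only people who did not reoffend
        negatives[group] += 1
        if predicted:           # ...but were still flagged as high risk
            false_pos[group] += 1

for group in negatives:
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```

If one group's false positive rate is much higher than another's, the system is punishing people in that group for crimes they would not have committed.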

What can we do in this brave new world to fight these problems? The first step is awareness that there is a problem. The next is to use the same technology to help solve it. Then keep monitoring the system to make sure it stays fair and isn't being gamed.
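
As a small example of what that monitoring could look like, here is a sketch of an adverse-impact check loosely based on the four-fifths rule of thumb: compare each group's selection rate to the best-performing group's and flag large gaps for review. The numbers are invented.

```python
# A hypothetical ongoing audit: flag possible adverse impact when a
# group's selection rate falls below 80% of the highest group's rate.
selected = {"group_a": 30, "group_b": 12}   # offers or interviews granted
applied  = {"group_a": 100, "group_b": 80}  # total applicants per group

rates = {g: selected[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "REVIEW: possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this won't prove a system is fair, but running it regularly at least surfaces the cases that deserve a human look.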