Machine learning applications affect our daily lives in many ways: air travel, the military, education, finance, health care, autonomous vehicles, traffic management, social-media news feeds, and facial recognition. As global citizens, it’s important that we understand how these algorithms might affect us.
Are these algorithms ethical? Some necessarily raise ethical issues. For example, an autonomous vehicle may face a version of the trolley problem: choosing an action when every available outcome is bad (especially when each harms innocent humans!).
A finance application may use AI to determine whether someone gets a loan. Is that determination fair? And what precisely do we mean by fairness in this example?
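One common (and contested) definition of fairness is demographic parity: different applicant groups should be approved at similar rates. The sketch below checks this for invented loan decisions; the group data, approval numbers, and the 80% threshold applied here are illustrative assumptions, not a real lending model.

```python
# Hypothetical demographic-parity check for a loan-approval model.
# All decision data below is invented for illustration.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are 0 or 1)."""
    return sum(decisions) / len(decisions)

# Invented approval decisions for two applicant groups (1 = approved).
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved

rate_a = approval_rate(group_a)      # 0.625
rate_b = approval_rate(group_b)      # 0.375

# The "80% rule" from U.S. employment-discrimination guidance is one
# rough threshold: the lower group's rate should be at least 80% of
# the higher group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: {rate_a:.3f} vs {rate_b:.3f}; ratio {ratio:.2f}")
print("passes 80% rule" if ratio >= 0.8 else "fails 80% rule")
```

Note that demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), and they cannot all be satisfied at once in general — which is exactly why the question "what do we mean by fairness?" must be answered explicitly.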
These are all questions that developers of AI need to answer. The power and promise of AI is that it lets computers make decisions that affect human lives quickly and efficiently. But that power must be matched by a corresponding responsibility: the AI’s authors should be clear about what data was used to train the system, what data it takes as inputs (including where that data comes from), what parameters the model defines, and how sensitive its outcomes are to changes in these attributes. Only when we as a society understand these factors can we be comfortable relinquishing important decisions to AI.
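The last of those attributes — how sensitive outcomes are to changes in the inputs — can be probed directly. The sketch below nudges each input to a toy scoring model by 1% and reports how the score moves; the model, its weights, and the applicant record are all invented for illustration, not a real credit-scoring formula.

```python
# Hypothetical sensitivity check: perturb each input to a toy scoring
# model and observe the change in the output score.

def credit_score(income, debt, years_employed):
    # Toy linear model with made-up weights -- for illustration only.
    return 0.5 * income - 0.8 * debt + 2.0 * years_employed

applicant = {"income": 60.0, "debt": 20.0, "years_employed": 5.0}
base = credit_score(**applicant)   # 0.5*60 - 0.8*20 + 2.0*5 = 24.0

# Increase each input by 1% in turn and record the score change.
for name, value in applicant.items():
    nudged = dict(applicant, **{name: value * 1.01})
    delta = credit_score(**nudged) - base
    print(f"{name:15s} +1% -> score change {delta:+.3f}")
```

A disclosure like this — which inputs dominate the outcome, and by how much — is the kind of transparency the paragraph above calls for.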