文武双全
Dec 12, 2018


The first step is to recognize that the AI is incapable of making a truth claim. Any specific correlation should be ignored in favor of the overall predictive power of the model. In most cases it is possible to compare the actions recommended by the previous prediction method with the predictions a given algorithm would have made if it had been presented with the same information. Once the algorithm is deployed, profit (or whatever the target variable happens to be) is usually the sole metric of how well it is working. A system like this can always be refined by removing key inputs, or changing their weights, and observing the effect on the overall predictive power of the system.

It is very important not to extrapolate backward from a machine learning model and try to make truth claims about the natural world. Confusion on this point is dangerous, and it leads to disasters like the racially biased parole algorithms recently used in certain states. At the time that was probably an accident, though still a tragedy for those affected. Today, failing to notice that an entire system has been taken over by one questionable variable would, in my opinion, be negligence.
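As a rough illustration of the refinement step described above (removing inputs and observing the effect on overall predictive power), here is a minimal ablation sketch using scikit-learn. The dataset, feature names, and model choice are all assumptions made for the example, not anything from the original system.

```python
# Minimal ablation sketch: drop one input at a time and re-measure
# overall predictive power. Dataset, features, and model are assumed
# purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for whatever inputs the real system uses.
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=5, random_state=0)
feature_names = [f"input_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=200, random_state=0)
baseline = cross_val_score(model, X, y, cv=5).mean()
print(f"baseline predictive power: {baseline:.3f}")

# Remove each input in turn and observe the change in predictive power.
# A large drop means the system leans heavily on that one variable --
# exactly the situation described above, where one questionable input
# can quietly take over the whole model.
for i, name in enumerate(feature_names):
    X_ablated = np.delete(X, i, axis=1)
    score = cross_val_score(model, X_ablated, y, cv=5).mean()
    print(f"without {name}: {score:.3f} (change {score - baseline:+.3f})")
```

The point of the sketch is only that the check stays at the level of predictive power: it says which inputs the model depends on, not whether those correlations reflect anything true about the world.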
