Refreshing: Updating Models
Understand the ethics of how often we update the models behind AI/ML products.
Power of machine learning
When we think about the amazing power we have as humans, the complex brain operations we employ for things such as weighing different choices or deciding whether or not we can trust someone, we may find it hard to believe that machines could ever do even a fraction of what our minds can do. Most of us make choices, selections, and judgments without fully understanding the mechanisms that power those experiences.
However, when it comes to ML, we can often understand the underlying mechanisms that power certain determinations and classifications, with the notable exception of neural networks. We love the idea that ML can mirror our own ability to come to conclusions, and that we can employ our critical thinking skills to make sure the process is as free from bias as possible.
Ethical considerations for AI/ML
The power of AI/ML allows us to automate repetitive, boring, uninspiring tasks. We'd rather have content moderators, for instance, be replaced with algorithms so that humans don't have to suffer through flagging disturbing content on the internet on a daily basis. However, ML models, for all their wonderful abilities, aren't able to reason the way we can. Automated systems that are biased or that degrade over time can cause a lot of harm when they're deployed in a way that directly impacts humans and when that deployment isn't closely and regularly monitored for performance. The harm that can be caused at scale, across all live deployments of AI/ML, is what keeps ethicists and futurists up at night.
Note: An example of bias could be an automated loan approval system that, over time, starts to favor certain demographics unfairly, leading to discriminatory lending practices. ...
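To make the idea of monitoring a deployment for degradation concrete, here is a minimal sketch in Python. All names, thresholds, and window sizes are illustrative assumptions, not a real production system: it checks a model's accuracy over consecutive windows of predictions and flags when recent performance drops well below a baseline, signaling that the model may need refreshing.

```python
# Hypothetical sketch of performance monitoring for a deployed model.
# The baseline, tolerance, and window size are illustrative assumptions.

def rolling_accuracy(predictions, labels, window=100):
    """Yield the accuracy of each consecutive window of predictions."""
    for start in range(0, len(predictions), window):
        preds = predictions[start:start + window]
        truth = labels[start:start + window]
        correct = sum(p == t for p, t in zip(preds, truth))
        yield correct / len(preds)

def needs_refresh(predictions, labels, baseline=0.90, tolerance=0.05, window=100):
    """Return True if any window's accuracy falls well below the baseline."""
    return any(acc < baseline - tolerance
               for acc in rolling_accuracy(predictions, labels, window))
```

In practice, a check like this would run continuously against labeled samples of live traffic; a `True` result would alert the team that the model is degrading and is due for retraining or replacement.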