Unjust Algorithms
Injustices arising from artificial intelligence (AI) have rapidly worsened in recent years. Algorithmic decision-making systems are used more than ever by organizations, educational institutions, and governments looking for ways to gain insight and make predictions. The Free Software Foundation (FSF) is working through this issue, and its many scenarios, to be able to say useful things about how it relates to software freedom. Our call for papers on Copilot was a first step in this direction.
Though complex, these are still proprietary software systems that integrate AI. Often they are algorithmic systems in which only the inputs and outputs can be viewed: such a system is trained on a selected body of data, after which information goes in and a verdict comes out, but what led to that conclusion is unknown. This makes AI systems difficult to understand, even for the people who wrote the code.
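To make that black-box property concrete, consider a minimal sketch in Python. Everything here (the class, field names, weights, and threshold) is invented for illustration: the point is that a proprietary system exposes only an input-to-output surface, so a person affected by its verdict cannot examine the data or logic that produced it.

    # Hypothetical stand-in for a vendor's opaque decision system.
    class ProprietaryScoringModel:
        def __init__(self):
            # In a real product, these parameters come from training on
            # data the vendor selected; they are never published or
            # auditable. The values here are made up.
            self._weights = {"income": 0.4, "zip_code": -0.2, "age": 0.1}

        def predict(self, applicant: dict) -> str:
            # Only this input -> output surface is visible to the public.
            score = sum(self._weights.get(key, 0.0) * value
                        for key, value in applicant.items())
            return "approve" if score > 10.0 else "deny"

    model = ProprietaryScoringModel()
    print(model.predict({"income": 30.0, "zip_code": 5.0, "age": 40.0}))
    # A verdict comes out, but the applicant has no way to learn why:
    # the training data, weights, and threshold are all proprietary.

With free software, the weights and decision rule above would be open to inspection; with a black box, the affected person sees only "approve" or "deny."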
These systems (referred to as black box systems) may have the potential, or even the intent, to do good, but technology is not objective, and at the FSF we believe that all software should be free. Governments, in particular, have a responsibility to demand that the software they use be free, and the public has a right to that software. The scale at which the increased use of artificial intelligence is affecting people's lives is immense, making this matter of computational sovereignty all the more urgent.