AI systems already pose significant risks, including harmful malfunctions, discrimination, erosion of social connection, invasions of privacy, and disinformation. Training and deploying AI systems can also involve copyright infringement and worker exploitation.
AI systems can make mistakes if applied inappropriately. For example:
Furthermore, the use of AI systems can make it harder to detect and address flawed processes. People tend to place excessive trust in the outputs of computer systems. Moreover, because most AI models are effectively black boxes and AI systems are more likely than human processes to be shielded from court scrutiny, mistakes can be hard to prove.
AI systems are trained on data that can reflect existing societal biases and problematic structures, producing systems that learn and amplify those biases.
Even AI systems without inherent biases may exacerbate discriminatory practices, depending on the societal context in which they are deployed. For example, unequal access to AI knowledge and skills could further entrench inequality.
Two effects can drive greater polarisation of opinions, reducing social connection:
AI systems are often used to process personal data in unexpected or undesirable ways. For example: