AI NOW Recommendations on AI Regulation
- Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
- Facial recognition and affect recognition need stringent regulation to protect the public interest.
- The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
- AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
- Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
- Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
- Technology companies must go beyond the “pipeline model” and commit to addressing practices of exclusion and discrimination in their workplaces.
- Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
- More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
- University AI programs should expand beyond computer science and engineering disciplines.
AI NOW Concerns
- The AI accountability gap is growing
- AI is amplifying widespread surveillance
- Governments are rapidly expanding the use of automated decision systems without adequate protections for civil rights
- Rampant testing of AI systems “in the wild” on human populations
- The limits of technological fixes to problems of fairness, bias, and discrimination
- The move to ethical principles without accountability mechanisms