“As artificial intelligence proliferates, so does concern about its use in areas ranging from criminal justice to hiring to insurance. Many people worry that tools built using “big data” will perpetuate or worsen past inequities, or threaten civil liberties. Over the past two years, for instance, Amazon has been aggressively marketing a facial-recognition tool called Rekognition to law-enforcement agencies. The tool, which can identify faces in real time against databases of tens of millions of images, has raised troubling questions about bias: Researchers at the ACLU and MIT Media Lab, among others, have shown that it is significantly less accurate at identifying darker-skinned women than lighter-skinned men. Equally troubling is the technology’s potential to erode privacy.
Privacy advocates, legislators, and even some tech companies themselves have called for greater regulation of tools like Rekognition. While regulation is certainly important, thinking through the ethical and legal implications of technology shouldn’t happen only after it is created and sold. As projects like Rekognition show, designing and implementing algorithms are far from merely technical matters. To that end, there’s a growing effort at many universities to better prepare future designers and engineers to consider the urgent questions raised by their products by incorporating ethical and policy questions into undergraduate computer-science classes.
“The profound consequences of technological innovation…demand that the people who are trained to become technologists have an ethical and social framework for thinking about the implications of the very technologies that they work on,” said Rob Reich, a political scientist and philosopher who is co-teaching a course called “Computers, Ethics, and Public Policy” at Stanford this year…”
Read more: Fixing Tech’s Ethics Problem Starts in the Classroom | The Nation