Discrimination, Privacy Rights and Technology

Privacy Rights (Matthias 2020)

The problems of gender and racial bias in our information systems are complex because the models designed to put data to use are created by small groups of people and then scaled up to users around the globe. The data science and artificial intelligence (AI) fields are dominated largely by elite white men (D'Ignazio and Klein 2020), leading to the creation of systems that frequently overlook the needs of people of different skin tones, genders, and backgrounds.

For example, it has been reported that face-detection systems typically recognize lighter skin better than darker skin (D'Ignazio and Klein 2020), making them less accurate for large parts of the population even as they are widely deployed. And although some suggest solving this problem by expanding and diversifying the system's training data, that solution opens the door to another issue: nonconsensual data collection.
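To make that claim concrete, the kind of audit behind such reports boils down to comparing detection rates across demographic groups. Below is a minimal sketch in Python; the detector, data, and group labels are all invented for illustration and are not drawn from any cited study:

```python
from collections import defaultdict

def detection_rate_by_group(samples, detect_face):
    """Compute the face-detection rate for each demographic group.

    `samples` is a list of (image, group_label) pairs, and `detect_face`
    is any callable returning True when it finds a face in the image.
    """
    found = defaultdict(int)
    total = defaultdict(int)
    for image, group in samples:
        total[group] += 1
        if detect_face(image):
            found[group] += 1
    return {group: found[group] / total[group] for group in total}

# Toy stand-in detector and data, purely hypothetical: this "detector"
# succeeds more often on the "lighter" group, mimicking the kind of
# skew that audits of real systems have reported.
def toy_detector(image):
    return image["brightness"] > 0.4

samples = (
    [({"brightness": 0.8}, "lighter")] * 95 +
    [({"brightness": 0.3}, "lighter")] * 5 +
    [({"brightness": 0.8}, "darker")] * 65 +
    [({"brightness": 0.3}, "darker")] * 35
)

rates = detection_rate_by_group(samples, toy_detector)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} detected")
```

Running this prints roughly 95% for one group and 65% for the other; a gap like that, measured on real images and real models, is exactly the disparity the audits describe.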

Almost 10 years ago, Edward Snowden leaked documents describing the NSA (National Security Agency) surveillance program and how it harvested faces from images found on the internet, in social media, in email, and elsewhere, without people's consent (Risen and Poitras 2014). No one really knows how much data was collected without permission over the years, and beyond the violation of privacy rights itself, one can only imagine the purposes and applications of the data collected, and how it could be or was used in ways that disregarded human rights.

Considering how inseparable AI, information systems, and machine learning have become from our daily lives, the potential for biased systems to amplify racism and discrimination keeps growing, creating deeper social and humanitarian problems instead of solving them and making people's lives easier, especially when such technologies are controlled by a narrow group of people. What will that mean for minorities, or even for ordinary people outside these powerful organizations?

For instance, Huawei has been involved in testing an AI facial recognition system that could identify members of the Uighur ethnic minority and send a "Uighur alarm" to the police, helping authorities detect and locate this group specifically. Although both the company and the Chinese government denied that the technology was actually deployed, claiming it was built for testing purposes only, tech experts argue that such expensive systems are never built without a purpose, and they fear the spread of ethnically discriminatory systems to oppressive regimes around the world (Harwell and Dou 2020).

It is mind-blowing that elite tech companies are willing to create such systems in the first place, knowing their dangerous potential. Perhaps it is because only a few groups and organizations worldwide have the money and power to create and facilitate data collection and analysis at this scale, and their primary motive is profit, regardless of ethics or humanity.

References

D’Ignazio, Catherine, and Lauren Klein. 2020. “1. The Power Chapter.” Data Feminism, March. https://data-feminism.mitpress.mit.edu/pub/vi8obxh7/release/4.

Harwell, Drew, and Eva Dou. 2020. “Huawei Tested AI Software That Could Recognize Uighur Minorities and Alert Police, Report Says.” Washington Post, December 8, 2020. https://www.washingtonpost.com/technology/2020/12/08/huawei-tested-ai-software-that-could-recognize-uighur-minorities-alert-police-report-says/.

Matthias. 2020. “Data Retention – the Fight Continues.” Tutanota. July 9, 2020. https://tutanota.com/blog/posts/data-retention/.

Risen, James, and Laura Poitras. 2014. “N.S.A. Collecting Millions of Faces from Web Images.” The New York Times, May 31, 2014, sec. U.S. https://www.nytimes.com/2014/06/01/us/nsa-collecting-millions-of-faces-from-web-images.html.