Wednesday, May 6, 2020

Distilling the Harms of Automated Decision-Making

Question: Discuss Distilling the Harms of Automated Decision-Making.

Answer: The three identified problems are as follows.

Automated decision-making: The source of this problem is the application itself, which uses information on previous students to recruit high school students. Examples of the harms of automated decision-making include educational discrimination and biased, stereotyped assumptions (Smith 2017). Fairness is at stake because of the unfair effects such a system can have on student recruitment: students may suffer loss of opportunity, economic loss, and social detriment.

Training of the model: The source of this problem is the model, which is trained on previous students' records to evaluate current students' likely future success. Examples of problems with model training include undesired model complexity and cultural differences between cohorts of students (Hardt 2014). Fairness is at stake because the interests of one student cannot predict the future success of another; course suggestions based on prior students' information can negatively affect the futures of current students.

Model analysis: The source of this problem is the analysis of a model trained on previous students' recorded interests. An example of a model-analysis problem is whether the data collected from prior students is actually relevant to the courses suggested to current students (Crawford 2013). Fairness is at stake because the design of the model means it can rely on inaccurate information; errors in student records will give the university inaccurate results when it analyses suggested courses for current students.

References

Crawford, K. 2013. The Hidden Biases in Big Data. [online] Harvard Business Review. Available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data [Accessed 22 Feb. 2018].

Hardt, M. 2014. How Big Data Is Unfair. [online] Medium. Available at: https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de [Accessed 22 Feb. 2018].

Smith, L. 2017. Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making. [ebook] Washington, DC: Future of Privacy Forum, pp. 1-8. Available at: https://fpf.org/wp-content/uploads/2017/12/FPF-Automated-Decision-Making-Harms-and-Mitigation-Charts.pdf [Accessed 22 Feb. 2018].
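The training-of-model problem discussed above can be made concrete with a minimal sketch. All data and names here are hypothetical: a toy predictor is fit on biased records of prior students and, because it learns from historical outcomes rather than individual merit, it simply reproduces that bias for new students.

```python
# Minimal sketch (hypothetical data and model) of the training-of-model
# problem: a predictor fit on biased prior-student records reproduces
# the historical bias when applied to new students.

from collections import Counter

# Hypothetical prior-student records: (group, succeeded?).
# Group "B" was historically under-supported, so its recorded success
# rate is low for reasons unrelated to individual aptitude.
prior_students = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """'Train' by memorising the majority outcome observed per group."""
    outcomes = {}
    for group, succeeded in records:
        outcomes.setdefault(group, []).append(succeeded)
    return {g: Counter(v).most_common(1)[0][0] for g, v in outcomes.items()}

def predict_success(model, group):
    """Predict a new student's success from their group membership alone."""
    return model[group]

model = train(prior_students)

# The model now advises recruiting from group A and rejecting group B,
# encoding the historical bias rather than any individual merit.
print(predict_success(model, "A"))  # True
print(predict_success(model, "B"))  # False
```

Although deliberately simplistic, the sketch shows why "the interest of one student cannot predict the future success of another": any model trained only on prior cohorts inherits whatever unfairness those cohorts' records contain.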
