Foundations of Ethical Algorithms

This page is for the course Foundations of Ethical Algorithms.

References

The course draws on material from the following sources:
**** GWAS privacy. [https://pubmed.ncbi.nlm.nih.gov/18769715/ Homer N, Szelinger S, Redman M, et al. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genet. 2008;4(8):e1000167. Published 2008 Aug 29. doi:10.1371/journal.pgen.1000167]
*** Fairness
**** CACM Review. [https://cacm.acm.org/magazines/2020/5/244336-a-snapshot-of-the-frontiers-of-fairness-in-machine-learning/fulltext Chouldechova and Roth. A Snapshot of the Frontiers of Fairness in Machine Learning. CACM, May 2020]
**** Word embedding. [https://arxiv.org/abs/1607.06520 Bolukbasi, Chang, Zou, Saligrama, Kalai. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.]
**** COMPAS. [https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Machine Bias (ProPublica)] | [https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm How We Analyzed the COMPAS Recidivism Algorithm (ProPublica) by Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin]
**** Hiring bias. [https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias Miranda Bogen. All the Ways Hiring Algorithms Can Introduce Bias. HBR, May 2019]
**** Bias in facial recognition. [https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html Steve Lohr. Facial Recognition Is Accurate, if You’re a White Guy. NYT, Feb 2018]
*** Interpretability
**** CACM Review. [https://cacm.acm.org/magazines/2020/1/241703-techniques-for-interpretable-machine-learning/fulltext Du, Liu, Hu. Techniques for Interpretable Machine Learning. CACM, Jan 2020]
*** 2nd Wave of Algorithmic Accountability
**** [https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53 Julia Powles and Helen Nissenbaum. The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence]
