This page is for the course Foundations of Ethical Algorithms.
* Week 1: Introduction
** References
*** Privacy
**** [https://dataprivacylab.org/projects/identifiability/paper1.pdf L. Sweeney, Simple Demographics Often Identify People Uniquely. Carnegie Mellon University, Data Privacy Working Paper 3. Pittsburgh 2000.]
**** Netflix Prize. [https://www.cs.cornell.edu/~shmat/shmat_oak08netflix.pdf Arvind Narayanan and Vitaly Shmatikov, How To Break Anonymity of the Netflix Prize Dataset] | [https://www.cs.cornell.edu/~shmat/netflix-faq.html FAQ]
**** GWAS privacy. [https://pubmed.ncbi.nlm.nih.gov/18769715/ Homer N, Szelinger S, Redman M, et al. Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays. PLoS Genet. 2008;4(8):e1000167. doi:10.1371/journal.pgen.1000167]
*** Fairness
**** CACM review. [https://cacm.acm.org/magazines/2020/5/244336-a-snapshot-of-the-frontiers-of-fairness-in-machine-learning/fulltext Chouldechova and Roth, A Snapshot of the Frontiers of Fairness in Machine Learning, CACM, May 2020]
**** Word embeddings. [https://arxiv.org/abs/1607.06520 Bolukbasi, Chang, Zou, Saligrama, and Kalai. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.]
**** COMPAS. [https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Machine Bias (ProPublica)] | [https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm How We Analyzed the COMPAS Recidivism Algorithm, by Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin (ProPublica)] (see the sketch after this list)
**** Hiring bias. [https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias Miranda Bogen, All the Ways Hiring Algorithms Can Introduce Bias, HBR, May 2019]
**** Bias in facial recognition. [https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html Steve Lohr, Facial Recognition Is Accurate, if You’re a White Guy, NYT, Feb 2018]
*** Interpretability
**** CACM review. [https://cacm.acm.org/magazines/2020/1/241703-techniques-for-interpretable-machine-learning/fulltext Du, Liu, and Hu. Techniques for Interpretable Machine Learning. CACM, Jan 2020]
*** 2nd Wave of Algorithmic Accountability
**** [https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53 Julia Powles and Helen Nissenbaum, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence]
**** [https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/ Frank Pasquale, The Second Wave of Algorithmic Accountability]
**** [https://dl.acm.org/doi/abs/10.1145/3375627.3375839 Frank Pasquale. 2020. Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20)]
**** [https://boingboing.net/2019/12/04/fundamental-critique.html Doctorow, Second Wave Algorithmic Accountability: From "What should algorithms do?" to "Should we use an algorithm?", BoingBoing, Dec 2019]
* Privacy
** References
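As a minimal illustration of the ProPublica COMPAS analysis cited above, the sketch below compares false positive rates of a binary "high risk" prediction across two groups. This is not ProPublica's actual code, and all labels and predictions are made up for illustration.

<syntaxhighlight lang="python">
# Hedged sketch: group-wise false positive rates, the disparity at the
# center of the COMPAS debate. Data below is entirely hypothetical.

def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives (y_true == 0) flagged as positive."""
    flagged = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(flagged) / len(flagged) if flagged else 0.0

# Hypothetical data: 1 = reoffended / predicted high risk, 0 = otherwise.
groups = {
    "group_a": ([0, 0, 1, 0, 1, 0], [1, 0, 1, 1, 1, 0]),
    "group_b": ([0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 0]),
}

for name, (y_true, y_pred) in groups.items():
    print(f"{name}: FPR = {false_positive_rate(y_true, y_pred):.2f}")
# group_a: FPR = 0.50, group_b: FPR = 0.00 -- the kind of error-rate
# disparity ProPublica reported, which can coexist with a score that is
# calibrated overall.
</syntaxhighlight>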
== References ==
The course draws on material from several sources:
* The book The Algorithmic Foundations of Differential Privacy by Cynthia Dwork and Aaron Roth (see the sketch below)
* The Science of Data Ethics (UPenn), taught by Michael Kearns and Kristian Lum
* Ethics in Data Science (Utah), taught by Suresh Venkatasubramanian and Katie Shelef
* Foundations of Fairness in Machine Learning (UW), taught by Jamie Morgenstern
* Explainable AI in Industry: Practical Challenges and Lessons Learned (ACM FAT* 2020 Tutorial)
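As a taste of the Dwork and Roth material, here is a minimal sketch of the Laplace mechanism applied to a counting query. The dataset and the epsilon value are hypothetical, chosen only for illustration.

<syntaxhighlight lang="python">
# Hedged sketch of the Laplace mechanism formalized in the Dwork-Roth book:
# answer a counting query with Laplace noise of scale sensitivity / epsilon.
import numpy as np

def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release true_count with epsilon-differential privacy.

    A counting query changes by at most 1 when any one record is added
    or removed, so its sensitivity is 1 and the noise scale is 1 / epsilon.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [34, 45, 29, 61, 40, 52]                 # hypothetical records
true_answer = sum(1 for a in ages if a >= 40)   # "how many people are 40+?" -> 4
print(laplace_count(true_answer, epsilon=0.5))  # noisy answer; varies per run
</syntaxhighlight>

Smaller epsilon means larger noise and stronger privacy; the noisy count is released in place of the exact one.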