
Event Information

Registration has closed.

2018 UN IGF Live Stream: Algorithmic Transparency and the Right to Explanation (Conference Room G)

※ The following content is sourced from the UN IGF forum: https://www.intgovforum.org/multilingual/content/igf-2018-ws-421-algorithmic-transparency-and-the-right-to-explanation
IGF session video recording: https://www.intgovforum.org/multilingual/content/igf-2018-day-1-salle-ix-ws421-algorithmic-transparency-and-the-right-to-explanation
IGF 2018 WS #421 Algorithmic transparency and the right to explanation
Format:

Break-out Group Discussions - 60 Min

Organizer 1: Alex Comninos, Association for Progressive Communications
Organizer 2: Deborah Brown, Association for Progressive Communications

Speaker 1: Jelena Jovanovic, Technical Community, Eastern European Group
Speaker 2: Vidushi Marda, Civil Society, Asia-Pacific Group
Speaker 3: Alex Comninos, Civil Society, African Group

Additional Speakers:

Siyabonga Africa, Media Development Investment Fund, Hacks/Hackers Johannesburg, Civil Society, Technical Community, AG

Moez Chakchouk, Assistant Director-General for Communication & Information Sector, UNESCO, International Organisations, AG

Lorena Jaume-Palasi: Founder of AlgorithmWatch, Civil Society, WEOG.

Joy Liddicoat: Researcher, University of Otago & Vice President, InternetNZ, Civil Society, Academia, WEOG.

Karen Reilly: Project/grants/infrastructure manager, Technical Community, WEOG.

Dr. Nicolo Zingales: Lecturer in competition and information law at the University of Sussex, Academia, WEOG.

INVITED/UNCONFIRMED:

Chinmayi Arun, Berkman Klein Center, Harvard University, and Assistant Professor, National Law University Delhi, Academia, A-PG.

Malavika Jayaram, Executive Director, Digital Asia Hub, Civil Society, A-PG.

Not all panelists and participants are confirmed due to funding and scheduling uncertainty.

 
Relevance:

How do individuals seek recourse when they are affected by automated decisions? What are the implications for justice when automated decision-making, such as Artificial Intelligence (AI), Machine Learning (ML), or Deep Learning (DL), or an automated script or piece of software, is involved in making or influencing a decision that has a legal or similarly significant effect on a person? Under the EU General Data Protection Regulation (GDPR), individuals or "data subjects" have a "right to explanation" with regard to the reasons behind automated decisions that could significantly affect them. This "right to explanation" arises from a combination of rights afforded to data subjects under the GDPR. In particular, Article 22 states that "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her". Article 22 is further interpreted by the Article 29 Data Protection Working Party in its Guidelines on Automated Individual Decision-making.

Issues discussed will include:

Algorithmic bias: If algorithms affect our lives, it is important that they are free of bias, that they are impartial, and that they are transparent and understandable.

Algorithmic transparency and the right to explanation: When people are affected by algorithms, there must be an ability to explain why an algorithm has made a decision. How is this achieved in practice when the effects of code are hard to understand, and much automation happens behind proprietary "black boxes" of obscured code?
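To make the "right to explanation" concrete, here is a minimal sketch, in Python, of what an explainable automated decision could look like. It is purely illustrative: the feature names, weights, threshold, and the decide_with_explanation function are invented for this example, not taken from any real scoring system. The point is that a transparent model can return, alongside each decision, the contribution of every input to the outcome, which is exactly what a proprietary "black box" withholds.

# A hypothetical, human-auditable scoring model. All weights,
# feature names, and the threshold are invented for illustration.
WEIGHTS = {
    "income": 0.4,
    "years_employed": 0.3,
    "missed_payments": -0.8,
}
BIAS = 0.1
THRESHOLD = 0.5  # decisions scoring at or above this are approved


def decide_with_explanation(applicant: dict) -> dict:
    """Score an applicant and return the decision together with a
    per-feature breakdown, so the affected person can see why."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "decision": "approved" if score >= THRESHOLD else "refused",
        "score": round(score, 3),
        # List the most influential factors first.
        "explanation": sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        ),
    }


if __name__ == "__main__":
    applicant = {"income": 0.6, "years_employed": 0.5, "missed_payments": 0.4}
    print(decide_with_explanation(applicant))
    # Prints a refusal with "missed_payments" as the dominant negative factor.

A linear model like this is trivially explainable; the policy difficulty the session addresses is that deployed systems are often non-linear and proprietary, so the same per-decision breakdown is much harder to obtain.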

 
Session Content:

1. Introduction to the issues by the speakers (25 minutes)
In twenty minutes (five minutes per speaker), the speakers will introduce the problems posed by automated or algorithmic decision-making from a human rights perspective. Algorithmic justice, algorithmic bias, and algorithmic transparency will be introduced as concepts, and the technical, legal, and human rights issues will be posed.

2. Break-out group discussion I (25 minutes)
Groups will ask how algorithms affect their lives and identify problems that algorithms could cause for them (15 minutes). Groups will report back (10 minutes).

3. Break-out group discussion II (25 minutes)
Groups will discuss technical and policy solutions to ensure that algorithms can provide a right to explanation (15 minutes). Groups will report back (10 minutes).

4. Panel discussion of the groups' responses (10 minutes)
The speakers will respond to the report-backs and the issues raised by the groups.

5. Questions from the audience to the panelists (15 minutes)

 
Interventions:

- Alex Comninos and Deborah Brown (APC) will be the moderators.
- Jelena Jovanovic (cyber security professional) will provide an overview of the concepts of algorithmic transparency, algorithmic justice, and algorithmic bias, with real-life examples of the effects of algorithms from an information security perspective.
- Vidushi Marda (Article 19) will provide an overview of the human rights aspects of automated decision-making, focusing on GDPR Article 22 and the EU guidelines on automated decision-making. She will provide a policy and human rights perspective.

Agenda:

Part 1: Lightning talks - 25 minutes
- Each speaker gives a "lightning talk" of at most two minutes on their specific area of intervention/expertise.

Part 2: Breakaway group discussion - 20 Minutes
- Breakaway groups discussing different aspects of algorithmic transparency
- The remote participants will organise an online breakaway group
- Someone from each group volunteers to act as rapporteur

Part 3: Report back from breakaway group discussions - 10 Minutes
- Rapporteurs report back and display their flip charts
- The remote participants' online group reports back
- Some panelists take notes and document the discussion in order to create an outcome document for the event.

Part 4: Questions - 5-10 minutes

Wrap-up with questions and interventions from the audience and remote participants.

 
  • Supervising authority:

    National Communications Commission (國家通訊傳播委員會)

  • Executing organisation:

    Science & Technology Law Institute, Institute for Information Industry (資策會科技法律研究所)

  • Dates:

    Monday 12 November to Wednesday 14 November 2018

  • Venue:

    Internet Governance Support Platform Project, Conference Room G
    (18F, No. 216, Sec. 2, Dunhua S. Rd., Da'an Dist., Taipei City, inside the Science & Technology Law Institute)

  • Contacts:

    Ms. Shirley Weng, (02) 6631-1157, shirleyweng@iii.org.tw
    Ms. Pei-Lin Teng, (02) 6631-1172, peilinteng@iii.org.tw
    Mr. Chia-I Cheng, (02) 6631-1071, chiaicheng@iii.org.tw