[GA, passed] - Improving Safety In Deep Learning


Nutmeg The Squirrel


Improving Safety In Deep Learning Development
Category: Regulation | Area of Effect: Safety
Proposed by: Haymarket Riot | Onsite Topic


The World Assembly,
Appreciating that deep learning artificial intelligence automates data processing, improving the efficiency and expanding the scope of data analysis,
Applauding the impact these technologies have and will continue to have on driving the increased productivity of myriad industries and government programs,
Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for natural persons to investigate the origins of algorithmic parameters and outputs,
Concerned at the mounting evidence that as deep learning systems are implemented socially, preconceived biases in handling of data inevitably lead to discrimination in ways that cannot be fully discerned or predicted,
Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,
Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,

  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of neural network(s) with at least one hidden layer.
    2. ‘Deep learning developer’: A natural person involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Developing entity’: Any corporation, government, or individual seeking to label data for, train, and/or release a deep learning system that interacts with the public.
    4. ‘Institutional review board’ (IRB): A group internal to a developing entity composed of individuals from both inside and outside the developing entity who are qualified to make ethical decisions regarding a specific deep learning system.
    5. ‘Discrimination’: Different treatment for similarly situated parties on the basis of race, class, disability status, gender identity, sexual orientation, or caste, in addition to other classes defined by World Assembly or member state law.
  2. Enacts the following:
    1. If member states have technology to achieve deep learning system development, and intend to pursue such development, they shall be required to develop an appropriate comprehensive training and evaluation program for deep learning developer licensure, which may include classes, workshops, and/or seminars.
    2. If a member state has any deep learning systems in its jurisdiction, it will be required to either establish a new agency or designate an existing one as the party responsible for licensure, development, and enforcement of regulations on deep learning system development, hereafter referred to as the ‘regulating agency’.
    3. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require comprehensive training and evaluation in avoiding discrimination and unintended outcomes in deep learning.
    4. Prior to development, developing entities shall submit to all regulating agencies with jurisdiction a project summary. Each regulating agency shall then decide with respect to their respective nation’s policies whether and at which stages an IRB is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
    5. An IRB shall be required if any of the following are true:
      1. The project presents an obvious hazard to the safety, privacy, and/or security of individual persons;
      2. Personal information of natural persons is used in the project without explicit consent given; or
      3. The project is used for the purposes of warfare and/or surveillance.
    6. All deep learning systems operating or being developed at the time of this resolution's passage must also have a project summary submitted for them by their respective developing entities within six months, and may also be subject to an IRB as previously described.
  3. Implements the following standards for IRB oversight:
    1. An IRB may oversee any part(s) of deep learning development, including data processing, algorithmic development, and post-release review.
    2. Concerns raised by an IRB must be adequately addressed by the deep learning developer(s) within six months, or any further deployment, use, or development of that deep learning system shall be suspended by the regulating agency until the concerns are addressed.
    3. IRBs must submit annual summaries, and final summaries where applicable, to the regulating agency.
  4. Forms the World Assembly Deep Learning Consortium (WADLC) as such:
    1. Nations must report on their implementation of deep learning review standards to the WADLC annually.
    2. The WADLC shall be empowered to enforce deep learning review standards within member states, within the boundaries of member state and World Assembly law.
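
For readers unfamiliar with the terminology in clause 1.1, here is a minimal sketch of a network with a single hidden layer, the smallest structure the resolution's definition would classify as a 'deep learning system'. The layer sizes, activation function, and use of NumPy are illustrative assumptions on my part, not anything specified by the resolution.

import numpy as np

rng = np.random.default_rng(0)

# Weights for a 4-input, 8-hidden-unit, 2-output network (sizes are arbitrary).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # hidden layer -> output

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)  # the single hidden layer (ReLU)
    return hidden @ W2 + b2                # raw output scores

print(forward(rng.normal(size=4)))  # e.g. scores for two output classes

Anything with more hidden layers, or stacks of such networks, falls under the same definition; the 'black box' concern in the preamble arises because the learned values in W1 and W2 are not individually interpretable.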
Note: Only votes from TNP WA nations, NPA personnel, and those on NPA deployments will be counted. If you do not meet these requirements, please add (non-WA) or something to that effect to your vote. If you are on an NPA deployment without being formally registered as an NPA member, name your deployed nation in your vote.
Voting Instructions:
  • Vote For if you want the Delegate to vote For the resolution.
  • Vote Against if you want the Delegate to vote Against the resolution.
  • Vote Abstain if you want the Delegate to abstain from voting on this resolution.
  • Vote Present if you are personally abstaining from this vote.
Detailed opinions with your vote are appreciated and encouraged!


For: 7 | Against: 5 | Abstain: 0 | Present: 0
 
Overview
This proposal seeks to address deep learning artificial intelligence and the risks associated with it. It requires those involved in the development and maintenance of these systems to undergo a comprehensive training program, and requires each member state with deep learning systems in its jurisdiction to establish or designate a regulating agency for deep learning. Deep learning developers must obtain a license from their nation's government, and developing entities must submit project summaries to the aforementioned agencies. The proposal also provides for institutional review boards (IRBs), which must convene if a project presents an obvious hazard, uses personal information of natural persons without consent, or is used for the purposes of warfare or surveillance, and it sets standards for IRB oversight. Finally, the proposal establishes the World Assembly Deep Learning Consortium (WADLC) as an international oversight body for deep learning.

Recommendation
We support this proposal, as we believe that corporations and governments are incapable of regulating such a volatile market without proper standards. Deep learning has the potential to cause catastrophic damage alongside its benefits. If safeguards are implemented during development, issues that could cause such damage can be caught before they occur.

For the above reasons, the Ministry of World Assembly Affairs recommends a vote For the GA proposal at vote, "Improving Safety in Deep Learning".


 
Why does this need to be addressed by the GA?
 
Hi, author here, hoping to answer some questions and clear up some confusion.
Why does this need to be addressed by the GA?
This needs to be addressed by the GA because companies and nations themselves are not capable of regulating it adequately without common standards. Deep learning is something that (in real-world cases as well) has the potential to cause catastrophic damage to a variety of social structures before that damage is even detected.

Since deep learning requires data to be labelled for training, usually by sapients, the best way to catch these issues before they happen is during development, by implementing safeguards. Otherwise, societal failures amounting to massive and widespread human rights violations and hazards to safety are almost guaranteed at some point, through no direct fault of any one person. To me, that is the point where it becomes the job of an international body to address those points of failure. This is also why I've decided to focus specifically on oversight of deep learning systems that use personal information, surveil, wage war, or otherwise present a serious potential hazard.
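
As a concrete illustration of what a development-stage safeguard might look like, the sketch below checks a model's positive-prediction rates across groups and flags disparities before release. The data, tolerance, and function name are hypothetical choices of mine for illustration; nothing in the resolution prescribes this particular check.

import numpy as np

def demographic_parity_gap(predictions, groups):
    # Largest gap in positive-prediction rate between any two groups.
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical audit data: binary model outputs and a protected attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b", "b", "a", "a", "b"])

gap = demographic_parity_gap(preds, groups)
if gap > 0.2:  # illustrative tolerance a review board might set
    print(f"Flag for review: parity gap {gap:.2f} exceeds tolerance")
else:
    print(f"Within tolerance: parity gap {gap:.2f}")

An IRB convened under clause 2.5 could decide which attributes to audit and what tolerance to require before deployment.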

Implementing licensing requirements also helps to address this on an individual level, since any person working in deep learning will then be more likely, in the course of their work, to be conscious of these issues and vigilant about the possibility of systemic failure.
I really don't think this is well written. They have to go around AI coexistence and end up getting nowhere.
This is a misread of the intent of my proposal; I did not write this proposal around AI coexistence. It has nothing to do with GA#354 except that both refer to machine learning as a process, and it covers none of the same purview. Respectfully, I'm curious from what reading you conclude that implementing standards for mitigating the worst effects of deep learning "does nothing", as you say.
 
Against. There seems to be a big loophole here: under 2.1 & 2.2, Member States are required to develop "an appropriate comprehensive training and evaluation program for deep learning developer licensure" and a licensing agency, but the WA has no say in whether a given Member State's program/licensing department is actually "appropriate," as under 4.2 the WADLC can only enforce the review standards subject to the Member State's laws. It seems to me that this leads to a result where a Member State makes an incredibly minimal law for its licensing program, all deep learning projects move to that Member State so they don't have to do much of anything to get approval, and the WADLC can't do anything about it. Additionally, since under 3.1 an IRB's oversight is a "may" instead of a "shall," an IRB could be appointed that would just rubber-stamp any project. With no default definition of what constitutes an "appropriate comprehensive training and evaluation program," and no mandatory enforcement of rules under an IRB, this seems somewhat toothless.
 
4.2 says it can enforce standards subject to member state AND World Assembly law. Presumably, it can enforce the standards set by itself as World Assembly law.

I frankly wouldn't see a point in changing the 'may' to a 'shall'. Per 2.4, the regulating agency would presumably require an IRB only for the parts of the project it considers appropriate; if it considers no part of the project appropriate for IRB oversight, then no IRB need convene, and changing a 'may' to a 'shall' would not change that. The 'may' still places the appropriate parts of a project within IRB oversight, subject to this resolution's standards. A strict prescription via 'shall' also defeats the purpose of the 'may', in that it removes the flexibility needed to review deep learning across a variety of project types, including ones which may not include all project stages or where some stages are benign.

As for a definition of 'appropriate comprehensive training and evaluation', I'm really not sure there needs to be one beyond what I have outlined for the purposes of ensuring increased safety. Artificial intelligence is used across a variety of industries, and specifying all possible standards for all possible industries would lengthen the resolution unnecessarily. I'd prefer to leave that to national jurisdiction. People could enforce that in bad faith, but that could be done with any standard in a resolution, and at that point it's kind of turtles all the way down.
 
Abstain.
There are some concerns which have already been expressed by others here. I would like the Author to work on the clauses and their wording. I appreciate the intent though.
 

Ok, let me re-read this.
 
The General Assembly resolution "Improving Safety In Deep Learning Development" was passed 10,334 votes to 1,987, and implemented in all WA member nations.
 