Improving Safety In Deep Learning Development
Category: Regulation | Area of Effect: Safety
Proposed by: Haymarket Riot | Onsite Topic
Note: Only votes from TNP WA nations, NPA personnel, and those on NPA deployments will be counted. If you do not meet these requirements, please add (non-WA) or something to that effect to your vote. If you are on an NPA deployment without being formally registered as an NPA member, name your deployed nation in your vote.

The World Assembly,
Appreciating that deep learning artificial intelligence automates data processing, improving the efficiency and expanding the scope of data analysis,
Applauding the impact these technologies have and will continue to have on driving the increased productivity of myriad industries and government programs,
Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for natural persons to investigate the origins of algorithmic parameters and outputs,
Concerned at the mounting evidence that as deep learning systems are implemented socially, preconceived biases in handling of data inevitably lead to discrimination in ways that cannot be fully discerned or predicted,
Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,
Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,
- Defines the following for the purposes of this resolution:
- ‘Deep learning system’: A machine learning system composed of neural network(s) with at least one hidden layer.
- ‘Deep learning developer’: A natural person involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
- ‘Developing entity’: Any corporation, government, or individual seeking to label data for, train, and/or release a deep learning system that interacts with the public.
- ‘Institutional review board’ (IRB): A group internal to a developing entity composed of individuals from both inside and outside the developing entity who are qualified to make ethical decisions regarding a specific deep learning system.
- ‘Discrimination’: Different treatment for similarly situated parties on the basis of race, class, disability status, gender identity, sexual orientation, or caste, in addition to other classes defined by World Assembly or member state law.
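For readers unfamiliar with the terminology, the ‘deep learning system’ definition above (a machine learning system with at least one hidden layer) can be illustrated with a minimal sketch. This toy network is not part of the resolution text; the weights are arbitrary and chosen purely for illustration, and a real system would learn them from data.

```python
import math

def relu(x):
    # Standard rectified-linear activation for hidden units.
    return max(0.0, x)

def forward(inputs, w_hidden, w_out):
    # One hidden layer between input and output: the minimum structure
    # that qualifies as a 'deep learning system' under the definition above.
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    # Single sigmoid output unit, yielding a value between 0 and 1.
    z = sum(w * h for w, h in zip(w_out, hidden))
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary illustrative weights: 2 inputs -> 3 hidden units -> 1 output.
w_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
w_out = [0.7, -0.5, 0.2]

print(forward([1.0, 2.0], w_hidden, w_out))
```

The ‘black box’ concern raised later in the preamble follows from exactly this structure: once many such layers and millions of learned weights are stacked together, no individual parameter has an inspectable meaning.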
- Enacts the following:
- If a member state possesses the technology to develop deep learning systems and intends to pursue such development, it shall establish a comprehensive training and evaluation program for deep learning developer licensure, which may include classes, workshops, and/or seminars.
- If a member state has any deep learning systems in its jurisdiction, it shall either establish a new agency or designate an existing one as the party responsible for licensure, development, and enforcement of regulations on deep learning system development, hereafter referred to as the ‘regulating agency’.
- Deep learning developers shall be licensed under nation-specific laws, and obtaining licensure shall require comprehensive training and evaluation in avoiding discrimination and unintended outcomes in deep learning.
- Prior to development, developing entities shall submit to all regulating agencies with jurisdiction a project summary. Each regulating agency shall then decide with respect to their respective nation’s policies whether and at which stages an IRB is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
- An IRB shall be required if any of the following are true:
- The project presents an obvious hazard to the safety, privacy, and/or security of individual persons;
- Personal information of natural persons is used in the project without explicit consent given; or
- The project is used for the purposes of warfare and/or surveillance.
- All deep learning systems operating or being developed at the time of this resolution's passage must also have a project summary submitted for them by their respective developing entities within six months, and may also be subject to an IRB as previously described.
- Implements the following standards for IRB oversight:
- An IRB may oversee any part(s) of deep learning development, including data processing, algorithmic development, and post-release review.
- Concerns raised by an IRB must be adequately addressed by the deep learning developer(s) within six months, or any further deployment, use, or development of that deep learning system shall be suspended by the regulating agency until the concerns are addressed.
- IRBs must submit annual summaries, and final summaries where applicable, to the regulating agency.
- Forms the World Assembly Deep Learning Consortium (WADLC) as such:
- Nations must report on their implementation of deep learning review standards to the WADLC annually.
- The WADLC shall be empowered to enforce deep learning review standards within member states, within the boundaries of member state and World Assembly law.
Voting Instructions:
- Vote For if you want the Delegate to vote For the resolution.
- Vote Against if you want the Delegate to vote Against the resolution.
- Vote Abstain if you want the Delegate to abstain from voting on this resolution.
- Vote Present if you are personally abstaining from this vote.
| For | Against | Abstain | Present |
| --- | --- | --- | --- |
| 7 | 5 | 0 | 0 |