[GA, out of queue] - Improving Safety In Deep Learning Development

Status
Not open for further replies.

Simone


Improving Safety In Deep Learning Development
Category: Regulation | Strength: Safety
Proposed by: Haymarket Riot | Onsite Topic

The World Assembly,

Appreciating that deep learning artificial intelligence automates data processing in a manner that improves the efficiency of previously difficult or impossible data analysis,

Applauding the impact these technologies are already having and will continue to have on driving the increased productivity of myriad industries and government programs,

Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for humans to investigate the origins of algorithmic parameters and outputs,

Concerned at the mounting evidence that as deep learning systems are implemented, preconceived biases in human inputting, labeling, and pre-processing of data inevitably lead to discrimination where artificial intelligence is socially applied in ways that cannot be fully discerned or predicted,

Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,

Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,



  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of neural network(s) with at least one hidden layer.
    2. ‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Developing entity’: Any corporation, government, or individual seeking to label data for, train, and/or release a deep learning system that interacts with the public.
    4. ‘Institutional review board’: A group internal to a developing entity composed of individuals from both inside and outside the developing entity who are qualified to make ethical decisions regarding a specific deep learning system.
  2. Enacts the following:
    1. If member states have technology to achieve deep learning system development, and intend to pursue such development, they shall be required to develop an appropriate comprehensive training and evaluation program for deep learning developer licensure, which may include classes, workshops, and/or seminars.
    2. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require comprehensive training and evaluation in avoiding discrimination and unintended outcomes in deep learning.
    3. Prior to development, developing entities shall submit to their nation’s government a project summary. The government shall then decide whether and at which stages an institutional review board is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
    4. All deep learning systems actively operating or being developed at the time of this resolution's passage must also have a project summary submitted for them by their respective developing entities within six months of this resolution's passage, and may also be subject to an institutional review board as previously described.
  3. Implements the following standards for institutional review board oversight:
    1. An institutional review board may oversee any or all of the following steps in deep learning system development: data processing, algorithmic development, and post-release review.
    2. Concerns raised by an institutional review board must be adequately addressed within six months, or any further deployment, use, or development of that deep learning system shall be suspended by that nation’s government until the concerns are addressed.
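The threshold in article 1.1 — a neural network with at least one hidden layer — is the narrowest system the resolution would regulate. As a minimal sketch of what meets that definition (the layer sizes, weights, and ReLU activation below are purely illustrative assumptions, not anything specified by the resolution):

```python
def dense(x, weights, biases):
    """One fully connected layer: each output unit is a weighted
    sum of the inputs plus a bias. `weights` is one list of input
    weights per output unit."""
    return [sum(xi * w for xi, w in zip(x, unit)) + b
            for unit, b in zip(weights, biases)]

def relu(v):
    """Standard rectified-linear activation, applied elementwise."""
    return [max(0.0, a) for a in v]

def forward(x, hidden_w, hidden_b, out_w, out_b):
    """A network with exactly one hidden layer -- the minimum that
    would qualify as a 'deep learning system' under article 1.1."""
    h = relu(dense(x, hidden_w, hidden_b))  # the single hidden layer
    return dense(h, out_w, out_b)           # linear output layer

# Hypothetical weights: 2 inputs -> 3 hidden units -> 1 output.
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5]]
out_b = [0.2]
y = forward([1.0, 2.0], hidden_w, hidden_b, out_w, out_b)
```

Anything simpler (no hidden layer, e.g. plain linear regression) would fall outside the definition, which is why critics below note that the scope turns entirely on this one clause.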
Note: Only votes from TNP WA nations, NPA personnel, and those on NPA deployments will be counted. If you do not meet these requirements, please add (non-WA) or something to that effect to your vote. If you are on an NPA deployment without being formally registered as an NPA member, name your deployed nation in your vote.
Voting Instructions:
  • Vote For if you want the Delegate to vote For the resolution.
  • Vote Against if you want the Delegate to vote Against the resolution.
  • Vote Abstain if you want the Delegate to abstain from voting on this resolution.
  • Vote Present if you are personally abstaining from this vote.
Detailed opinions with your vote are appreciated and encouraged!

For: 0 | Against: 0 | Abstain: 0 | Present: 0
 
There are significant concerns on the gameside forum over rushing the resolution (it was in draft for only two and a half weeks), and over some rather jarring writing. I will need to read this in detail before I render judgment.
 
Against. It seems this proposal does not do much, and what it does do, it may as well leave to member states.

Member states have to license deep learning developers, but the standards for licensure, and what those standards are meant to achieve, are left to member states, beyond avoiding discrimination (on what grounds? Any grounds?) and unintended outcomes. Review boards can be set up if member states want, when member states want, and are constituted however member states want. A board, if extant, oversees development and can require that concerns be "adequately" addressed, but the standard for raising concerns, for determining whether they have been addressed, and for deciding who makes that determination (the board? The member state?) all seem to be left open.

It doesn’t seem to me there is really any effort to prescribe international standards, to co-ordinate member states’ approaches, or even to encourage member states to co-ordinate them. It leaves me wondering what the point of the resolution is.
 
Against

Same thoughts: it doesn't do any of what it sets out to do, and ends up being yet another review-board-generating resolution.
 