Regulation of artificial intelligence
From Wikipedia, the free encyclopedia
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.[1][2][3][4][5] The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including in the European Union[6] (which has governmental regulatory power) and in supranational bodies such as the IEEE and the OECD (which do not), among others.[7] Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology.[8] Regulation is considered necessary both to encourage AI and to manage the associated risks. Beyond regulation, organizations that deploy AI need to play a central role in creating and deploying systems in line with the principles of trustworthy AI,[9] and to take accountability for mitigating the risks.[10] Regulation of AI through mechanisms such as review boards can also be seen as a social means of approaching the AI control problem.[11][12]
According to Stanford's AI Index, the annual number of AI-related laws passed in the 127 surveyed countries jumped from one in 2016 to 37 in 2022.[13][14]