Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI (PDF)
Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence
The Institute for Ethical AI in Education
An overview of current trends in regulating AI in different regions, and a discussion of the key ethical issues in establishing fair and inclusive regulatory systems at the global level - The United Nations Educational, Scientific and Cultural Organization (UNESCO)
Stanford Encyclopedia of Philosophy
The Alan Turing Institute
The use of AI tools in academic work must be open and transparent. It is also important to consider the reliability and quality of their output - see How to Use AI Critically. There are broader social and political concerns surrounding the development and use of AI as well. These concerns are outlined on this page in alphabetical order rather than in order of relevance.
AI tools aggregate information and artwork without the creators' knowledge or permission. It is also difficult to know exactly which works have been aggregated, because the algorithms these tools use are currently very opaque. There is a lack of clarity around copyright law and how it deals with AI-generated work (see: George R.R. Martin and other authors sue OpenAI for copyright infringement and AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit). In the UK music industry, for example, there are concerns over copyright, the mimicking or impersonation of real singers (deepfakes), and debates about original creativity.
Some AI tools can be used to create images or audio known as 'deepfakes', which are fictional representations of real people. Deepfakes are usually made without the knowledge or permission of the real person being represented in the fictional image or audio file. Most deepfakes are pornographic in nature, but some also target high-profile politicians or activists (see: What are deepfakes – and how can you spot them?). As well as constituting a danger for women, deepfakes are considered a threat to democratic dialogue about political processes and debates, as fake news and disinformation make it harder to differentiate fact from fiction (see: AI and deepfakes blur reality in India elections). Established news and media organisations use tools to detect fake information; one example is the BBC's Verify.
AI tools consume a worrying amount of energy to keep running (see: Warning AI industry could use as much energy as the Netherlands). They also consume a great deal of water, a finite resource, to keep the machines from overheating (see: AI Is Accelerating the Loss of Our Scarcest Natural Resource).
It may appear as if AI tools are kept running purely by machines, but they actually require a great deal of ongoing human input to be usable. This is often underemphasised by the companies who own AI tools, possibly to discourage closer examination of their unethical treatment of workers (see: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic).
Tools such as ChatGPT are still in development and are being "trained" by their developers on information from across the internet, including information about individuals. Read the terms and conditions and check whether you can opt out of providing personal information, if you prefer. Tools may have been released before their reliability was tested and assessed, and before regulations were made regarding privacy and the protection of private information (see: Samsung tells employees not to use AI tools like ChatGPT, citing security concerns).
Important
It is best not to upload personal or confidential information when using any generative AI tool. If you are asking a tool to summarise material, be aware that you cannot upload copyrighted material. Check the terms and conditions, as your input might be used to train the AI tool you are using. Have a look at "Navigating the terms and conditions of generative AI", compiled by Sheppard, P. from the National Centre for AI, JISC hub.
"The Copyright Office has launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training."
The Use of Generative AI and AI-assisted Technologies in Writing for Elsevier
AAPI Data is a nationally recognized publisher of demographic data and policy research on Asian Americans and Pacific Islanders, with hundreds of news mentions in national and local outlets.
AI4ALL is a US-based nonprofit dedicated to increasing diversity and inclusion in AI education, research, development, and policy.
The AI Now Institute aims to produce interdisciplinary research and public engagement to help ensure that AI systems are accountable to the communities and contexts in which they're applied.
AJL's mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.
Data for Black Lives is a movement of activists, organizers, and scientists committed to the mission of using data to create concrete and measurable change in the lives of Black people.
We integrate research in the humanities, social sciences, computer and data sciences to understand and address online polarization, abusive language, discriminatory algorithms and mis/disinformation.
The institute is a space for independent, community-rooted AI research, free from Big Tech's pervasive influence.
Feminist AI™ works to put technology into the hands of makers, researchers, thinkers and learners to amplify unheard voices and create more accessible AI for all.
We create spaces where intergenerational BIPOC and LGBTQIA+ womxn and non-binary folks can gather to build tech together that is informed by our cultures, identities and experiences.
We engage with intersectional feminism to spotlight our stories, inventions, designs and leadership, and to co-create more equitable futures.
The Indigenous Protocol and Artificial Intelligence (A.I.) Working Group develops new conceptual and practical approaches to building the next generation of A.I. systems.
The working group is interested in the following questions:
• From an Indigenous perspective, what should our relationship with A.I. be?
• How can Indigenous epistemologies and ontologies contribute to the global conversation regarding society and A.I.?
• How do we broaden discussions regarding the role of technology in society beyond the largely culturally homogeneous research labs and Silicon Valley startup culture?
• How do we imagine a future with A.I. that contributes to the flourishing of all humans and non-humans?
The Women in AI Ethics™ (WAIE) is a fiscally sponsored project of Social Good Fund, a California nonprofit corporation and registered 501(c)(3) organization with a mission to increase recognition, representation, and empowerment of brilliant women in this space who are working hard to save humanity from the dark side of AI.
Source: https://guides.libraries.emory.edu/AI/ethics