
Artificial Intelligence Literacy: Basics

A continuously updated guide to the appropriate use of AI-generated content in study and research

Ethical Implications of Generative AI

The widespread adoption of GenAI raises numerous ethical questions, including questions about the moral responsibilities of AI developers and corporations, the environmental impact of AI use, and the potential erosion of public trust if AI is deployed in harmful ways. Ensuring ethical AI development and usage requires a critical analysis of its interconnected societal and ecological implications. This work involves addressing issues such as privacy, accountability, integrity, intellectual property, bias, and human labour.

Although GenAI presents significant opportunities for innovation in higher education, it is essential to critically examine and keep up-to-date on ethical concerns related to honesty, fairness, accountability, ownership, privacy, security, equity, and access.

Impact of Generative AI

Academic and Research Integrity

The rapidly growing use of GenAI by the academic community—including students, faculty, and staff—has amplified conversations about the importance of academic integrity and the need to ensure that GenAI tools are used appropriately.

Academic integrity, a cornerstone of post-secondary work, is defined by the International Centre for Academic Integrity as a commitment to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage. These values are essential for upholding the quality of education and research within academia and ensuring that the work produced serves the broader public good with integrity and reliability. The antithesis of academic integrity is "academic misconduct," which can be defined simply as "cheating." The University of Saskatchewan defines academic misconduct with examples in the Handbook.

As universities continue to adapt to the widespread availability and popularity of GenAI, students, faculty, and staff must keep lines of communication open and familiarize themselves with developing and evolving policies and guidelines. Policies and guidelines regarding the ethical use of GenAI in studying, completing assignments, and conducting and disseminating research can be issued by an instructor, supervisor, department, college, program, unit, division, publisher, collaborative entity, or institution. 

Some key considerations for appropriate use of GenAI

  1. Attribution and acknowledgment:  Clearly state when Generative AI has been used in assignments or research, giving credit where it is due (see citing AI).
  2. Avoiding plagiarism: Refrain from using Generative AI to produce text or content that is presented as one's own work, and carefully check for instances of accidental plagiarism.
  3. Respecting intellectual property rights: Adhere to copyright laws and other regulations when using Generative AI for projects or assignments (see the Indigenous Knowledge and Copyright tabs in this section of the guide).
  4. Responsible use of data: Exercise caution when using personal or sensitive data in conjunction with Generative AI, and ensure adherence to relevant privacy regulations.
  5. Seeking guidance from instructors and supervisors: Consult with professors and supervisors to understand their expectations and any limitations they may set regarding the use of Generative AI in coursework and research projects.

How can you prepare?

Refresh your understanding of academic and research integrity

GenAI and User Data

Datafication is the process of transforming all aspects of our lives, including our behaviours and attributes, into measurable digital data that can be collected, stored, analyzed, and used for various purposes.

Datafication is driven by the widespread use of digital technologies, sensors, and connected devices, raising important questions about privacy, ethics, and security. To ensure responsible data collection, usage, and protection, we need appropriate legislation to establish safeguards and guidelines. However, GenAI users should be mindful that in many jurisdictions, online privacy legislation may be absent, incomplete, or out-of-date, so they may have limited recourse to combat bad actors and cyber criminals who use data for nefarious activities.

Specific privacy concerns include
  • data theft: data taken without consent,
  • data persistence: data outliving the humans who produced it,
  • data overreach: data used beyond the purposes for which consent was originally given,
  • data spillage: data collected on persons who were not the original target, and
  • data reidentification: data that has been un-anonymized.

Privacy Considerations in the Age of GenAI

GenAI systems are often referred to as a ‘black box,’ meaning that there is uncertainty about how user data and inputs are used by the companies that own these systems. Assumed uses include

  • additional system training where user data and inputs may be used to further train and refine the AI models,
  • consumer profiling where companies analyze user data to create profiles of their users so they can tailor their services and provide personalized experiences, and
  • customer service training where customer service teams study user data to improve support and service quality.

However, these and other uses may not be clearly disclosed to users, including whether information will be shared with or sold to third parties.

Given the lack of transparency around how data is used, users should be very cautious about the information they share with these systems and should not share any sensitive data (for example, personal information, proprietary or copyrighted material, or health information).

Security Considerations in the Age of GenAI

From a security perspective, GenAI increases the potential for creating realistic scams. While malicious actors and cybercriminals have long used the internet and other technologies to deceive, manipulate, or dox individuals, the emergence of GenAI makes it even easier for them to cause harm through more sophisticated and convincing tactics.

For instance, GenAI can enable the production of highly convincing voice clones that mimic a target individual, which can then be used in various scams. These voice clones may be used in romance scams, where a scammer pretends to be a potential romantic partner and tricks the victim into sending money or personal information. Similarly, job scams can involve impersonating a hiring manager or recruiter to gain access to sensitive information or even request payment for fake job opportunities.

GenAI's ability to produce large volumes of convincing content rapidly poses an increased threat as it enables cybercriminals to scale up their malicious activities. As a result, individuals, businesses, and governments must adapt by implementing enhanced security measures, such as multi-factor authentication and increased user awareness and education.

Copyright, Fair Use, and Creative Commons

Generative AI models rely heavily on large amounts of data during their training process and draw on that same material when producing outputs, often in response to human prompts but without meaningful oversight. This lack of oversight makes it essential to consider the ethical implications related to copyright and the rights of creators that may have been ignored during the training of these models.

Copyright law is partly designed to protect the rights of creators and their ability to control how their works are used and distributed. Using copyrighted material without permission or compensation can undermine these rights and harm creators through a disruption in income, misrepresentations or distortions of their original work, or damage to their reputation and integrity.

However, copyright also aims to protect user rights and maintain a rich public sphere where ideas and creative works can easily be accessed, shared, and discussed. Overly restrictive practices can significantly impede access, stifle innovation, and limit the development of new technologies. Sometimes, this means providing exceptions and limitations to copyright, such as fair use and fair dealing, which allow users to access, use, and sometimes even modify copyrighted works under certain conditions.

Responsible and Ethical Use

While AI models can learn from open resources and the public domain, the ethical implications of using other copyrighted materials in generative AI models depend on the specific context and circumstances. The goal should be a balance that carefully considers, as well as respects, the rights of creators and users. In the context of a university setting, using GenAI responsibly means considering the following factors:

  1. Respect for creators' rights: Strive to use AI models that primarily draw from open resources and the public domain, reducing the risk of copyright infringement.
  2. Balancing user rights and access: Promote practices that support both innovation and access while ensuring the rights of creators are respected.
  3. Contextual considerations: Evaluate specific cases of GenAI use to determine whether it aligns with ethical, legal, and department/university expectations.
  4. Supporting AI literacy: Raise awareness about the importance of copyright and the ethical use of GenAI among students, faculty, and staff.

Understanding copyright

Bias and Misinformation - Why Should We Care?

Generative AI is known to reproduce biased content in its outputs. These biases are inherited from its training data. In other words, if the training data contains biases—such as stereotypes or unequal representations of certain groups—these biases are learned and perpetuated by the AI.

AI systems can, therefore, amplify existing societal biases, which can result in unfair treatment and discrimination in various applications, from hiring practices to law enforcement. To address and mitigate biases, developers must carefully select training data and continuously monitor and adjust algorithms. Users must critically evaluate AI outputs to ensure transparency and fairness.

In the context of generative AI, various forms of bias can be manifested, including societal, data, and algorithmic biases.

Societal bias

Refers to the broader societal and cultural prejudices existing in the real world, which can be reflected and perpetuated in AI systems. For instance, societal biases surrounding gender roles or racial stereotypes can be unwittingly incorporated into AI applications, leading to discriminatory outcomes.

Data bias

Refers to the large datasets that GenAI is trained on. If the data contains biases, such as under-representing or overrepresenting certain groups, the AI can learn and perpetuate these biases. For instance, if a facial recognition system is trained using predominantly lighter-skinned individuals, it may be less accurate when identifying individuals with darker skin tones.

Algorithmic bias

Refers to the underlying algorithms and techniques used to develop AI models that introduce or amplify existing biases. This happens when developers fail to consider diverse perspectives, cultural contexts, or ethical considerations during the development process.

Addressing these biases involves carefully selecting and balancing training data, critically evaluating AI outputs, and making necessary adjustments to ensure fairness and accuracy.

Learn more about misinformation and how to protect yourself

GenAI is rapidly disrupting many aspects of society, including, but not limited to, education, labour and manufacturing, science and technology, arts and entertainment, the environment, and political life. The scale of this disruption is still uncertain, but there has been much discussion around both the potential benefits of GenAI and real concerns about it.

Capabilities and Limitations of Using GenAI as a Studying or Learning Aid

Potential Benefits vs. Real Concerns

  • Benefit: Time savings and efficiencies from automating repetitive tasks and streamlining workflows, leading to enhanced productivity and reduced human error.
    Concern: Over-reliance on AI can devalue human creativity and increase homogenization, leading to the loss of important skills and decision-making capabilities as tasks become automated.
  • Benefit: Personalized instruction and feedback can lead to improved learning outcomes and a more targeted approach to skill development.
    Concern: Privacy concerns arise when personal data is collected and analyzed to provide tailored experiences, and biased data could result in unfair or inaccurate outcomes, exacerbating existing inequalities.
  • Benefit: Language translation and editing can facilitate communication and collaboration across linguistic barriers.
    Concern: Oversimplification and misinterpretation of cultural nuances can lead to miscommunication, and the digital divide can exclude certain groups or individuals from these benefits.
  • Benefit: Coaching and individualized support in areas such as mental health, wellness, and career development can lead to more effective interventions.
    Concern: Over-dependence on AI can diminish human-to-human support and empathy, and the digital divide can exclude certain groups or individuals from these benefits.
  • Benefit: Research, coding, and data analysis assistance can facilitate pattern identification and lead to more accurate insights.
    Concern: Unclear ownership and attribution of AI-assisted work increase the risk of plagiarism or academic dishonesty.
  • Benefit: Improved healthcare diagnostics and treatment can lead to better patient outcomes, more efficient healthcare delivery, and reduced healthcare costs.
    Concern: Privacy and consent issues raise ethical questions about the appropriate use of sensitive medical data.

Additional Resources

The potential benefits of AI for the environment are numerous, including improved biodiversity monitoring, precision agriculture, and waste management, to name a few. However, the development and deployment of AI technologies require substantial computational power, leading to significant energy consumption and a considerable carbon footprint. As the demand for AI grows and its availability widens, these environmental costs are increasing.

Potential Benefits of GenAI vs. Real Concerns about GenAI

Benefit: Biodiversity monitoring and protection. AI can help analyze large datasets to monitor species populations, habitats, and ecosystems more efficiently.

See: A synergistic future for AI and ecology

Concern: Energy consumption during AI model training. The process of developing and training AI models requires significant computational power, which in turn leads to extraordinarily high energy consumption.

See: The AI Boom Could Use a Shocking Amount of Electricity

Benefit: Climate change mitigation. AI can help optimize renewable energy systems, smart grids, industrial processes, and energy-efficient buildings, reducing waste, energy consumption, and greenhouse gas emissions.

See: How Ecology Could Inspire Better Artificial Intelligence and Vice Versa

Concern: Carbon footprint and greenhouse gas emissions. The energy-intensive nature of AI development and deployment results in a substantial carbon footprint, contributing to global greenhouse gas emissions.

See: AI’s carbon footprint – how does the popularity of artificial intelligence affect the climate?

Benefit: Precision agriculture. AI can be used to optimize farming practices, reducing the need for water, pesticides, and fertilizers.

See: How can AI benefit sustainability and protect the environment?

Concern: Water-intensive processes. Generative AI requires vast amounts of water for manufacturing microchips and cooling data centres.

See: AI water consumption: Generative AI’s unsustainable thirst

Benefit: Waste management and recycling. AI can help optimize waste collection, sorting, and recycling processes, reducing the amount of waste heading to landfills or polluting the environment.

See: How AI is Revolutionizing Solid Waste Management

Concern: Electronic waste. E-waste associated with the development of AI technologies is growing, and e-waste recycling is not keeping up, contributing to air, soil, and water pollution.

See: Electronic Waste Rising Five Times Faster than Documented E-waste Recycling: UN

Benefit: Ecosystem restoration and conservation planning. AI can analyze large environmental datasets and satellite imagery to identify areas in need of restoration, map ecosystem boundaries, and help plan and prioritize conservation efforts in a more data-driven and efficient manner.

See: Improving biodiversity protection through artificial intelligence

Concern: Consequences of mining rare earth elements. The hardware components necessary for AI technologies rely on rare earth elements, the mining of which contaminates soil and groundwater with an array of toxic chemicals.

See: Rare earth mining may be key to our renewable energy future. But at what cost?

Given the rapid growth of GenAI, scholars like Aimee van Wynsberghe assert that "this is not a technology that we can afford to ignore the environmental impacts of" (2021), especially considering the lack of publicly available information about the sector’s total energy consumption. In one alarming example, it is estimated that "a search driven by generative AI uses four to five times the energy" of a conventional web search (e.g., Google, Bing).

Economic and Labour Considerations

GenAI will lead to the loss of many jobs and, at the same time, the creation of new ones.

Administrative and office work, arts and entertainment, IT, and education sectors are predicted to be particularly vulnerable to these disruptions. Higher education has been affected and will continue to see change in numerous ways, including in marketing and campus relations, admissions and enrollment, IT, finance and administration, libraries, student support services, and teaching and learning. In many industries, there is now a growing emphasis on reskilling and upskilling to meet the demands and offset the disruptions of an AI-driven job market. 

Potential Benefits vs. Real Concerns

  • Benefit (Enhanced productivity): Generative AI has the potential to increase productivity by automating tasks and enabling workers to focus on higher-level responsibilities.
    Concern (Job displacement and automation): The rise of Generative AI may lead to job displacement and automation as machines become capable of performing tasks previously done by humans.
  • Benefit (Streamlined work processes): AI technologies can help optimize and streamline various work processes, leading to more efficient operations and reduced costs.
    Concern (Widening income inequality): The adoption of Generative AI requires workers to adapt to new roles or face job loss, exacerbating income inequality.
  • Benefit (Improved efficiency): By automating tasks and optimizing workflows, Generative AI can improve efficiency across industries, allowing businesses to achieve more with fewer resources.
    Concern (Impact on vulnerable populations): The adoption of Generative AI may reinforce existing biases and disproportionately impact vulnerable populations, such as marginalized communities, who may lack the resources to adapt to the changing job market.
  • Benefit (Innovation and technological advancements): Generative AI can catalyze technological breakthroughs and innovations that benefit society and improve people's lives.
    Concern (Need for social safety nets): As AI technologies disrupt the labour market, there will be a growing need for social safety nets to support displaced workers and ensure a just transition.

Current economic and labour concerns related to the development and implementation of GenAI include:

Intellectual property

Several lawsuits have been brought against companies like OpenAI, Midjourney, Suno, and Udio by writers and artists (including the estate of the late comedian George Carlin), the New York Times, eight US newspaper publishers owned by Alden Global Capital, and, as of June 2024, the world’s largest music labels. All allege copyright infringement, asserting that their work has been unlawfully used to train AI systems without proper consent or compensation. Ethical and economic concerns around Indigenous intellectual property exist as well, particularly around cultural appropriation, misappropriation, and misrepresentation.

Creative labour

Many artists, writers, musicians, and other creatives are concerned about how GenAI technology may replace human creativity, arguing that it undermines or “devalue[s] the rights of human artists” and affects their often already-limited earning potential.


Economic exploitation

Although Generative AI was trained on massive digital datasets, it required human intervention to become usable. During ChatGPT's development, OpenAI outsourced a significant portion of this work to data labellers in Kenya, who were paid less than $2 per hour to sift through often harmful and traumatizing content and filter out toxic material. This situation has given rise to the concept of "digital sweatshops" and raises concerns about the ongoing exploitation of workers in the Global South by Big Tech companies.
