top of page
Search

The Ethical Implications of GenAI use: Social, Economic and Environmental

Any AI Literacy course requires participants to grasp the ethical implications of its use. This includes both positive and negative implications. While it is common to hear people talk about privacy, copyright and bias, there are many more implications that need to be considered.


An important study led by Zach Quince resulted in a comprehensive table outlining 32 distinct implications of GenAI use. While there may be more, we believe this provides a robust foundation without becoming overly granular. Our findings revealed that students tend to associate with only a limited number of these implications, often gravitating toward the more positive ones. This underscores the need to strengthen students' ethical awareness and understanding by supporting the development of a more balanced and critical perspective.


We have started to incorporate these 32 elements into our teaching. We have discovered that given the opportunity, students do become more aware of the implications, and that can only be a benefit for the human race. Need a place to start? You have come to the right place.


If you would like to use information from the following table, please cite the linked study.


The 32 Social, Economic and Environmental Implications of GenAI Use

Implication

Brief explanation

Affect recognition

Affect recognition interprets emotions via facial expressions, body language, and speech, offering potential benefits in areas like mental health support, adaptive learning, and customer service. However, it faces criticism for methodological flaws and inconsistency. Privacy concerns arise with its use in surveillance, like in schools, and it can perpetuate bias, as seen with an algorithm leading to racial profiling by linking behaviours to ‘terrorist behaviour’.

Bias

GenAI is a vast language model trained on a large dataset like common crawl, which gathers data from web pages and other sources over many years. It grants immense capabilities but also harbours biases. Internet scraping can introduce discriminatory language alongside valuable content.

Collaboration

GenAI can enhance collaboration by facilitating idea exchange, streamlining communication, and fostering shared insights for collective problem-solving. The impact depends on how it is used. Either as a helpful tool that supports people or as something that undermines human contribution.

Competitive advantage

Gen-AI tools can facilitate competitive advantage in the face of rapid digital transformation. This may include improved resource allocation, capability development, efficiency and increased revenue/profit through cost reduction and improved productivity. However, this can also widen the gap between organisations with access to advanced AI and those without.

Control and oversight

This relates to ensuring that humans remain in ultimate control of all critical processes. AI should be used to support, not replace, human judgment, especially in decisions that affect people’s lives, rights, or well-being.

Copyright and intellectual property

Linked to ethical integrity are concerns about intellectual property and copyright. For example, AI-generated image tools like Stable Diffusion and DALL-E use internet images, sparking debates on artists' rights as their styles get replicated without consent. Some argue this mirrors traditional artistic processes, while artists view it as an ethical breach.

Data augmentation

GenAI serves as a tool for data augmentation, expanding data quantity and diversity, and is usable to train new AI models. Helping AI systems learn better can lead to economic advantages, but its ethical implications depend on its application. It can lead to environmental strain, social harm, and economic disruption if not carefully managed.

Data collection and utilisation

The process of collecting data, transforming it (datafication), and leveraging it (big data). This may lead to smarter technology that can unlock innovation, improve services, and support economic development if done appropriately. Datafication transforms nearly all areas of our daily lives into data for AI algorithms, raising serious privacy concerns. From location to health data, our interactions feed algorithms, turning user data into products and can lead to exploitation. Big Data can exacerbate biases and discrimination, as the algorithm's perspective lacks diverse input due to unequal access to technology.

Data ownership

This represents the concerns about who owns the data generated by GenAI and uploaded to GenAI.

Data security

Security and protection of data are needed as there are vulnerabilities and defences in LLMs that need to be considered. This includes AI Model Inherent Vulnerabilities (e.g. data poisoning, backdoor attacks, training data extraction) and Non-AI Model Inherent Vulnerabilities (e.g. remote code execution, prompt injection, side channels).

Decision making, risk & uncertainty

GenAI may shape human decision-making by influencing how risk and uncertainty are perceived and managed. While it can support faster, data-informed choices, care is needed to avoid superficial or contextually inaccurate responses that may lead to poor or overconfident decisions.

Enhanced user experiences

GenAI’s ability to generate content and adapt to user needs may enhance experiences across platforms—improving quality through visuals, language, and communication efficiency. However, there are risks of over-personalisation, reduced human agency, and user manipulation if experiences are optimised solely for engagement or profit.

Environmental impact

The lifecycle of GenAI can contribute to greenhouse gas emissions due to its reliance on energy-intensive data centres and cloud computing. While less direct, the mining and processing of materials used in AI hardware can also lead to environmental harm.

Equity and accessibility

Fair and unbiased access to the benefits of GenAI for all users, regardless of background, resources, or abilities. This includes making the technology available and usable by diverse communities while mitigating biases and ensuring equal opportunities for all.

Ethical Integrity

Users may use language models such as ChatGPT to declare outputs that are not their own. Note that GenAI output is not traditionally-defined plagiarism, as the model's output is not copied from a single source. Rather, the output is an original creation generated ‘probabilistically’ from a wide range of sources.

Explore alternatives

GenAI draws on diverse human knowledge to generate a wide range of outputs across text, visuals, and more. This enables users to engage with abstract concepts in varied and accessible ways, encouraging the exploration of alternative ideas and perspectives. For example, a user might prompt GenAI to generate counterarguments to their own viewpoint, fostering critical thinking and deeper understanding. However, GenAI may also present all perspectives, regardless of accuracy or credibility, as equally valid. This can lead to confusion, reinforce existing biases, or unintentionally spread misinformation, particularly if users lack the context to evaluate the responses critically.

Feedback & improvement

GenAI can provide diverse feedback types, including performance reviews, learning insights, actionable suggestions, and improvement strategies. However, its feedback may lack accuracy, context, emotional intelligence, or an understanding of nuanced human experiences, leading to generic, inappropriate, or even demotivating responses if not carefully interpreted.

Human labour

GenAI’s impact on human labour is complex. While it creates efficiency and new job opportunities, it also threatens to displace workers, particularly in low-skill or repetitive roles. Even in high-skilled fields like law or finance, automation raises concerns about job security. Early GenAI systems often relied on hidden, underpaid human labour for tasks like data labelling and moderation. Ethical deployment must consider both the risks of exploitation and the potential to enhance work if guided by fair policies and inclusive design.

Improved understanding

GenAI can enhance users’ comprehension across subject domains by providing explanations, examples, and diverse perspectives that may support deeper learning. It may help users grasp complex concepts and explore new ideas more efficiently. However, there is a risk of oversimplification, misinformation, or users developing surface-level understanding if they rely too heavily on AI without critical thinking or cross-checking with reliable sources.

Language fluency

GenAI may enhance language fluency by improving grammar, vocabulary, and expression. However, over-reliance may weaken independent language skills and reduce mother tongue fluency over time.

Learning

GenAI-powered curriculum design can produce personalised and engaging experiences, for learning and skill development. It could cause over-dependence and a decline in skills such as critical and independent thinking.

Organisational structure

GenAI can be integrated into organisational structures and roles, providing the potential for innovation, while enhancing efficiency, responsiveness, and agility. However, its adoption may also lead to challenges such as increased centralisation of decision-making, reduced transparency, and over-reliance on AI-driven processes. These shifts can undermine human oversight, disrupt team dynamics, and create resistance or confusion among employees if not managed carefully.

Output quality and accuracy

Determining if the output of the GenAI is reliable, relevant, and factually accurate. Evaluating quality requires evaluative judgment, as outputs must reflect real information, not fabricated or "hallucinated" content, and align with current, verified knowledge. Misinformation can have serious consequences.

Performance measurement

GenAI can refine performance measurement by automating data analysis, offering real-time insights, and optimising metrics for accurate and informed assessments. It can create performance cultures focussed on metrics rather than intangible higher values of contribution and service.

Power and hegemony

GenAI models, built on static data, reflect the existing societal power dynamics and their limitations and biases , potentially further marginalising disadvantaged groups. These models encode a dominant perspective, often reflecting a heterosexual, white, Western male lens due to the volume of internet content. Efforts to create fair synthetic data still reproduce biases. Additionally, AI reinforces global power imbalances, as access to resources and wealth concentrates AI in the hands of the already powerful, deepening divides between wealthy and poorer nations and people.

Privacy

Concerns include personal data usage, breaches, and opaque decision-making. Issues like lack of consent, unclear data ownership, and biased profiling raise serious risks especially in applications like facial recognition, where privacy and civil rights may be compromised.

Process automation

GenAI offers versatile and powerful capabilities to streamline and optimise tasks across various domains, transforming how processes are automated. It may lead to major gains in efficiency and safety, but it comes with risks including widening inequality, displacing jobs, and stressing the environment if not implemented responsibly.

Psychological well-being

GenAI systems can shape users’ mental and emotional states through their design and interactions. While they may support well-being by offering guidance, reducing stress, or providing emotional support, they can also pose risks such as increasing anxiety, encouraging over-reliance, or exposing users to manipulative or harmful content. Ensuring psychologically safe and ethical design is essential to protect users' mental health.

Refine outcomes

Users employ GenAI applications with specific goals to refine and generate outputs that meet their qualitative or quantitative requirements. While fine-tuning can help improve specific goals, such as the quality of content and meeting certain criteria or performance-based requirements, over-refinement can unintentionally reinforce user biases, rely on flawed inputs, or give a misleading sense of accuracy.

Safeguarding against misuse

Measures to prevent unauthorised access, manipulation, or malicious use of AI systems. The rise of using ‘deep-fakes’ for fake news is one example.

Solution exploration

Users can use GenAI to explore potential solutions or lines of inquiry when presented with a challenge or question. How fair, helpful, or sustainable those solutions are depends on how we use the technology.

Transparency and accountability

The responsibility held by developers, users, and stakeholders for the GenAI's actions and outcomes. It involves transparency in how AI systems operate, ensuring ethical use, addressing biases or errors, and being answerable for the impact and decisions made using AI technology.

Sasha Nikolic

06 June 2025



Ethical Implications of GenAI Use: Social, Economic and Environmental
Ethical Implications of GenAI Use: Social, Economic and Environmental

 
 
 

Comments


Sasha Nikolic

  • LinkedIn
  • Youtube

©2025 by Sasha Nikolic

Wollongong,

Australia

Sasha Nikolic | AI Strategy & Education. Specialising in Generative AI, Sasha Nikolic helps educators, institutions, companies and policymakers harness AI responsibly and effectively to transform learning and boost productivity. Addressing ethical risks and practical implementation, he offers insights, consulting and resources at the intersection of education, technology and strategy.

Harnessing AI - Responsibly and Effectively - AI Expert

AI Educational Consultant

GenAI Education Consultant

News South Wales, NSW, Australia

Keynote Speaking, AI Consultant

AI in education

GenAI in education

Generative AI consultant

Artificial Intelligence

AI Strategy

AI Ethics

AI Policy

AI Consultant Sydney

bottom of page