Looking at Artificial Intelligence and the Ethics of Automation


The goal is to become familiar with some of the potential benefits and threats to human life posed by automation involving artificial intelligence, and to be able to relate these benefits and threats to the context of technological development: historical, present and future.

A further goal is to develop a critical understanding of the extent to which artificial intelligence constitutes a threat to received ideas about moral responsibility.

I have also proactively extended my learning to look at interventions that bridge accountability gaps through community cohesion and knowledge sharing across developer communities. The list below covers a few additional levels of accountability one could consider including in cohesive and comprehensive research initiatives, in a practical, real-world sense rather than a merely theoretical one!

I do want to look at theories

And I do want to look at frameworks

But I also want to look at best practices currently implemented by QA, developers, product and executive management overseeing these teams, to ensure diligence around the topics on this page.

That is:

Privacy

Data integrity

Human rights

Policies informing best practices

Initial plans for system design and system builds

CI/CD within the SDLC before release (see the sketch after this list)

Transparency and reporting, etc.
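
To make the CI/CD and transparency points a little more concrete, here is a minimal sketch, in Python, of what a pre-release "ethics gate" in a pipeline might look like. The check names, flags and release-candidate shape are all hypothetical, invented for illustration; this is not any team's actual process.

```python
# Hypothetical pre-release "ethics gate" for a CI/CD pipeline.
# All check names, flags and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def run_ethics_gate(release_candidate: dict) -> list[CheckResult]:
    """Run simple yes/no diligence checks before a release is allowed out."""
    return [
        CheckResult("privacy_review",
                    release_candidate.get("pii_fields_documented", False),
                    "All personal-data fields must be documented and justified."),
        CheckResult("data_integrity",
                    release_candidate.get("training_data_checksums_verified", False),
                    "Training data must match its recorded checksums."),
        CheckResult("transparency_report",
                    release_candidate.get("model_card_present", False),
                    "A model card / transparency report must ship with the release."),
    ]

if __name__ == "__main__":
    candidate = {"pii_fields_documented": True,
                 "training_data_checksums_verified": True,
                 "model_card_present": False}
    results = run_ethics_gate(candidate)
    for r in results:
        print(f"{'PASS' if r.passed else 'FAIL'}: {r.name} - {r.detail}")
    if not all(r.passed for r in results):
        raise SystemExit("Release blocked: ethics gate failed.")
```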

Here is a list of ten philosophical theories relevant to areas of diligent concern in AI and technology, along with the associated philosophers and a breakdown of how each theory can integrate with current technological systems and within a LongPESTLE (Political, Economic, Social, Technological, Legal, Environmental) framework.

  1. Utilitarianism

• Philosopher: Jeremy Bentham / John Stuart Mill
• Integration: This theory posits that actions are right if they promote the greatest happiness for the greatest number. In the context of AI, it can guide decision-making by assessing the impacts of technologies on user well-being and societal happiness. (A toy sketch of this kind of aggregation follows this theory's breakdown.)

IMO this could be considered a kind approach, although it might not always realise its kind intentions, and possibly not enough is done about that, particularly when kindness is also a political tool, or is utilised to the advantage of a few through hidden benefits.

LongPESTLE Application:
Political: Influences policy-making to prioritise technologies that benefit the majority. The question is how?! And are there sourceable reports to evidence the validity of this theory in the real world?

Economic: Assesses cost-benefit analyses for technology deployment.

Social: Evaluates social impacts, ensuring technology enhances quality of life. Although this could be very subjective, which calls into question the validity of this theory within a social context.
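
As a toy illustration of "the greatest happiness for the greatest number", here is a minimal sketch comparing two technology options by total expected well-being. The group names, sizes and well-being scores are invented assumptions; real impact assessment is far messier, and the example also surfaces the hidden-benefits worry raised above.

```python
# Toy utilitarian comparison of two technology options.
# Group names, sizes and well-being deltas are invented for illustration.

options = {
    "option_a": {"users": (10_000, +2.0), "moderators": (50, -1.0)},
    "option_b": {"users": (10_000, +1.0), "moderators": (50, +0.5)},
}

def total_wellbeing(option: dict) -> float:
    """Sum of (group size * average well-being change) across groups."""
    return sum(size * delta for size, delta in option.values())

for name, groups in options.items():
    print(name, total_wellbeing(groups))

best = max(options, key=lambda n: total_wellbeing(options[n]))
print("Utilitarian pick:", best)
# Note how a large group's small gain can outvote a small group's large
# loss: exactly the worry about advantage to a few (or harm to a few)
# hiding inside an aggregate number.
```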

  2. Deontological Ethics

• Philosopher: Immanuel Kant
• Integration: Focuses on the morality of actions based on rules and duties rather than consequences. This can ensure AI systems uphold human rights and ethical standards regardless of outcomes. I would like to look at how one would ensure this was the case in every instance. Would they build it into policies and practices, and how? (A contrasting sketch follows this theory's breakdown.)

LongPESTLE Application:
Legal: Informs compliance with regulations that protect individual rights. This would only apply if the law existed. Are we sure there are no holes in these legal frameworks? How do we know they are fully comprehensive and consider all these new concerns? And as new considerations emerge through research currently underway, how will they be reflected in new legal frameworks promptly enough to prevent problems?

Social: Encourages ethical design that respects user autonomy. Given how far the possibilities of ideas and design can reach, I would like to know how they intend to govern ethical design. There are systems that can scan algorithms and software, which I might list.

Technological: Advocates for transparency in algorithms to uphold ethical standards.
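
To contrast with the utilitarian sketch above, a deontological check treats duties as vetoes that block an action however large its expected benefit. A minimal sketch, with duty names and an action shape I have invented for illustration:

```python
# Toy deontological gate: duties act as vetoes regardless of expected benefit.
# Duty names and the proposed action are illustrative assumptions.

DUTIES = {
    "no_processing_without_consent": lambda a: a.get("has_consent", False),
    "no_deception_of_users": lambda a: not a.get("uses_dark_patterns", False),
}

def permitted(action: dict) -> bool:
    """An action is permitted only if it violates no duty,
    even if action['expected_benefit'] is enormous (it is ignored here)."""
    return all(rule(action) for rule in DUTIES.values())

action = {"expected_benefit": 1_000_000, "has_consent": False}
print(permitted(action))  # False: the consent duty vetoes it despite the benefit
```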

  3. Virtue Ethics

• Philosopher: Aristotle
• Integration: Emphasises character and moral virtues over rules. In technology, this promotes the development of systems that foster ethical behaviours among users and developers. If you ask me, this could potentially be one of the better theories that exists. It could also solve a lot of the problems that come with fixing unethical output from people who could have just been disciplined or conditioned differently! If avoiding going over things twice is any kind of goal, it might be something to consider deeply!

LongPESTLE Application:
Social: Encourages a culture of ethical behaviour in tech development. I think this is wise and should be encouraged, but I should also provide examples, maybe even case studies, for why I believe this.
Political: Influences leadership to cultivate virtues like integrity and responsibility in tech governance. In a way, companies have always tried to encourage such things, but integrity and responsibility need to be reflected in a way that people are not worried. All too often I've seen people walking in and out of buildings just doing their job, while internally not believing in these values, or undermining them as soon as they clock out, because they were never really into being ethical. Maybe more of a social study should be done into why people feel ethical behaviour is necessitated. Maybe this will point back to political injustice and governments not doing their jobs properly in terms of their service to people. When people feel they have to take things into their own hands, it might not yield ethical outcomes in every instance. I would like to investigate this, but it seems to be a rabbit hole of its own!
Economic: Supports businesses that prioritise ethical practices over profit. Not-for-profit organisations and charities used to be like this, but sadly I feel that their conversion, or a blurring of the lines, has potentially corrupted them as organisations, and this corruption could somehow feed into the software. Why is it that, all of a sudden, very many corporations seem to operate as charities, when not all of their goals and objectives are essentially charitable in every way?!

  4. Social Contract Theory

• Philosopher: Thomas Hobbes / John Locke / Jean-Jacques Rousseau
• Integration: Suggests that individuals consent to surrender some freedoms for societal benefits. This can guide the ethical use of AI in balancing privacy and security. (A consent-checking sketch follows this theory's breakdown.)
LongPESTLE Application:
Political: Influences legislation on data protection and user rights.
Legal: Underpins consent-based frameworks for data usage in AI.
Social: Facilitates public discourse on rights versus societal safety in technology. More research could be done in this area, I reckon! I hope to conduct some.
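
As a small illustration of a consent-based framework for data usage, here is a minimal sketch that allows data use only for purposes a user has agreed to, and only while that consent is current. The ledger shape and purposes are invented assumptions, not a real schema.

```python
# Toy consent ledger: data may be used only for purposes the user agreed to.
# The record shape and purposes are illustrative assumptions, not a real schema.

from datetime import date

consent_ledger = {
    "user_123": {"purposes": {"service_improvement"}, "expires": date(2026, 1, 1)},
}

def may_use(user_id: str, purpose: str, today: date) -> bool:
    record = consent_ledger.get(user_id)
    if record is None or today > record["expires"]:
        return False  # no consent on file, or consent has lapsed
    return purpose in record["purposes"]

print(may_use("user_123", "service_improvement", date(2025, 6, 1)))   # True
print(may_use("user_123", "targeted_advertising", date(2025, 6, 1)))  # False
```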

  5. Feminist Ethics

• Philosopher: Carol Gilligan / Alison Jaggar
• Integration: Challenges traditional ethical theories by emphasising relationships and care. In tech, it promotes inclusivity and diversity in AI development.
LongPESTLE Application:
Social: Advocates for representation in tech teams and user bases.
Political: Supports policies that encourage diversity and equality in tech.
Economic: Identifies the economic benefits of inclusive practices in technology.

  6. Postmodern Ethics

• Philosopher: Michel Foucault
• Integration: Questions universal truths and emphasises context. This theory is relevant for considering the ethical implications of AI in various cultural and societal contexts. In my opinion, this is highly relevant today! It is a theory I would like to look at in more depth across various paradigms of society and implementations of AI.
LongPESTLE Application:
Social: Encourages the examination of how AI impacts different communities uniquely. I think this social study should be one of the first to be completed; I do think it will indicate marginalisation.
Political: Advocates for adaptive regulations that consider local contexts. I think the bigger picture is always really important.
Technological: Promotes localised AI systems that respect cultural differences. I think this would be something that needs to be done with a great deal of tact and professionalism.

  7. Care Ethics

• Philosopher: Nel Noddings
• Integration: Focuses on the importance of interpersonal relationships and the moral significance of care. This can influence AI design to prioritise user well-being and empathy. This sounds like a really decent claim, IMO. There are countless case studies that could back this theory up.
LongPESTLE Application:
Social: Guides the creation of AI that supports mental health and community well-being. IMO mental health is something that is built by resilience in the early years. It is not necessarily something AI is intended to cultivate, although there may be therapeutic interventions through AI. The brain forms itself most rapidly between the ages of 0 and 5. My studies into education and brain development revealed many theories supporting rapid brain development in the early years, and how it could support mental health directly, for the duration of one's life, purely on the basis of the quality of the education one receives as a child. The shift away from women caring for children may be responsible for a rise in mental health problems in future generations, which is unhealthy. The increase in the cost of living did not precede women's entrance to the world of work; it was a form of gentrification. I think this is something that could be better regulated to create more cooperative conditions and improve gender relations, to better support future generations and mental resilience in children through biological parenting, which is also, generally speaking, the best kind for many children, although not in all cases. But that ventures beyond the topic.
Technological: Advocates for user-friendly systems that prioritise emotional intelligence. There will be reports, investigations, case studies, current projects and innovations in this space which I could incorporate into my work (time permitting!).
Political: Informs policies aimed at using AI to enhance social care services.

  8. Environmental Ethics

• Philosopher: Aldo Leopold / Arne Naess
• Integration: Emphasises the moral value of the environment. This theory can inform sustainable practices in technology development and deployment. IMO this is one of the most critical theories concerning climate, and people's awareness of their impact in relation to it as individuals, as corporations and as a society. There is already heavy investment in this type of education across the early years, the key stages and up into secondary schools, with the aim of behavioural impact and consciousness. But it can also touch on values and enable people to take personal pride in the work that they do, in their contribution to the world, and in their sense of fulfilment. I would like to see reports from social-change organisations evidencing that this is the case, and to venture beyond into employee experience. (A back-of-the-envelope sketch follows this theory's breakdown.)
LongPESTLE Application:
Environmental: Guides the development of eco-friendly technologies.
Political: Influences regulations supporting sustainable practices in tech industries.
Economic: Highlights the long-term economic benefits of environmentally conscious technology. I think it would be healthy to let everyone know about this!
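
To put a rough number on the environmental point, here is a back-of-the-envelope sketch of the energy and carbon of a training run. Every figure is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope training-energy estimate.
# Every figure here is an illustrative assumption, not a measurement.

gpus = 8                      # assumed cluster size
power_per_gpu_kw = 0.3        # assumed average draw per GPU, in kW
hours = 24 * 14               # assumed two-week training run
grid_kg_co2_per_kwh = 0.2     # assumed grid carbon intensity

energy_kwh = gpus * power_per_gpu_kw * hours
co2_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Energy: {energy_kwh:.0f} kWh, CO2: {co2_kg:.0f} kg")
# ~806 kWh and ~161 kg of CO2 under these assumptions; a real audit
# would measure power draw directly and use the actual grid mix.
```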

  9. Pragmatism

• Philosopher: William James / John Dewey
• Integration: Advocates for practical solutions based on the consequences of actions. This is useful for evaluating the real-world impact of technologies. IMO this is absolutely essential!
LongPESTLE Application:
Economic: Promotes adaptive business practices based on user feedback. IMO this is important, but not the most critical; yet I do feel it will yield useful insight and improve management and reforms.
Social: Encourages community engagement in tech development processes.
Technological: Supports iterative design that evolves based on real-world performance. This is just wise, IMO; I think it is already considered, as part of continuous development within the software development life cycle.

  10. Existentialism

• Philosopher: Jean-Paul Sartre / Simone de Beauvoir
• Integration: Focuses on individual freedom and responsibility. This is relevant in technology, especially concerning user agency and decision-making.

LongPESTLE Application:

Social: Encourages the development of technologies that enhance personal autonomy.

Political: Influences advocacy for user rights and freedoms in technology.

Legal: Supports frameworks that protect individual choices in the digital realm.


These philosophical theories provide a rich foundation for understanding the ethical implications of AI and technology across various sectors. By integrating these multiple and far-reaching philosophical theories into a LongPESTLE framework, organisations of all kinds (listed below) can likely create a comprehensive approach to responsible technology development that considers ethical, social, legal and environmental impacts, among others.

An expansive write-up for further reading:
https://arxiv.org/abs/1908.08351

Data Providers:
Those who supply the data used to train AI systems can be accountable for ensuring that the data is accurate, representative, and free from bias. Poor quality or biased data can lead to flawed AI outputs.
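
As a small illustration of what "representative" could mean in practice, here is a minimal sketch that compares group proportions in a training set against a reference population and flags large gaps. The group labels, figures and tolerance are invented assumptions.

```python
# Toy representativeness check: compare dataset group shares to a reference.
# Group labels, proportions and the tolerance are illustrative assumptions.

reference_shares = {"group_a": 0.51, "group_b": 0.49}
dataset_counts = {"group_a": 9_000, "group_b": 1_000}
TOLERANCE = 0.05  # maximum allowed absolute gap per group

total = sum(dataset_counts.values())
for group, ref_share in reference_shares.items():
    share = dataset_counts.get(group, 0) / total
    gap = abs(share - ref_share)
    status = "OK" if gap <= TOLERANCE else "UNDER/OVER-REPRESENTED"
    print(f"{group}: dataset {share:.2%} vs reference {ref_share:.2%} -> {status}")
```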

Stakeholders:
Investors or stakeholders in a company may share responsibility for encouraging ethical practices and oversight in AI development, particularly if profit motives compromise ethical considerations.

Regulatory Bodies
These entities can be held accountable for establishing and enforcing guidelines that govern AI usage, ensuring companies comply with ethical standards and best practices.

Educational Institutions:
Universities and training programs that teach AI and machine learning can also be accountable for instilling ethical considerations in future developers and data scientists.

Public Opinion and Advocacy Groups:
Society at large, including advocacy groups and the general public, can play a role in holding companies and governments accountable through activism, awareness, and demanding ethical AI practices.

AI Systems’ Designers:
Beyond developers, designers who shape the user interface and interaction may bear responsibility for creating systems that are intuitive and user-friendly, which can affect how the AI is used and perceived.

Incorporating these perspectives can further enrich my analysis of accountability in AI, beyond companies, governments, users, developers and AI itself alone!

I thought I'd do research to build relationships and connections across the associated areas of responsibility!

My reason for doing this is to read reports, network, attend events, follow updates posted and raise concerns across the associated groups for accountability and to build awareness.

Here's the beginning of a basic list of contacts across areas of accountability I'd potentially reach out to, with full URLs for both LinkedIn pages and official websites for UK-based companies and organisations relevant to AI accountability:

Data Providers

SAS UK

• LinkedIn: https://www.linkedin.com/company/sas/

• Website: https://www.sas.com/en_gb/home.html

CEO: Jim Goodnight

Experian UK
• LinkedIn: https://www.linkedin.com/company/experian/
• Website: https://www.experian.co.uk

CEO: Brian Cassin

Developers:

DeepMind Technologies
• LinkedIn: https://www.linkedin.com/company/deepmind/
• Website: https://www.deepmind.com
CEO: Demis Hassabis

Graphcore
• LinkedIn: https://www.linkedin.com/company/graphcore/
• Website: https://www.graphcore.ai
CEO: Nigel Toon

Stakeholders

Octopus Ventures
• LinkedIn: https://www.linkedin.com/company/octopusventures/
• Website: https://www.octopusventures.com
CEO: Bindea Zafri

Balderton Capital
• LinkedIn: https://www.linkedin.com/company/baldertoncapital/
• Website: https://www.balderton.com
Managing Partner: Bernard Liautaud

Regulatory Bodies

Information Commissioner’s Office (ICO)
• LinkedIn: https://www.linkedin.com/company/information-commissioners-office/
• Website: https://www.ico.org.uk
Information Commissioner: John Edwards

UK Competition and Markets Authority (CMA)
• LinkedIn: https://www.linkedin.com/company/cma-uk/
• Website: https://www.gov.uk/cma
Chief Executive: Andrea Coscelli

Educational Institutions

University of Oxford (Department of Computer Science)
• LinkedIn: https://www.linkedin.com/school/university-of-oxford/
• Website: https://www.ox.ac.uk
Head of Department: Professor Simon Thompson

Imperial College London (Department of Computing)
• LinkedIn: https://www.linkedin.com/school/imperial-college-london/
• Website: https://www.imperial.ac.uk
Head of Department: Professor Chris Williams

This list reminds me that governance is a group effort!

Image description
Pre-Sessional Activities
Essential Reading

Koenigs, Peter, 2022, "Artificial Intelligence and Responsibility Gaps: What Is the Problem?", Ethics and Information Technology, 24: 36. doi:10.1007/s10676-022-09643-0

Reading
Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation, 4(3): 212–215. doi:10.1007/s12559-012-9129-4

Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI, 5: 15. doi:10.3389/frobt.2018.00015

Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology, 21(2): 75–87. doi:10.1007/s10676-018-9494-0

Additional Resources:

Koenigs article (URL)

Scientific Reports article: embedding responsibility in AI systems (URL)

Article on AI and testing for intelligence (URL)

Article on the AI workforce (URL)

Müller article (PDF, 250.0 KB)

News article: 'Godfather' of AI wins Nobel Prize (URL)

Lecture Slides - Week 3 (PPTX, 173.5 KB)

Lecture 3 - Video
