Generative AI: 1. Ethics 2. CLIP

From FDHwiki
==Motivation==
In today's age of rapid technological advancement, the integration of Artificial Intelligence (AI) across diverse sectors has revolutionised our lives, promising increased efficiency and progress in fields such as healthcare, social media, the economy, internet services, and more.<ref>[https://ieeexplore.ieee.org/document/9844014 C. Huang, Z. Zhang, B. Mao and X. Yao, "An Overview of Artificial Intelligence Ethics," in IEEE Transactions on Artificial Intelligence, vol. 4, no. 4, pp. 799-819, Aug. 2023.]</ref> Notably, the emergence of Large Language Models (LLMs) such as GPT-3 or LLaMA has sparked both fascination and concern. While these models impress with their ability to generate human-like text and perform complex tasks, they invite an essential inquiry: the ethical considerations surrounding AI.
Embedding ethics into AI systems remains a considerable challenge, as applied ethics offers no settled “common approaches”. First, the ever-evolving impact of artificial intelligence across scientific, engineering, and cultural domains continually demands new strategies for navigating emerging AI ethics challenges. Furthermore, complexities arise from persistent conflicts among different ethical norms; understanding and evaluating the consequences of actions is itself a difficult task; and most ethical decisions depend on subjective judgments. This intricate task remains inherently arduous for both humans and machines.<ref>Powers, Thomas M., and Jean-Gabriel Ganascia, 'The Ethics of the Ethics of AI', in Markus D. Dubber, Frank Pasquale, and Sunit Das (eds), The Oxford Handbook of Ethics of AI (2020; online edn, Oxford Academic, 9 July 2020), https://doi.org/10.1093/oxfordhb/9780190067397.013.2</ref>


Our project delves into this multifaceted ethical landscape surrounding AI from both technical and philosophical perspectives. We explore how AI systems grapple with ethical dilemmas in light of these diverging ethical priorities, and we seek methods to align these systems more closely with fundamental human ethical values. Additionally, we investigate whether these AI systems maintain a form of consistency in their ethical considerations amid this plurality of ethical principles.


==Project Plan and Milestones==



===Weekly Plan===

{| class="wikitable"
! Date !! Task !! Completion
|-
| Week 4
|
* Paper reading.
* Exploring existing RLHF and RLAIF approaches.
* Exploring red-teaming datasets.
|
|-
| Week 5
|
* Familiarizing with the Dromedary, SALMON, and Llama base models.
|
|-
| Week 6
|
* Evaluation of different base models.
* Choice of the Llama 2 model as our baseline.
|
|-
| Week 7
|
* Red-teaming dataset exploration.
* Reading about ethical theories.
|
|-
| Week 8
|
|
|-
| Week 9
|
* Formatting the ETHICS dataset for Llama fine-tuning and evaluation.
* Supervised fine-tuning of the Llama model.
|
|-
| Week 10
|
* Evaluation of the Llama model before and after fine-tuning on the ETHICS dataset.
* Model tuning.
* Mid-term presentation; start writing the Wikipedia page following the plan.
|
|-
| Week 11
|
* Reading about reinforcement learning with PPO.
* Re-formatting the deontology dataset.
* Creation of the preference model.
|
|-
| Week 12
|
* Examining preference-learning models, how they work, and their applications.
* Setting up a simple reinforcement learning model.
* Running preliminary tests and evaluating results.
|
|-
| Week 13
|
* In-depth analysis of model performance.
* Drafting the Wikipedia page, including its outline and structure.
|
|-
| Week 14
|
* Completing the Wikipedia page, including proofreading and ensuring technical accuracy.
* Writing the GitHub page and preparing for the final presentation.
|
|}
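The week 9 dataset-formatting step can be sketched as follows: each example is turned into a prompt/completion pair suitable for supervised fine-tuning. This is a minimal, hypothetical illustration; the field names (<code>scenario</code>, <code>label</code>), the label semantics, and the prompt template are assumptions, not the ETHICS dataset's actual schema.

```python
# Hypothetical sketch of formatting ETHICS-style rows into
# prompt/completion pairs for supervised fine-tuning.
# Field names and label semantics are assumed, not the real schema.

def format_example(row):
    prompt = (
        "Is the following action morally acceptable? "
        f"Answer 'acceptable' or 'unacceptable'.\n\n{row['scenario']}\n"
    )
    # Assumption: label 0 means the action is acceptable.
    completion = "acceptable" if row["label"] == 0 else "unacceptable"
    return {"prompt": prompt, "completion": completion}

# Invented rows for illustration only.
rows = [
    {"scenario": "I returned the wallet I found to its owner.", "label": 0},
    {"scenario": "I read my coworker's private messages.", "label": 1},
]
pairs = [format_example(r) for r in rows]
```

In practice, each pair would then be tokenized and fed to the trainer, with the loss typically computed only on the completion tokens.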

===Milestone 1===
* Define Research Questions: Establish clear, focused questions to guide the project.
* Literature Review: Conduct a comprehensive review of existing studies in AI ethics.
* Ethical Theory Exploration: Investigate various ethical theories to ground the research in a solid theoretical framework.
* Ethical Dataset Identification: Locate datasets for quantitative AI ethics evaluation, such as red-teaming datasets.

===Milestone 2===
* Refine Research Goals: Sharpen the focus and scope of the research based on initial findings.
* Dataset Finalization: Select the most appropriate dataset after exploration and evaluation.
* Model Selection and Fine-Tuning: Settle on the LLaMA model and fine-tune it using GPU resources.
* Model Evaluation: Conduct a thorough evaluation of the model, focusing on its ethical implications and performance.
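For the model-evaluation step, a basic metric is the fraction of the model's binary ethical judgments that agree with the gold labels. A toy sketch, where the predictions and labels are invented for illustration:

```python
def accuracy(predictions, labels):
    # Fraction of binary ethical judgments matching the gold labels.
    assert len(predictions) == len(labels) and labels
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Invented example outputs, not real model results.
preds = ["acceptable", "unacceptable", "acceptable", "acceptable"]
golds = ["acceptable", "unacceptable", "unacceptable", "acceptable"]
score = accuracy(preds, golds)  # 3 of 4 judgments match -> 0.75
```

Comparing this score before and after fine-tuning gives a first, coarse measure of whether training on the ethics data changed the model's judgments.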

===Milestone 3===
* Develop Advanced Models: Implement preference and reinforcement learning models, integrating them with the fine-tuned LLaMA model.
* In-Depth Analysis: Analyze the models' outcomes, assessing performance, identifying defects, and investigating specific issues like coherence and degeneration.
* Documentation and Dissemination: Create a comprehensive Wikipedia page summarizing the project's findings.
* Final Deliverables: Compile all project materials, including a well-documented GitHub repository.
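As a rough illustration of the preference-model idea behind Milestone 3, the snippet below computes the Bradley-Terry style loss commonly used to train reward/preference models from pairwise comparisons: -log sigmoid(r_chosen - r_rejected). This is a self-contained numerical sketch with made-up reward values, not our actual training code.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(r_chosen, r_rejected):
    # Bradley-Terry style pairwise loss: small when the preferred
    # ("chosen") response is scored higher than the rejected one.
    return -math.log(sigmoid(r_chosen - r_rejected))

# Made-up scalar rewards for illustration only.
loss_correct = preference_loss(2.0, -1.0)   # correct ranking -> low loss
loss_inverted = preference_loss(-1.0, 2.0)  # inverted ranking -> high loss
```

In an actual pipeline the scalar rewards would come from a learned model head, and this loss would be minimized over a dataset of human (or AI) preference pairs before the reward model is used for reinforcement learning.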


==Methodology==

===Data===

====Data Formatting====

===Model Selection===

===Model Fine-Tuning===

===Performance Evaluation===

==References==
<references />