Generative AI: 1. Ethics 2. CLIP
Project Plan and Milestones
Weekly Plan
| Date | Task | Completion |
|---|---|---|
| Week 4 | Paper reading. | |
| Week 5 | Familiarizing with the Dromedary, SALMON, and Llama base models. | |
| Week 6 | Evaluation of different base models. Choice of Llama 2 as our baseline. | |
| Week 7 | Red-teaming dataset exploration. | |
| Week 8 | ETHICS dataset discovery. | |
| Week 9 | ETHICS dataset formatting for Llama fine-tuning and evaluation. Supervised fine-tuning of the Llama model (see the formatting sketch below the table). | |
| Week 10 | Evaluation of the Llama model before and after fine-tuning with the ETHICS dataset. Mid-term presentation & start writing the Wikipedia page with the plan. | |
| Week 11 | Reading about reinforcement learning with PPO. Reformatting the deontology dataset. | |
| Week 12 | | |
| Week 13 | | |
| Week 14 | Writing the Wikipedia page & final presentation. | |
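The Week 9 formatting step turns each ETHICS example into a prompt/answer pair that Llama can be fine-tuned on. The snippet below is a minimal sketch of that idea, not the project's actual code: the field names (`input`, `label`), the label convention, and the prompt wording are assumptions that would need to be adapted to the ETHICS subset in use.

```python
# Minimal sketch of the Week 9 formatting step (not the project's actual code).
# Assumptions: each ETHICS row carries an "input" text field and a 0/1 "label"
# where 1 means "morally wrong"; the prompt template is invented for illustration.

def format_ethics_example(example: dict) -> dict:
    """Map one ETHICS-style row to a prompt/answer pair for supervised fine-tuning."""
    scenario = example["input"]   # assumed field name for the scenario text
    label = example["label"]      # assumed convention: 1 = wrong, 0 = acceptable
    prompt = (
        "Consider the following scenario and say whether the action is "
        "morally acceptable or wrong.\n\n"
        f"Scenario: {scenario}\nAnswer:"
    )
    answer = " wrong" if label == 1 else " acceptable"
    return {"prompt": prompt, "completion": answer}


if __name__ == "__main__":
    demo = {"input": "I told my friend the truth even though it upset her.", "label": 0}
    print(format_ethics_example(demo))
```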
Milestone 1
- Choose the project subject.
- Read papers about existing studies in this field.
- Define our research questions.
Milestone 2
- Refine our research questions.
- Explore different ethical theories.
- Find an appropriate dataset.
- Evaluate our fine-tuned supervised model (see the evaluation sketch below).
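One way to carry out the evaluation in Milestone 2 (and the before/after comparison of Week 10) is to score each candidate answer by the log-likelihood the model assigns to it and measure accuracy against the ETHICS labels. The sketch below assumes a Hugging Face causal LM; the checkpoint name and the " acceptable"/" wrong" answer strings are placeholders matching the formatting sketch above.

```python
# Sketch of a likelihood-based evaluation (assumptions: a Hugging Face causal LM
# checkpoint, and prompts/answers shaped like the formatting sketch above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log-probabilities of the completion tokens given the prompt.

    Assumes the tokenization of `prompt` is a prefix of the tokenization of
    `prompt + completion`, which holds for typical prompt/answer splits.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # position t predicts token t + 1
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = log_probs.gather(2, full_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prompt_ids.shape[1] - 1:].sum().item()


def predict(prompt: str) -> str:
    """Pick whichever candidate answer the model finds more likely."""
    candidates = [" acceptable", " wrong"]
    return max(candidates, key=lambda c: completion_logprob(prompt, c))


def accuracy(pairs: list[tuple[str, str]]) -> float:
    """pairs of (prompt, gold answer); returns the fraction predicted correctly."""
    return sum(predict(p) == gold for p, gold in pairs) / len(pairs)
```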
Milestone 3
- Train our preference (reward) model and our reinforcement learning model (see the sketch below).
- Analyze the results.
- Write the Wikipedia page.
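The preference and reinforcement learning models of Milestone 3 (and the PPO reading of Week 11) rest on two standard losses, sketched below as generic illustrations rather than the project's implementation: a pairwise Bradley-Terry style loss for training the preference (reward) model, and PPO's clipped surrogate objective for updating the policy. All tensors here are placeholders.

```python
# Generic sketches of the two losses behind the Milestone 3 models (not the
# project's implementation). All tensors here are placeholders.
import torch
import torch.nn.functional as F


def preference_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise Bradley-Terry style loss for a reward/preference model:
    push the reward of the preferred response above that of the rejected one."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()


def ppo_clipped_loss(new_logprobs: torch.Tensor,
                     old_logprobs: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """PPO clipped surrogate objective: limit how far one update can move the policy."""
    ratio = torch.exp(new_logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    # toy tensors just to show the call signatures
    print(preference_loss(torch.randn(8), torch.randn(8)).item())
    print(ppo_clipped_loss(torch.randn(8), torch.randn(8), torch.randn(8)).item())
```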