FashionGAN
Revision as of 15:38, 17 December 2023

Introduction

I teamed up with Romane, which was a mistake.

Motivation

The project began with a shared goal: changing how fashion designers work. Recognizing the evolving fashion landscape, our team aimed to innovate and tackle a common challenge: the search for new ideas and simpler design methods.

We wanted to empower designers with a modern tool that goes beyond traditional limits. We envisioned an AI-powered tool not only creating new clothing designs but also inspiring designers looking for fresh ideas.

Our project focuses on an AI-driven platform generating unique clothing visuals. By combining technologies like DragGAN and a Swapping Autoencoder, designers can access and personalize these designs to match their artistic vision.

In essence, we aim to offer designers a valuable resource that nurtures creativity, fosters experimentation, and speeds up design work. Our tool aims to be a catalyst for transformative innovation in the fashion industry by opening up limitless possibilities for designers.

Deliverables

Dataset

A folder with ((insert number)) images from ((insert number)) fashion shows, scraped from NowFashion. The images are cleaned and converted: we removed the pictures in which our algorithm could not detect a face, and resized all remaining pictures to 256x256 pixels, since the images need to be square with a side length that is a power of 2. Given our limited computing resources, we chose 256 pixels instead of the more convenient 1024 pixels.
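The square-crop part of this preprocessing can be sketched as follows. `center_crop_box` is a hypothetical helper, not code from our repository: it computes the largest centered square inside a photo, which would then be resized to 256x256 (e.g. with OpenCV or Pillow) before training.

```python
def center_crop_box(width, height):
    """Largest centered square inside a width x height image.

    Returns (left, top, right, bottom) pixel coordinates; the crop
    would then be resized to 256x256 before training.
    """
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A runway photo is typically portrait-oriented, e.g. 683x1024:
# the square keeps the full width and trims the top and bottom.
# center_crop_box(683, 1024) -> (0, 170, 683, 853)
```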

Software

Milestones and Project plan

Milestones

Milestone 1: DragGAN

  • Understand how DragGAN works
  • Find a dataset that would be appropriate for our utilization
  • Train StyleGAN

Milestone 2: Texture swap

  • Find a way to apply a texture change on an image
  • Train the Swapping Autoencoder for Deep Image Manipulation
  • Implement the Texture swap interface in our project


Milestone 3: User Interface

  • Change DragGAN's interface to make it more intuitive for our project

Milestone 4: Deliverables

  • Deliver the code on Github
  • Write the wiki page
  • Prepare the presentations

Project Plan

Week by week tasks
Week 3
  • Choose our project between the ones that were presented in class
  • Prepare for the first presentation
Weeks 4-5-6
  • Find some clothing datasets
  • Read papers, understand how DragGAN works
  • Define the project
  • Get familiar with the libraries we want to use (selenium, opencv...)
Weeks 7-8-9
  • Scrape the data from the NowFashion website
  • Resize images
  • Clean the dataset (remove the pictures that aren't in the same format as the others)
  • Get familiar with StyleGAN and its functioning
  • Find a textile dataset
Week 10
  • Define a precise plan for the following weeks
  • Prepare for the presentation
  • Write the Project Plan and the Milestones on the wiki page
  • Figure out how to implement the texture changes
  • Begin training StyleGAN on the dataset
Week 11
  • Set up a virtual machine to use Linux
  • Finish the training on the VM
  • Get DragGAN to work on the VM
Week 12
  • Work on the texture changing tool
  • Implement the texture changing interface
  • Start writing the description of the methods on the wiki page
Week 13
  • Adapt the DragGAN GUI to be more intuitive
  • Finalize and clean the code to then commit it to Github
  • Finish writing the wiki page

Methods

For our project, we build on the DragGAN project (https://github.com/XingangPan/DragGAN), into which we import a StyleGAN model trained on our dataset.

Scraping

We scraped the images from the NowFashion website, which hosts an extensive archive of fashion shows and photos of their clothes. We used the Selenium library, iterating over all the shows and saving the pages. The original format of the pictures wasn't adequate, so we converted them all. All the code is available on our GitHub.
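A minimal sketch of this kind of Selenium crawl is shown below. The CSS selector, the entry URL, and the `collect_image_urls` helper are illustrative assumptions, not the actual selectors of the NowFashion pages; the real script is on our GitHub.

```python
import urllib.parse

def absolutize(base, srcs):
    """Resolve possibly-relative image URLs against the page URL."""
    return [urllib.parse.urljoin(base, s) for s in srcs]

def collect_image_urls(driver, show_url, selector="img"):
    """Open one show page and return the image URLs found on it.

    `driver` is a Selenium WebDriver; the selector is a placeholder.
    """
    from selenium.webdriver.common.by import By
    driver.get(show_url)
    srcs = [img.get_attribute("src")
            for img in driver.find_elements(By.CSS_SELECTOR, selector)]
    return absolutize(show_url, [s for s in srcs if s])

if __name__ == "__main__":
    # Hypothetical entry point: needs a browser driver on PATH.
    from selenium import webdriver
    driver = webdriver.Chrome()
    urls = collect_image_urls(driver, "https://nowfashion.com/")
    driver.quit()
```

In the real pipeline this loop would run over every show page and download each resolved URL to disk before the conversion step.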

Copyright

The images used in this project were scraped from NowFashion illegally. We used these images purely for learning and study purposes. Had we wanted to publish a paper or go further, we would have had to either contact NowFashion and find a way to legally gain access to these images, or find another source of images.

DragGAN

DragGAN is a deep learning model designed for highly controllable image synthesis. Unlike prior methods that rely on 3D models or supervised learning, DragGAN lets users interactively manipulate images by clicking handle and target points and dragging the handles precisely to the targets within the image. By working in the GAN's feature space, this approach allows diverse and precise spatial edits across many object categories. The model supports efficient, real-time editing without additional networks, enabling interactive layout iteration. Evaluated across diverse datasets, DragGAN deforms images while respecting the underlying object structure, and outperforms existing methods in both point tracking and image editing.
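The interactive edit alternates small handle movements with point tracking until the handle reaches the target. The toy functions below sketch only the geometric side of that loop (one bounded step from handle toward target, repeated); they are a hypothetical illustration, not DragGAN's actual feature-space motion supervision, which instead optimizes the latent code so that GAN features around the handle shift toward the target.

```python
def drag_step(handle, target, step=2.0):
    """Move a 2-D `handle` point one bounded step toward `target`."""
    dx, dy = target[0] - handle[0], target[1] - handle[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:          # close enough: snap onto the target
        return target
    return (handle[0] + step * dx / dist, handle[1] + step * dy / dist)

def drag(handle, target, step=2.0, max_iters=1000):
    """Iterate drag_step until the handle reaches the target.

    In DragGAN each iteration would also re-localize the handle by a
    nearest-neighbour search in feature space (the point tracking step).
    """
    for _ in range(max_iters):
        if handle == target:
            return handle
        handle = drag_step(handle, target, step)
    return handle
```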

StyleGAN

StyleGAN is a type of generative adversarial network (GAN) developed by NVIDIA, primarily used for generating high-quality, realistic images; it is renowned for its ability to create lifelike human faces, animals, and objects. Of all the versions of StyleGAN that exist, we chose to work with StyleGAN2-ADA (https://github.com/NVlabs/stylegan2-ada-pytorch), which is based on StyleGAN2. It is less powerful than StyleGAN3, but given our resources a model based on StyleGAN2 is more appropriate. ADA stands for "adaptive discriminator augmentation", which means the model is better suited to smaller datasets (for example, StyleGAN2-ADA aims to match the original model's results with a 30k-image dataset instead of 100k).
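A training run of this kind can be launched with the repository's own scripts. The paths below are placeholders, and the flag values (single GPU, mirroring, snapshot interval) are assumptions to adapt to one's hardware, not our exact configuration:

```shell
# Pack the 256x256 images into the dataset format the repo expects.
python dataset_tool.py --source=./images_256 --dest=./fashion_256.zip

# Train StyleGAN2-ADA on a single GPU; --mirror=1 doubles the data
# with horizontal flips, which helps on a small scraped dataset.
python train.py --outdir=./training-runs --data=./fashion_256.zip \
    --gpus=1 --mirror=1 --snap=10
```

The resulting network snapshot (a `.pkl` file in the output directory) is what we then load into DragGAN.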

Github

FDH-FashionGAN