FashionGAN
Revision as of 14:29, 17 December 2023
Introduction
I teamed up with Romane; that was a mistake.
Motivation
The impetus behind embarking on this groundbreaking project stemmed from a collective desire to revolutionize the creative process for fashion designers. Recognizing the ever-evolving landscape of the fashion industry, our team felt compelled to innovate and address a common challenge: the need for novel inspiration and streamlined design techniques.
The foundation of our motivation lies in leveraging the vast potential of artificial intelligence alongside an extensive repository of fashion show data. The aim? To empower designers with a cutting-edge tool that transcends conventional boundaries. By harnessing AI capabilities, we envisioned a tool that not only generates fresh and inventive clothing designs but also acts as a wellspring of inspiration for designers seeking a new creative direction.
The core essence of our project centers on the creation of an AI-driven platform capable of producing unique clothing visuals. Through the amalgamation of innovative technologies like DragGAN and AutoEncoder, designers will not only access generated designs but also possess the ability to customize and refine these creations to match their artistic vision.
Ultimately, our aspiration is to provide fashion designers with an invaluable resource that nurtures creativity, encourages experimentation, and accelerates the design process. By enabling designers to explore limitless possibilities, our tool aspires to be a catalyst for transformative innovation within the fashion sphere.
Deliverables
Dataset
A folder with ((insert number)) images from ((insert number)) fashion shows, scraped from [www.nowfashion.com NowFashion]. The images are cleaned and converted: we removed the pictures in which our algorithm did not recognize a face, and resized the rest to 256x256 pixels, since StyleGAN requires square images whose side length is a power of 2. Given our limited resources, we chose 256 pixels rather than the more convenient 1024 pixels.
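The geometric constraints above (square images, power-of-two side length) can be sketched with two small helpers. This is a hypothetical illustration with our own function names; the face-detection filtering step is not shown:

```python
def is_valid_stylegan_size(side):
    """StyleGAN expects square images whose side length is a power of two."""
    return side > 0 and (side & (side - 1)) == 0

def center_crop_box(width, height):
    """Largest square center-crop box as (left, upper, right, lower),
    e.g. for PIL's Image.crop, before downscaling to 256x256."""
    side = min(width, height)
    left = (width - side) // 2
    upper = (height - side) // 2
    return (left, upper, left + side, upper + side)

if __name__ == "__main__":
    print(is_valid_stylegan_size(256))   # True: 256 = 2**8
    print(center_crop_box(683, 1024))    # (0, 170, 683, 853)
```

For a typical portrait-oriented runway photo, the crop keeps the full width and trims the top and bottom equally before resizing.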
Software
Milestones and Project plan
Milestones
Milestone 1: DragGAN
- Understand how DragGAN works
- Find a dataset appropriate for our use
- Train StyleGAN
Milestone 2: Texture swap
- Find a way to apply a texture change on an image
- Train the Swapping Autoencoder for Deep Image Manipulation
- Implement the Texture swap interface in our project
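Conceptually, the texture swap in Milestone 2 factors an image into a structure code and a texture code, then decodes image A's structure with image B's texture. A toy sketch of that idea follows; the `encode`/`decode` callables are stand-ins for the real trained Swapping Autoencoder networks:

```python
def swap_texture(encode, decode, img_a, img_b):
    # Keep image A's structure code, borrow image B's texture code.
    structure_a, _ = encode(img_a)
    _, texture_b = encode(img_b)
    return decode(structure_a, texture_b)

# Toy stand-ins: an "image" is just a (structure, texture) pair here.
encode = lambda img: img
decode = lambda structure, texture: (structure, texture)

print(swap_texture(encode, decode, ("coat-shape", "denim"), ("dress-shape", "tweed")))
# ('coat-shape', 'tweed')
```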
Milestone 3: User Interface
- Change DragGAN's interface to make it more intuitive for our project
Milestone 4: Deliverables
- Deliver the code on Github
- Write the wiki page
- Prepare the presentations
Project Plan
| Week | Tasks | Completion |
|---|---|---|
| Week 3 | | ✓ |
| Weeks 4-5-6 | | ✓ |
| Weeks 7-8-9 | | ✓ |
| Week 10 | | ✓ |
| Week 11 | | ✓ |
| Week 12 | | |
| Week 13 | | |
Methods
For our project, we build on the DragGAN project (https://github.com/XingangPan/DragGAN), into which we import a StyleGAN model trained on our dataset.
Scraping
Copyright
The images used in this project were scraped from [www.nowfashion.com NowFashion] illegally. We used these images purely for learning and study purposes. Had we wanted to publish a paper or take the project further, we would have had to either contact NowFashion and find a way to access these images legally, or find another source of images.
DragGAN
DragGAN is a pioneering deep learning model designed for unprecedented controllability in image synthesis. Unlike prior methods reliant on 3D models or supervised learning, DragGAN lets users interactively manipulate images by clicking handle and target points and moving them precisely within the image. By exploiting GAN feature spaces, this approach allows diverse and precise spatial attribute adjustments across various object categories. The model enables efficient, real-time editing without additional networks, supporting interactive sessions for layout iterations. Evaluated extensively across diverse datasets, DragGAN shows superior manipulation results, deforming images while respecting underlying object structures and outperforming existing methods in both point tracking and image editing.
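The point-tracking step mentioned above can be illustrated with a simplified nearest-neighbour search in feature space. This is a sketch of the idea only, not DragGAN's actual implementation (which operates on StyleGAN's intermediate feature maps); all names here are our own:

```python
def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def track_point(feature_map, template, center, radius):
    """Search a (2*radius+1)^2 window around `center` for the position whose
    feature vector is closest to `template` -- how a handle point can be
    re-located after each image update."""
    h, w = len(feature_map), len(feature_map[0])
    cy, cx = center
    best_dist, best_pos = float("inf"), center
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            d = l2(feature_map[y][x], template)
            if d < best_dist:
                best_dist, best_pos = d, (y, x)
    return best_pos

# Tiny 5x5 "feature map" with 1-D features for illustration.
fmap = [[[float(5 * y + x)] for x in range(5)] for y in range(5)]
print(track_point(fmap, template=[13.0], center=(2, 2), radius=2))  # (2, 3)
```

Restricting the search to a local window is what keeps tracking cheap enough for interactive editing.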
StyleGAN
StyleGAN is a type of generative adversarial network (GAN) developed by NVIDIA that is primarily used for generating high-quality, realistic images. It is renowned for its ability to create lifelike human faces, animals, and objects. Of all the StyleGAN versions that exist, we chose to work with StyleGAN2-ADA (https://github.com/NVlabs/stylegan2-ada-pytorch), which is based on StyleGAN2. It is less powerful than StyleGAN3, but given our resources a model based on StyleGAN2 is more appropriate. ADA stands for "adaptive discriminator augmentation", meaning the model is better suited to smaller datasets (for example, StyleGAN2-ADA aims to match the original model's results with a dataset of roughly 30k images instead of 100k).
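The "adaptive" part of ADA can be pictured as a feedback controller: an overfitting heuristic r_t, derived from the discriminator's outputs on real images, is compared to a target value, and the probability p of applying augmentations is nudged up or down accordingly. The following is a deliberately simplified sketch of that control loop; the constants and function name are illustrative, not NVIDIA's exact ones:

```python
def update_ada_p(p, r_t, target=0.6, step=0.05):
    """Nudge the augmentation probability p toward keeping the discriminator's
    overfitting heuristic r_t at `target`: more augmentation when the
    discriminator overfits (r_t above target), less when it does not."""
    p += step if r_t > target else -step
    return min(1.0, max(0.0, p))  # p is a probability, so clamp to [0, 1]

p = 0.0
for r_t in [0.8, 0.8, 0.7, 0.5, 0.9]:  # hypothetical measurements during training
    p = update_ada_p(p, r_t)
print(round(p, 2))  # 0.15
```

Because p adapts to the measured overfitting rather than being fixed in advance, the same training recipe works across dataset sizes, which is exactly why ADA helps with our comparatively small scraped dataset.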