DragGAN is Finally Open Source
The much-awaited DragGAN code is now officially out. It is built on StyleGAN3, with part of the code borrowed from StyleGAN-Human, and the code related to the DragGAN algorithm is licensed under CC-BY-NC.
The code is available in the official GitHub repository.
DragGAN is an image editing tool that lets you simply drag elements of a picture to change their appearance. It was recently released by a group of researchers from the Max Planck Institute for Informatics, MIT CSAIL, and Google as an interactive approach to intuitive point-based image editing.
DragGAN operates by optimizing the latent code of a pretrained GAN generator. Given one or more handle points and target points on an image, a motion-supervision loss nudges the generator's intermediate feature maps so that the content at each handle point moves a small step toward its target, while a point-tracking step relocates the handle on the updated feature maps after each optimization iteration, and the process repeats until the handles reach their targets.
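To make the idea concrete, here is a minimal, hedged PyTorch sketch of a single "drag" optimization in that spirit. The `ToyGenerator`, `drag_step`, point coordinates, and loss weights are illustrative placeholders (the official repository uses StyleGAN3 feature maps and also performs point tracking, which is omitted here for brevity); this is not the released DragGAN code.

```python
# Illustrative sketch of a DragGAN-style drag step: optimize the latent code
# so that features at a handle point move toward a target point.
import torch
import torch.nn.functional as F

class ToyGenerator(torch.nn.Module):
    """Placeholder generator mapping a latent vector to (image, feature map)."""
    def __init__(self, latent_dim=64, feat_ch=32, size=64):
        super().__init__()
        self.fc = torch.nn.Linear(latent_dim, feat_ch * size * size)
        self.to_img = torch.nn.Conv2d(feat_ch, 3, 1)
        self.feat_ch, self.size = feat_ch, size

    def forward(self, w):
        feat = self.fc(w).view(1, self.feat_ch, self.size, self.size)
        return self.to_img(feat), feat  # image and intermediate features

def drag_step(gen, w, handle, target, radius=3.0, lr=2e-3, steps=50):
    """Nudge the feature content at `handle` toward `target` by optimizing w."""
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        _, feat = gen(w)
        # Unit direction from handle to target in pixel coordinates.
        d = F.normalize(target - handle, dim=0)
        # Motion supervision: features a small step toward the target should
        # match the (detached) features currently at the handle point.
        src = feat[..., int(handle[1]), int(handle[0])].detach()
        moved = handle + d * radius
        dst = feat[..., int(moved[1]), int(moved[0])]
        loss = F.l1_loss(dst, src)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

gen = ToyGenerator()
w = torch.randn(1, 64)
handle = torch.tensor([20.0, 30.0])   # point the user drags
target = torch.tensor([40.0, 30.0])   # where it should end up
w_edited = drag_step(gen, w, handle, target)
```

In the full method, this inner loop alternates with point tracking (a nearest-neighbor search in feature space that updates the handle position), so repeated edits require no per-image retraining.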
Using Drag Your GAN, you can resize a car, change a facial expression, or rotate an object in a photo as if you were manipulating a 3D model.
DragGAN is positioned to give tough competition to Photoshop, which is now integrated with Adobe Firefly, because it involves far fewer technicalities and is more user-friendly.
Compared with diffusion models, GANs offer more than just pretty pictures. While there are obvious reasons why diffusion models are gaining popularity for image synthesis, generative adversarial networks (GANs) enjoyed a similar surge of interest around 2017, three years after Ian Goodfellow proposed them in 2014.




