• J2-2501 - Deep generative models for the beauty and fashion industry (DeepBeauty)
The Client: Javna agencija za raziskovalno dejavnost RS (Slovenian Research Agency; project J2-2501)
Project type: ARRS research project
Project duration: 2020 - 2023
  • Description

Advances in artificial intelligence (AI) and deep learning have contributed significantly to the development of recent deep generative models, which today are able to generate photo-realistic and visually convincing images of various objects and even complex scenes. Especially for application domains (e.g., face-related applications), where considerable amounts of training data are publicly available, impressive results with stunning visual quality have been achieved and presented in the literature. Furthermore, due to the nature of these generative models, it is not only possible to generate artificial images, but also to modify (or edit) certain appearance characteristics towards a desired goal.

Image generation and editing technology has important applications across a wide variety of industries, including autonomous driving, robotics, quality control, manufacturing, design, entertainment, animation, social media and many more. Especially appealing here are human-centered image generation and editing techniques (e.g., face and body generation and editing) for the beauty and fashion industries, which enable applications that allow users to virtually try on clothes, fashion accessories, makeup or specific hairstyles. Such virtual try-on technology not only has immense market potential, but may transform the way people shop for beauty items and apparel, while saving costs for retailers. To put the significance of such technology into perspective, it is worth noting that online apparel and accessories sales (excluding beauty items) are expected to reach 145 billion USD in 2023, up from 96 billion USD in 2016, in the US alone. While this growth is driven by the convenience of online shopping, concerns over how a particular fashion item will look on the consumers themselves (rather than on fashion models) still limit the growth of this sector to a considerable extent. Virtual try-on technology can thus enhance the shopping experience of consumers and drive new traffic towards e-commerce platforms for both fashion and beauty.

Despite the potential market value and the societal and environmental implications, widespread adoption of virtual try-on technology is still hindered by its current state. Existing products in this area are commonly based on 3D models, three-dimensional body shape modelling and computationally expensive computer graphics that require specialized hardware and dedicated imaging equipment, which consequently limits the deployment possibilities of the technology.


Within the fundamental research project Deep generative models for beauty and fashion (DeepBeauty), we will address this issue and conduct research on image generation and editing technology, with a particular focus on deep learning methodologies, which have recently been shown to be a highly convenient and effective tool for this task. Our goal is to develop novel (flexible and robust) mechanisms for image editing (without explicit 3D modelling) tailored towards the needs of the beauty and fashion industries, capable of altering certain parts of the input images in accordance with predefined target appearances (e.g., an example of a specific makeup, an image of a model wearing a fashion item, clothing or accessory). The main tangible result of the project will be novel and highly robust virtual try-on technology based on original approaches to face and body editing. The developed technology will be able to edit images in a photo-realistic manner, while preserving the overall visual appearance of the subjects in the images. Our research will focus on editing techniques applied in the latent space of Generative Adversarial Networks (GANs), which are the de facto standard today for solving generative computer vision problems.
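To illustrate the kind of latent-space editing mentioned above: one common approach is to move a latent code along a direction vector associated with an appearance attribute, and then decode the edited code with a pretrained generator. The following is a minimal sketch of this idea only; the function name `edit_latent`, the 512-dimensional latent size, and the random placeholder vectors are illustrative assumptions, not part of the project's actual method, and a real system would obtain the attribute direction from data and decode the result with a trained GAN generator.

```python
import numpy as np

def edit_latent(z, direction, alpha):
    """Shift a latent code along an attribute direction (hypothetical helper).

    z         -- latent vector from the generator's input space
    direction -- vector associated with an appearance attribute
                 (e.g., a specific makeup style); normalized here
    alpha     -- edit strength; alpha = 0 leaves the code unchanged
    """
    direction = direction / np.linalg.norm(direction)
    return z + alpha * direction

# Toy illustration with random vectors standing in for a real latent
# code and a learned attribute direction. In practice, the edited code
# would be passed through a pretrained generator, e.g. G(z_edited).
rng = np.random.default_rng(0)
z = rng.standard_normal(512)
d = rng.standard_normal(512)
z_edited = edit_latent(z, d, alpha=3.0)
```

The appeal of this formulation, and a reason latent-space editing is attractive for virtual try-on, is that a single scalar controls the intensity of the edit while the generator itself is left untouched.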

The feasibility of the project is ensured by the past performance of the research groups and the extensive experience of the project partners in deep generative models and semantic image editing.