Textile and fabric manipulation is an important area of robotics research, with applications both in industry and in the home. In this project we will apply novel deep-learning and sim-to-real transfer learning methods to the real-world problem of textile and fabric manipulation and inspection. At UL FRI we will develop a vision-based system for segmentation, characterization and inspection of the manipulated textiles/fabrics. It will be based on robust deep-learning-based multimodal segmentation and detection of keypoints relevant for grasping, as well as on unsupervised learning for defect detection. To demonstrate these technological advances, we will implement a bimanual robot cell for textile and fabric logistics at TRL 4 that will detect, flatten, inspect and fold textiles and fabrics into desired goal states. The demonstration will cover the major aspects of the project: perception/inspection and handling/manipulation of such deformable objects.
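As a purely illustrative sketch of the unsupervised defect-detection idea mentioned above (not the project's actual design), the snippet below shows a common reconstruction-based approach: a convolutional autoencoder is trained on defect-free fabric patches only, and at inspection time patches with high reconstruction error are flagged as potential defects. The architecture, patch size, class names and threshold strategy are all assumptions made for this example.

```python
# Illustrative sketch: unsupervised fabric-defect detection via a convolutional
# autoencoder trained on defect-free patches; high reconstruction error at test
# time indicates a likely defect. All sizes and names here are assumptions.
import torch
import torch.nn as nn


class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3x64x64 RGB patch -> compact 128x8x8 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),    # -> 32x32x32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # -> 64x16x16
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # -> 128x8x8
        )
        # Decoder: mirrors the encoder and reconstructs the input patch
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def anomaly_scores(model: PatchAutoencoder, patches: torch.Tensor) -> torch.Tensor:
    """Per-patch mean squared reconstruction error; higher = more anomalous."""
    model.eval()
    with torch.no_grad():
        recon = model(patches)
        return ((patches - recon) ** 2).mean(dim=(1, 2, 3))


if __name__ == "__main__":
    model = PatchAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for a loader of defect-free fabric patches (batch of 8 RGB 64x64).
    clean_batch = torch.rand(8, 3, 64, 64)
    for _ in range(5):  # a few illustrative training steps
        opt.zero_grad()
        loss = loss_fn(model(clean_batch), clean_batch)
        loss.backward()
        opt.step()

    # At inspection time, patches whose score exceeds a threshold calibrated on
    # a held-out defect-free set would be flagged for closer inspection.
    scores = anomaly_scores(model, torch.rand(4, 3, 64, 64))
    print(scores)
```

In practice such a detector would be one component of the inspection pipeline; the threshold would typically be calibrated on held-out defect-free samples so that the false-alarm rate stays acceptable.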