How to use Textures Diffusion - EN


Last updated 1 year ago

Presentation

Textures Diffusion is a plugin for Blender that allows you to colorize and texture a 3D model using images generated by Stable Diffusion.

Prerequisites

Supported Blender Versions:

  • 3.3

⚠️ Stable Diffusion and ControlNet are not directly integrated into the plugin. It is recommended to install and become familiar with them before using the plugin.

To get started with Stable Diffusion, you can refer to the page "Use Stable diffusion and ControlNet ‐ EN".

Installation

Go to Edit > Preferences > Add-ons > Install and select the Zip file.


🧭 Workflow Overview

1. Create a projection scene

The first step is to create a scene in which the model is duplicated and shown from multiple angles. This produces a reference image to guide the image generation process.

πŸ‘† Prerequisite: The object must be unwrapped and fit within a single UDIM tile.

Select the object and click on "Create new projection scene."

To guide this image generation, one can choose to use Depth, Normal, or even Beauty renders of the scene.
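To make the Depth guide concrete, here is a minimal sketch (not the plugin's actual code) of how a raw depth buffer can be normalized into the kind of grayscale image ControlNet's depth conditioning typically expects, where nearer surfaces are brighter:

```python
def depth_to_controlnet(depth, near_is_white=True):
    """Normalize a raw depth buffer to 0-255 grayscale values.

    ControlNet depth maps conventionally show near surfaces as bright,
    so the range is inverted by default. `depth` is a flat list of
    camera-space distances (a hypothetical input format for this sketch).
    """
    lo, hi = min(depth), max(depth)
    span = hi - lo or 1.0                # avoid division by zero on flat depth
    out = []
    for d in depth:
        t = (d - lo) / span              # 0.0 at nearest .. 1.0 at farthest
        if near_is_white:
            t = 1.0 - t                  # invert so near -> white
        out.append(round(t * 255))
    return out

# Example: three samples at increasing distance from the camera
print(depth_to_controlnet([1.0, 2.0, 3.0]))  # [255, 128, 0]
```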

πŸ’‘ Tips:

  • The object can have a Subdivision Surface modifier.

  • Try to position copies of the model so that the camera sees as much surface as possible (front, side, back, etc.).

  • If necessary, you can create more than 3 copies of the model.

  • For the viewpoints of the most important areas, increase the mesh size to achieve better resolution.

  • The different projections will overlap with each other. The first mesh in the list represents the projection that will be on top. The following ones will be below and therefore less visible.

  • There is also an option to choose the render size: approximately 512 px is recommended for Stable Diffusion 1.5, and 1024 px for SDXL.

2. Exporting Maps for ControlNet

Once the scene is ready, the "Render ref images" button allows you to render the images for ControlNet.

All the images are saved in a folder created next to the .blend file.

πŸ’‘ The Beauty map generates an image of the object in neutral lighting. You can add a "Color Texture" to the material for an "Img to Img" generation.

3. Bake Mask Textures and Projected UVs

This step creates mask textures and camera-projected UVs for each mesh, allowing the different projections to be assembled.

The baking of the masks produces:

  • Camera Occlusion mask: the part of the model's surface visible from the camera.

  • Facing mask: the part of the model's surface facing the camera (perpendicular to the camera's view axis).

The "Create Projected UVs" button generates new UVs projected from the camera view onto each mesh.

If the model is symmetrical along the X-axis, you can enable the "Symmetry X" option. This way, the mask textures and projected UVs will be generated for each side.
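For intuition, a facing mask is usually built from the dot product between the surface normal and the direction toward the camera, so head-on surfaces score high and grazing ones fade out. A small hypothetical sketch (the plugin's exact falloff curve may differ):

```python
import math

def facing_weight(normal, view_dir, falloff=2.0):
    """Facing-mask weight for one surface point.

    `normal` and `view_dir` are unit 3D vectors; `view_dir` points from
    the surface toward the camera. A surface facing the camera head-on
    scores 1.0; a grazing or back-facing surface falls to 0.0. The
    `falloff` exponent (an assumption, not the plugin's setting) sharpens
    the mask.
    """
    d = sum(n * v for n, v in zip(normal, view_dir))
    return max(0.0, d) ** falloff

# Head-on surface: normal points straight at the camera
print(facing_weight((0, 0, 1), (0, 0, 1)))   # 1.0
# 60-degree slope: cos(60) = 0.5, squared -> 0.25
print(round(facing_weight((0, math.sin(math.radians(60)), 0.5), (0, 0, 1)), 2))  # 0.25
```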

4. Generate images using Stable Diffusion and ControlNet

To generate images with Stable Diffusion, numerous techniques exist, and new ones regularly emerge. Therefore, I encourage you to conduct your own research.

The idea is to generate an image guided by a text prompt and by one or more reference images read by ControlNet, and then upscale the result.

To guide you, you can refer to the page "Use Stable diffusion and ControlNet ‐ EN".

πŸ’Ž Upscaling is particularly useful, as it can produce very high-resolution textures.

πŸ’‘ Keep in mind that the more legible the object's silhouette is, the more relevant the image Stable Diffusion can generate. The model's appearance has a direct impact on the result.

5. Create a shading scene and adjust the projection

Once we have generated an image that we like, we can enter its path in the "SD image gen" field and click on "Create new shading scene".

This new scene allows you to fine-tune and adjust the assembly of the new texture, and then bake the final texture.

This scene consists of 3 collections:

  • In "Final assembly", there is an assembly of all the viewpoints generated by Stable Diffusion.

  • In "Projection tweaks", you can precisely reposition the projection in case the generation didn't exactly match the mesh's shape.

  • And in "Breakdown", you'll be able to adjust the various masks and even create a custom mask.

The Projection Tweaks Collection

In this collection, we have a copy of the projection scene, this time with the texture projected through the camera using a UV Project modifier. This allows the geometry to be deformed manually to match the projection. A new UV projection is then created, and these updated UVs are transferred to the main mesh in the "Final assembly" collection.

When selecting one of the meshes, an "Edit tweaks" button appears. It enables switching to Edit Mode and Texture View. You can move the geometry to align it with the generated image.

⚠️ In the end, the final object itself is not deformed; instead, the projection UVs are adjusted so that the texture aligns perfectly with the object's shapes.

Once the adjustments are made, you can click on "Transfer tweaks". This creates new projected UVs and transfers them to the final mesh.
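For intuition, camera-projected UVs boil down to a pinhole projection. The following simplified sketch assumes a hypothetical axis-aligned camera looking down -Z; Blender's UV Project modifier performs the same mapping using the full camera matrix:

```python
def project_to_camera_uv(point, cam_pos, focal=1.0):
    """Project a world-space point into a camera-facing UV coordinate.

    Minimal pinhole sketch assuming a camera at `cam_pos` looking down
    -Z with no rotation (an illustration, not the modifier's full math).
    Returns (u, v) with the view centre at (0.5, 0.5).
    """
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = cam_pos[2] - point[2]            # distance in front of the camera
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (0.5 + focal * x / z, 0.5 + focal * y / z)

# A point directly on the camera axis lands at the UV centre
print(project_to_camera_uv((0, 0, 0), (0, 0, 5)))  # (0.5, 0.5)
```

Moving a vertex in "Edit tweaks" changes the `point` fed into this mapping, which is why nudging geometry shifts where the generated image lands on the surface.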

The Breakdown Collection

In this collection, you will find all the viewpoints created in the projection scene.

Each of these objects has a shader with a "proj settings" node group in which you can:

  • Enable/Disable symmetry

  • Adjust the symmetry fade

  • Modify the facing mask

When selecting one of the objects, the "Paint custom mask" button appears, allowing you to directly enter Texture Paint mode to paint what you want to erase or keep in this projection.

The Final Assembly Collection

All the settings in the Breakdown collection are synchronized with the final mesh. In its shader, the same adjustment groups are instantiated, including the custom mask.

Finally, you can fill in the remaining holes by painting vertex colors, which will sit "underneath" all the projections.
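Conceptually, the final assembly works like ordered alpha compositing: each projection claims whatever the projections above it left uncovered, and texels no projection covers fall back to the painted vertex color. A minimal single-channel sketch (an illustration of the idea, not the shader's exact node math):

```python
def assemble_color(layers, vertex_color):
    """Combine the per-projection values for one texel (grayscale here).

    `layers` is an ordered list of (color, mask) pairs, first entry on
    top, matching the mesh order in the projection scene; `mask` is a
    0..1 opacity (occlusion * facing * custom mask). Texels that no
    projection covers fall back to `vertex_color`.
    """
    color, coverage = 0.0, 0.0
    for layer_color, mask in layers:
        weight = mask * (1.0 - coverage)   # only what upper layers left over
        color += layer_color * weight
        coverage += weight
    return color + vertex_color * (1.0 - coverage)

# The top projection fully covers the texel, so lower layers never show
print(assemble_color([(0.8, 1.0), (0.2, 1.0)], vertex_color=0.5))  # 0.8
# No projection covers the texel -> the vertex color shows through
print(assemble_color([(0.8, 0.0)], vertex_color=0.5))              # 0.5
```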

6. Bake the final texture

Once the settings are finalized, you can choose the image size and then bake the entire set.

By pressing the "Bake final texture" button, a new collection is created in which you will find the model and a material that has the final texture.

πŸ’‘ In the image's alpha channel, a mask of the areas covered by the projections is saved. If you repeat the process, it can be used to combine multiple bakes. Thanks to this, you can texture a complex model in several steps.
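As an illustration of how the alpha coverage mask lets bakes be combined, here is a hypothetical per-texel merge: the new bake wins where it covered the surface, the previous bake shows elsewhere, and the merged alpha keeps the union of both masks so the process can be repeated:

```python
def combine_bakes(new_rgba, previous_rgba):
    """Merge a new bake over a previous one, per texel.

    Each texel is (r, g, b, a) where alpha stores projection coverage,
    as described above (a sketch of the idea, not the plugin's code).
    """
    r2, g2, b2, a2 = new_rgba
    r1, g1, b1, a1 = previous_rgba
    keep = 1.0 - a2                      # how much the previous bake shows
    return (
        r2 * a2 + r1 * keep,
        g2 * a2 + g1 * keep,
        b2 * a2 + b1 * keep,
        a2 + a1 * keep,                  # union of the two coverage masks
    )

# New bake covers this texel fully: its color replaces the old one
print(combine_bakes((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))  # (1.0, 0.0, 0.0, 1.0)
```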

To learn more, please read these pages carefully:

  • πŸ—Ž Use Stable diffusion and ControlNet ‐ EN
  • Blender manual - Add-ons
  • ⚠️ Disclaimer ‐ EN