How to use Textures Diffusion - EN
Textures Diffusion is a plugin for Blender that allows you to colorize and texture a 3D model using images generated by Stable Diffusion.
Supported Blender Versions: 3.3
⚠️ Stable Diffusion and ControlNet are not directly integrated into the plugin. It is recommended to install them and become familiar with them before using the plugin.
To get started with Stable Diffusion, you can refer to this page:
👉 Use Stable diffusion and ControlNet - EN
Go to Edit > Preferences > Add-ons > Install and select the Zip file.
To learn more: Blender manual - Add-ons
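If you prefer scripting, the same installation can be done from Blender's Python console. This is a minimal sketch only; the zip path and the add-on's module name are placeholders, not values confirmed by the plugin:

```python
import bpy

# Install the add-on from its zip and enable it (path and module name are placeholders)
bpy.ops.preferences.addon_install(filepath="/path/to/textures_diffusion.zip")
bpy.ops.preferences.addon_enable(module="textures_diffusion")  # assumed module name
bpy.ops.wm.save_userpref()  # persist the preference change
```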
Please read this page carefully: Disclaimer
The first step is to create a scene containing several copies of the model seen from different angles. This produces a reference image that will guide the image generation process.
👉 Prerequisite: The object must be unwrapped and in a single UDIM.
Select the object and click on "Create new projection scene."
To guide this image generation, one can choose to use Depth, Normal, or even Beauty renders of the scene.
💡 Tips:
The object can have a Subdivision Surface modifier.
Try to position copies of the model so that the camera sees as much surface as possible (front, side, back, etc.).
If necessary, you can create more than 3 copies of the model.
For the viewpoints of the most important areas, increase the mesh size to achieve better resolution.
The different projections overlap one another. The first mesh in the list is the projection that ends up on top; the following ones sit below and are therefore less visible.
There is also an option to choose the render size: approximately 512 px is recommended for Stable Diffusion 1.5, and 1024 px for SDXL.
Once the scene is ready, the "Render ref images" button allows you to render the images for ControlNet.
All the images are saved in a folder created next to the .blend file.
💡 The Beauty map generates an image of the object in neutral lighting. You can add a "Color Texture" to the material for an "Img to Img" generation.
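For orientation, here is a minimal sketch of what this step amounts to in Blender's Python API: enabling the Depth and Normal passes and rendering at the recommended size. The plugin automates all of this, and the output path below is a placeholder:

```python
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Passes commonly used as ControlNet guides
view_layer.use_pass_z = True        # Depth
view_layer.use_pass_normal = True   # Normal

# ~512 px for Stable Diffusion 1.5, 1024 px for SDXL
scene.render.resolution_x = 512
scene.render.resolution_y = 512

# '//' means "relative to the .blend file", i.e. a folder next to it
scene.render.filepath = "//ref_images/beauty"
bpy.ops.render.render(write_still=True)
```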
This step creates Texture Masks and projected UVs from the camera for each mesh to allow the assembly of different projections.
The baking of the masks produces:
Camera Occlusion mask: the model's surface visible from the camera.
Facing mask: the model's surface facing the camera head-on (perpendicular to the camera's axis); see the sketch below.
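To make the Facing mask concrete, here is a minimal sketch of the underlying math, not the plugin's actual code: the facing weight of each polygon is the dot product between its normal and the direction toward the camera.

```python
import bpy

# Minimal sketch: compute a per-polygon "facing" weight for the active object
# relative to the scene camera. Assumes uniform object scale for the normal transform.
obj = bpy.context.active_object
cam = bpy.context.scene.camera
cam_pos = cam.matrix_world.translation

for poly in obj.data.polygons:
    world_center = obj.matrix_world @ poly.center
    world_normal = (obj.matrix_world.to_3x3() @ poly.normal).normalized()
    view_dir = (cam_pos - world_center).normalized()
    facing = max(world_normal.dot(view_dir), 0.0)  # 1.0 = head-on, 0.0 = grazing or back-facing
    print(poly.index, round(facing, 3))
```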
The "Create Projected UVs" button generates new UVs projected from the camera view onto each mesh.
If the model is symmetrical along the X-axis, you can enable the "Symmetry X" option. This way, the Texture Masks and projected UVs will be generated for each side.
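Under the hood, "Create Projected UVs" corresponds to Blender's standard UV projection from the camera view. A minimal sketch, assuming a placeholder UV layer name (the operator also expects a 3D Viewport looking through the camera):

```python
import bpy

# Sketch of projecting UVs from the camera view; "ProjectedUV" is a placeholder name
obj = bpy.context.active_object
uv = obj.data.uv_layers.new(name="ProjectedUV")
obj.data.uv_layers.active = uv

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
# Requires a 3D Viewport context aligned to the camera view
bpy.ops.uv.project_from_view(camera_bounds=True, correct_aspect=True, scale_to_bounds=False)
bpy.ops.object.mode_set(mode='OBJECT')
```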
To generate images with Stable Diffusion, numerous techniques exist, and new ones regularly emerge. Therefore, I encourage you to conduct your own research.
The idea is to generate an image from a prompt together with one (or more) reference images read by ControlNet, and then upscale the result.
To guide you, you can refer to this page:
👉 Use Stable diffusion and ControlNet - EN
👉 Upscaling is a very interesting tool, as it can produce textures of very high resolution.
💡 Keep in mind that the more legible the object's silhouette is, the more relevant the image Stable Diffusion can generate. The model's appearance has a direct impact on the result.
Once we have generated an image that we like, we can enter its path in the "SD image gen" field and click on "Create new shading scene".
This new scene allows you to fine-tune the assembly of the new texture and then "Bake" the final texture.
This scene consists of 3 collections:
In "Final assembly", there is an assembly of all the viewpoints generated by Stable Diffusion.
In "Projection tweaks", you can precisely reposition the projection in case the generation didn't exactly match the mesh's shape.
And in "Breakdown", you'll be able to adjust the various masks and even create a custom mask.
In this collection, we have a copy of the projection scene, this time with the texture projected through the camera using a UV Project modifier. This allows you to manually deform the geometry to match the projection. Then a new UV projection is created, and these updated UVs are transferred to the main mesh in the "Final assembly" collection.
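As a sketch of what such a setup looks like in Blender's API (object and UV layer names are placeholders, not the plugin's actual naming):

```python
import bpy

# Project a texture through the scene camera onto the active mesh
obj = bpy.context.active_object
cam = bpy.context.scene.camera

mod = obj.modifiers.new(name="ProjFromCam", type='UV_PROJECT')
mod.uv_layer = "ProjectedUV"    # the UV layer that receives the projection (placeholder name)
mod.projectors[0].object = cam  # project through the scene camera
```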
When selecting one of the meshes, an "Edit tweaks" button appears. It enables switching to Edit Mode and Texture View. You can move the geometry to align it with the generated image.
⚠️ In the end, the final object is not deformed; instead, the projection UVs are adjusted so that the texture aligns perfectly with the object's shapes.
Once the adjustments are made, you can click on "Transfer tweaks". This creates new projected UVs and transfers them to the final mesh.
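A minimal sketch of this kind of UV transfer, using Blender's Data Transfer modifier; the object names are placeholders, and the plugin may implement the transfer differently:

```python
import bpy

# Copy adjusted projected UVs from a tweak mesh back onto the final mesh
final = bpy.data.objects["FinalMesh"]   # placeholder names
tweak = bpy.data.objects["TweakMesh"]

mod = final.modifiers.new(name="TransferUVs", type='DATA_TRANSFER')
mod.object = tweak
mod.use_loop_data = True
mod.data_types_loops = {'UV'}
mod.loop_mapping = 'TOPOLOGY'  # both meshes are copies, so topology matches

bpy.context.view_layer.objects.active = final
bpy.ops.object.modifier_apply(modifier=mod.name)  # make the transfer permanent
```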
In this collection, you will find all the viewpoints created in the projection scene.
Each of these objects has a shader with a "proj settings" node group in which you can:
Enable/Disable symmetry
Adjust the symmetry fade
Modify the facing mask
When selecting one of the objects, the "Paint custom mask" button appears, allowing you to directly enter Texture Paint mode to paint what you want to erase or keep in this projection.
All the settings in the Breakdown collection are synchronized with the final mesh: the same adjustment node groups are instantiated in its shader, including the custom mask.
Finally, you can fill in the remaining holes by painting the vertex colors that will be "underneath" all the projections.
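In Blender 3.x, these vertex colors live in a color attribute on the mesh. A minimal sketch of creating such a base layer (the attribute name is a placeholder):

```python
import bpy

obj = bpy.context.active_object

# A corner-domain byte color attribute, paintable in Vertex Paint mode
obj.data.color_attributes.new(name="BaseFill", type='BYTE_COLOR', domain='CORNER')

# Switch to Vertex Paint to fill the areas no projection covers
bpy.ops.object.mode_set(mode='VERTEX_PAINT')
```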
Once the settings are finalized, you can choose the image size and then bake the entire set.
By pressing the "Bake final texture" button, a new collection is created in which you will find the model and a material that has the final texture.
💡 In the alpha channel of the image, a mask of the areas covered by the projections is saved. If you repeat the process, this mask can be used to combine multiple bakes, so you can texture a complex model in several steps.
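As an illustration of how such an alpha coverage mask can combine two bake passes outside Blender, here is a minimal sketch using Pillow; the file names are placeholders:

```python
from PIL import Image

# Combine two bakes: where the second pass's alpha (coverage mask) is opaque,
# take the new pass; elsewhere, keep the earlier bake.
base = Image.open("bake_pass_1.png").convert("RGBA")      # placeholder file names
new_pass = Image.open("bake_pass_2.png").convert("RGBA")

combined = Image.composite(new_pass, base, new_pass.getchannel("A"))
combined.save("bake_combined.png")
```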