Use Stable Diffusion 1.5 capabilities to finetune custom AI models that align with your art direction.
Finetuning the SD 1.5 base model trains it further on a specific dataset. This approach produces models specialized in generating certain types of images or styles, adapting to unique creative needs.
Scenario offers both guided and unguided training flows, letting the training focus on a specific training class, from Art Style to Props or Avatars.
The training pipeline lets you either rely on automatic training settings or customize the training by choosing the Total Training Steps, the UNet Learning Rate, the Text Encoder Training Ratio, and the Text Encoder Learning Rate.
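The two modes above can be pictured as a small configuration helper. This is an illustrative sketch only: the parameter names mirror the options listed in this section, but the exact keys and default values used by Scenario's interface are assumptions here, not its real API.

```python
# Hypothetical defaults -- the keys mirror the parameters named above,
# but the actual values Scenario uses in automatic mode are not documented here.
AUTOMATIC_DEFAULTS = {
    "total_training_steps": 1500,        # overall optimization steps
    "unet_learning_rate": 1e-5,          # learning rate applied to the UNet
    "text_encoder_training_ratio": 0.5,  # fraction of steps that also train the text encoder
    "text_encoder_learning_rate": 5e-6,  # learning rate applied to the text encoder
}

def resolve_training_config(mode="automatic", overrides=None):
    """Return the automatic defaults, or apply custom overrides in custom mode."""
    config = dict(AUTOMATIC_DEFAULTS)
    if mode == "custom" and overrides:
        config.update(overrides)
    return config
```

In automatic mode the defaults are used as-is; in custom mode any of the four parameters can be overridden individually, which matches how the pipeline exposes them.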
Training with SD 1.5 involves a specific approach to dataset preparation and model training. The process begins with creating a varied dataset of sample images. These images should cover a wide range of subjects, since the model benefits from visual diversity; capping the dataset at around 30 images is recommended for training efficiency.

During training, you can adjust a variety of parameters to fine-tune the model. Choosing a relevant class that best represents the dataset is also crucial. A common starting point is the 'Art Style' category with 'Concept Art' as a subclass; this default setting is recommended for a first model.

Once training is complete, the model's effectiveness is tested through prompting: the trained model generates outputs based on what it has learned, allowing you to evaluate how well it has adapted. This combination of varied dataset preparation, parameter adjustment, and class selection is key to a successful training outcome on SD 1.5.
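The dataset-preparation step can be sketched as a small pre-flight check run locally before uploading. The function name and the set of accepted extensions are assumptions for illustration; only the ~30-image cap comes from the guidance above.

```python
from pathlib import Path

RECOMMENDED_MAX_IMAGES = 30  # cap recommended above for training efficiency
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp"}  # assumed common formats

def collect_dataset(folder):
    """Gather image files from a folder and flag oversized datasets."""
    images = sorted(
        p for p in Path(folder).iterdir()
        if p.suffix.lower() in ALLOWED_EXTENSIONS
    )
    if len(images) > RECOMMENDED_MAX_IMAGES:
        print(f"Warning: {len(images)} images found; consider trimming to "
              f"~{RECOMMENDED_MAX_IMAGES} for training efficiency.")
    return images
```

A check like this keeps non-image files out of the upload and surfaces the dataset-size recommendation early, before any training time is spent.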
By training AI models on their proprietary images, game studios can generate unique assets that are tailored to their specific game's style and theme. This bespoke approach ensures that the visual elements in their games, such as characters, environments, and objects, are distinctive and align with their creative vision.
For games with unique or niche themes, training AI models on studio-specific images allows for the creation of assets that are more aligned with the game's specific genre or theme. This is particularly useful for studios creating games set in unconventional or highly stylized worlds.
Custom-trained models can significantly speed up the asset production process. By automating the generation of certain types of visuals, studios can allocate more time and resources to other critical aspects of game development, leading to more efficient production timelines.
In this tutorial, you will learn the basics of training a custom style model on Scenario. We will show this process through the web app, the recommended interface for training custom models.
Minerva 1.0 is Scenario's first proprietary general finetune, carefully created with game designers in mind. We have found that specialized models tend to give the best outputs, and this particular model covers a range of elegant design styles. It offers strong coherence and fidelity, and is a great starting point for any user.
Dive deeper into understanding Scenario's Regularization and Class categories