Image Generation
Use these models to generate whatever images you can (or can't!) imagine.
Here's a table of the available models and the parameters they support.
| Parameter | StableDiffusion XL 1.0 | Flux.1 schnell |
| --- | :---: | :---: |
| **prompt** | ✓ | ✓ |
| negative_prompt | ✓ | |
| **width** | ✓ | ✓ |
| **height** | ✓ | ✓ |
| **num_steps** | ✓ | ✓ |
| **guidance_scale** | ✓ | ✓ |
| seed | | ✓ |
Both models are excellent choices for generating images. They have their own styles and tendencies, so give them both a try to see which you like best!
1. Go to the website.
2. Log in, and ensure you have enough credits.
3. Click on the "Serverless Endpoints" tab and select your model.
4. Choose your parameters, enter your prompt, and just press Enter!
Bolded parameters are supported across all models, while unbolded parameters are specific to certain models:
- **Prompt**: The prompt to guide the model's generation.
- Negative Prompt: A prompt to guide the model away from generating certain content.
- **Width, Height**: The resolution of the output image.
- **Steps**: The number of inference steps the model will take. More steps typically lead to better quality but cost more.
- **Guidance Scale**: A high value encourages the model to adhere closely to the prompt, but may result in lower image quality.
- Seed: A number to seed the generation. Using the same value ensures reproducibility.
This option lets you use our API endpoint directly in your projects. Below are some code snippets to get you started!
To specify the desired model, use this mapping for the `model_name`:
- StableDiffusion XL 1.0: `stabilityai/stable-diffusion-xl-base-1.0`
- Flux.1 schnell: `black-forest-labs/FLUX.1-schnell`
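As a starting point, here's a minimal Python sketch of a request. The endpoint URL and bearer-token header below are placeholders for illustration, not the actual values; substitute the real endpoint and authentication details from the API reference. The parameter names follow the table above.

```python
import os

import requests

# Placeholder endpoint: substitute the actual URL from the API reference.
API_URL = "https://api.example.com/v1/image/generation"
API_KEY = os.environ["API_KEY"]  # see the notes on authentication below

# Parameters follow the table above; `model_name` uses the mapping shown.
payload = {
    "model_name": "black-forest-labs/FLUX.1-schnell",
    "prompt": "a watercolor lighthouse at dawn",
    "width": 1024,
    "height": 1024,
    "num_steps": 4,
    "guidance_scale": 3.5,
    "seed": 42,  # optional: fix the seed for reproducible output
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
    json=payload,
    timeout=60,
)
response.raise_for_status()
result = response.json()
```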
A successful response body will return the image in this format:
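(The shape below is an illustrative sketch based on the `b64_json` field mentioned in the note that follows; the exact schema is in the API reference.)

```json
{
  "images": [
    {
      "b64_json": "iVBORw0KGgoAAAANSUhEUg..."
    }
  ]
}
```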
NOTE: You'll need to base64-decode the result to view the image. Just pass the `b64_json` value to the decoder of your choice.
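For instance, with Python's standard library (the response layout here matches the sketch above and is an assumption):

```python
import base64
import json

# A tiny dummy response in the assumed layout; in practice, use the
# JSON string returned by the endpoint.
response_body = '{"images": [{"b64_json": "aGVsbG8="}]}'

data = json.loads(response_body)
image_bytes = base64.b64decode(data["images"][0]["b64_json"])

# Write the decoded bytes to disk; a real response would contain PNG data.
with open("output.png", "wb") as f:
    f.write(image_bytes)
```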
NOTE: Don't forget to use your API key. See the documentation for more details on authentication.
Feel free to explore, and refer to the API documentation for more details.