# Model List

For models that have a corresponding blog post, we've linked it in the model's title.

## Multimodal

* Gemini-2.5-Flash-Preview-05-20: Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks.
* Gemini-2.5-Pro-Preview-06-05: Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks.
* Gemini-2.5-Pro-Preview-05-06: Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks.
* Gemini-2.0-Flash: Gemini 2.0 Flash delivers next-gen features and improved capabilities, including superior speed, built-in tool use, multimodal generation, and a 1M token context window.
* gpt-4o-mini: GPT-4o mini is OpenAI's newest model following GPT-4 Omni, supporting both text and image inputs with text outputs.

### Text Generation

* DeepSeek-R1-0528 (free): The latest state-of-the-art LLM from DeepSeek, excelling in reasoning, math, and coding. Community-shared access, daily limits, great for testing and exploration.
* DeepSeek-V3-0324 (free): A powerful 685B-parameter LLM released by DeepSeek. Community-shared access, daily limits, great for testing and exploration.
* DeepSeek-V3-0324: DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team.
* DeepSeek-R1 (free): A state-of-the-art, high-efficiency LLM excelling in reasoning, math, and coding. Community-shared access, daily limits, great for testing and exploration.
* DeepSeek-R1-0528: The latest state-of-the-art LLM from DeepSeek, excelling in reasoning, math, and coding.
* DeepSeek-R1: A state-of-the-art, high-efficiency LLM excelling in reasoning, math, and coding.
* [Llama3.3-70B](https://www.nebulablock.com/blog/67c4e6bc7bef4a2d8c123392): Advanced conversational AI with extensive knowledge.
* Qwen-QwQ-32B: A model from the Qwen series, designed for reasoning and problem-solving.

### Image Generation

**Text-to-Image**

* StableDiffusion XL 1.0: Generates high-quality images from text prompts with detailed control.
* Flux.1 schnell: Fast text-to-image generation with efficient processing and good quality results.
* Bytedance-Seedream-3.0: A top-tier bilingual text-to-image model rivaling GPT-4. Native 2K resolution, fast generation, accurate text, artistic layouts, and stunning detail.
* FLUX.1 \[Fill-dev]: A 12 billion parameter inpainting model for editing and extending images.

### Embedding Generation

* UAE-Large-V1: Good for general-purpose text embeddings with high accuracy.
* BGE Large EN v1.5: Optimized for English text embeddings with enhanced performance.
* M2-BERT-Retrieval-32k: An 80M checkpoint of M2-BERT, pretrained with sequence length 32768, and it has been fine-tuned for long-context retrieval.

### Vision Models

* Qwen2.5-VL-7B-Instruct: An advanced vision-language model designed to understand and process both visual and textual inputs.

### Video Models

Coming soon


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nebulablock.com/core-services/overview/model_list.md?ask=<question>
```

The question should be specific, self-contained, written in natural language, and URL-encoded.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
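As a sketch, the GET request above can be issued with Python's standard library. The endpoint and `ask` parameter come from this page; the helper names (`build_ask_url`, `ask_docs`) are illustrative, not part of any official client:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Endpoint documented on this page.
DOC_URL = "https://docs.nebulablock.com/core-services/overview/model_list.md"

def build_ask_url(question: str) -> str:
    """Attach the question as a URL-encoded `ask` query parameter."""
    return f"{DOC_URL}?{urlencode({'ask': question})}"

def ask_docs(question: str) -> str:
    """Perform the GET request and return the response body as text."""
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")

# Example (performs a live request):
# answer = ask_docs("Which embedding models are available?")
```

`urlencode` handles the encoding of spaces and punctuation in the question, so the URL stays valid for any natural-language query.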
