Overview

Note: This section provides a high-level overview of the platform's architecture, main concepts, and user journeys. Use this as your starting point to understand the system and how to get started.

Welcome to Nebula Block, an innovative Montreal-based startup revolutionizing cloud computing and AI solutions. Tailored to meet the high demands of academic and commercial institutions, Nebula Block delivers secure, scalable, and cost-efficient computing environments alongside cutting-edge AI tools.

This documentation shows you how to create and manage powerful cloud computing resources and use state-of-the-art AI tools. You'll set up an account, add your credit card and billing details, and deploy products.

What are Inference Models?

Nebula Block provides serverless endpoints that host advanced generative AI models (e.g., large language models such as Meta's Llama). After signing up in our customer portal, you can access pre-configured models in a few clicks and use them directly in your projects, with almost no setup required! The Inference API is OpenAI compatible and available at https://inference.nebulablock.com/v1.
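
Because the API is OpenAI compatible, you can point the OpenAI Python SDK at this endpoint and start generating text right away. The sketch below is a minimal example: the model ID and the NEBULA_API_KEY environment variable are placeholders, so check the Model List and API Keys pages for the values that apply to your account.

```python
# Minimal sketch of calling the OpenAI-compatible Inference API.
# The base URL comes from this page; the model ID and the
# NEBULA_API_KEY environment variable are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.nebulablock.com/v1",
    api_key=os.environ["NEBULA_API_KEY"],  # assumed: your Nebula Block API key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model ID; see the Model List page
    messages=[{"role": "user", "content": "Summarize what a serverless inference endpoint is."}],
)
print(response.choices[0].message.content)
```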

What are GPU Instances?

GPU Instances are cloud computing resources that you can rent for your AI and compute-intensive projects. Select your desired hardware and configuration, and gain access to cutting-edge cloud computing resources in minutes. Instances can also be managed programmatically through the Platform API described in the API Reference, as sketched below.
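
The following sketch is purely illustrative: the base URL, endpoint paths, and request fields are assumptions rather than the documented schema, so consult the Create GPU Instance reference for the real request format.

```python
# Hypothetical sketch of creating a GPU instance through the Platform API.
# The base URL, endpoint paths, payload fields, and NEBULA_API_KEY variable
# are assumptions for illustration; see the API Reference for the real schema.
import os
import requests

API_BASE = "https://api.nebulablock.com"  # assumed base URL
headers = {"Authorization": f"Bearer {os.environ['NEBULA_API_KEY']}"}

# List available hardware products (assumed path).
products = requests.get(f"{API_BASE}/v1/products", headers=headers).json()

# Create an instance from a chosen product (assumed path and payload).
instance = requests.post(
    f"{API_BASE}/v1/instances",
    headers=headers,
    json={"product_id": products[0]["id"], "name": "my-training-box"},
).json()
print(instance)
```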

What is Object Storage?

Object Storage is a scalable, S3-compatible storage solution for storing and retrieving large amounts of unstructured data, such as datasets and model outputs.
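
Because the service is S3-compatible, standard S3 tooling such as boto3 works against it. The sketch below assumes placeholder values for the endpoint URL, bucket name, and credential variables; the Object Storage Get Started guide lists the actual endpoint and how to obtain access keys.

```python
# Minimal sketch of using the S3-compatible Object Storage with boto3.
# The endpoint URL, bucket name, and credential variables are illustrative
# assumptions; see the Object Storage guide for the real values.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.nebulablock.com",  # assumed endpoint
    aws_access_key_id=os.environ["NB_ACCESS_KEY"],
    aws_secret_access_key=os.environ["NB_SECRET_KEY"],
)

# Upload a dataset, then list everything under the same prefix.
s3.upload_file("train.parquet", "my-bucket", "datasets/train.parquet")
for obj in s3.list_objects_v2(Bucket="my-bucket", Prefix="datasets/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```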

Contact Us

To contact Nebula Block, see the Contact Us section.

See Also

Nebula Block
Inference Models
GPU Instances
Object Storage
Contact Us
Glossary
API Reference