Overview
Note: This section provides a high-level overview of the platform's architecture, main concepts, and user journeys. Use this as your starting point to understand the system and how to get started.
Welcome to Nebula Block, a Montreal-based startup delivering cloud computing and AI solutions. Designed for the demands of academic and commercial institutions, Nebula Block provides secure, scalable, and cost-efficient computing environments alongside modern AI tools.
This documentation shows you how to create and manage cloud computing resources and use state-of-the-art AI tools. You'll learn how to create and manage an account, add credit card and billing details, and deploy products.
What are Inference Models?
Inference Models are serverless endpoints that host advanced generative AI models (e.g. large language models such as Meta's Llama). After signing up on our customer portal, you can access pre-configured models in a few clicks and use them directly in your projects with almost no setup. The Inference API is OpenAI-compatible and available at https://inference.nebulablock.com/v1.
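Because the Inference API is OpenAI-compatible, any OpenAI-style HTTP client can talk to it. As a minimal sketch, here is how a chat completion request could be built against the endpoint above using only the Python standard library. The model id and API key placeholder are assumptions; take the exact values from your customer portal.

```python
import json
import urllib.request

API_BASE = "https://inference.nebulablock.com/v1"
API_KEY = "YOUR_NEBULA_BLOCK_API_KEY"  # placeholder; get a real key from the portal

def build_chat_request(model, messages):
    """Build an OpenAI-compatible chat completion request (not yet sent)."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request(
    "meta-llama/Llama-3.1-8B-Instruct",  # assumed model id; check the portal for exact names
    [{"role": "user", "content": "Hello!"}],
)
# To actually send the request (requires a valid API key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same endpoint should also work with the official `openai` Python package by setting its `base_url` to the URL above.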
What are GPU Instances?
GPU instances are cloud computing resources that you can rent for your AI and compute-intensive projects. Select your desired hardware and configuration, and gain access to cutting-edge cloud computing resources in minutes.
What is Object Storage?
Object Storage is a scalable, S3-compatible storage solution for storing and retrieving large amounts of unstructured data, such as datasets and model outputs.
Contact Us
To contact Nebula Block, see the Contact Us section.