Generate Embeddings
Generate Embeddings From Text.
Return the generated embeddings based on the given inputs.
POST
{API_URL}/embeddings
where API_URL = https://inference.nebulablock.com/v1. The body requires:
model: The model to use for generating embeddings.
input: A list of strings to generate embeddings from.
For authentication, see the Authentication section. For an example, see the Examples section.
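A minimal request sketch in Python follows, assuming the requests library, a Bearer-token Authorization header (check the Authentication section for the exact scheme), and an API key exported as NEBULA_API_KEY; the model id is a placeholder, not a confirmed model name.

```python
# Minimal sketch, not an official client. Assumes the `requests` library,
# a Bearer-token Authorization header, and an API key in NEBULA_API_KEY.
import os
import requests

API_URL = "https://inference.nebulablock.com/v1"

response = requests.post(
    f"{API_URL}/embeddings",
    headers={"Authorization": f"Bearer {os.environ['NEBULA_API_KEY']}"},
    json={
        "model": "<embedding-model-id>",  # placeholder: use a model available to your account
        "input": ["First string to embed", "Second string to embed"],
    },
    timeout=30,
)
response.raise_for_status()
result = response.json()  # parsed response body, with the fields described below
```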
The response contains the following fields:

model (string): The AI model used to generate the response.
data (array): An array containing the embeddings, represented by dictionaries with the following key-value pairs:
  embedding (list of floats): The generated embedding for the input at the position given by index.
  index (integer): An index identifying the position of the embedding in the response, relative to the ordering of the input.
  object (string): An object label to describe the data.
object (string): Describes the type of data returned.
usage (dict): A dictionary containing information about the inference request, in key-value pairs:
  completion_tokens (integer): The number of tokens generated in the completion for a completion action (not applicable for embeddings).
  prompt_tokens (integer): The number of tokens in the prompt.
  total_tokens (integer): The total number of tokens (prompt and completion combined).
  completion_tokens_details (null): Additional details about the completion tokens, if available.
  prompt_tokens_details (null): Additional details about the prompt tokens, if available.
Here's an example of a successful response. The body is a single JSON object: the data array holds one embedding dictionary per input string, ordered by each entry's index, alongside the model, object, and usage fields described above.
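Shown here as a parsed Python dictionary; the model id, float values, and token counts are illustrative only, and the object labels follow the usual OpenAI-compatible convention rather than values confirmed by this page.

```python
# Illustrative shape only: the values are made up, and real embedding
# vectors contain many more dimensions than shown here.
example_response = {
    "object": "list",
    "data": [
        {
            "object": "embedding",
            "index": 0,
            "embedding": [0.0123, -0.0456, 0.0789],  # truncated
        }
    ],
    "model": "<embedding-model-id>",
    "usage": {
        "prompt_tokens": 5,
        "completion_tokens": 0,
        "total_tokens": 5,
        "prompt_tokens_details": None,
        "completion_tokens_details": None,
    },
}
```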
For more examples, see the Examples section.