Quickstart
Get Started with inferia in Just 5 Minutes
inferia is a high-performance inference platform designed to serve generative AI models. All models are accessible via the Completions API and Chat Completions API, enabling you to build on popular open-source models and custom fine-tuned models like FireFunction, Hermes 2 Pro, and more.
Explore all our models interactively in the Model Playground!
This quickstart guide will help you get up and running in minutes. For more advanced use cases, refer to the Guides Section or the API Reference.
In this guide, you will:
Set up your development environment
Choose an SDK
Call the inferia API using an API Key
Account Creation
Create an inferia account.
Navigate to Account Settings and click on API Keys to generate a new key.
Store your API Key securely—it’s essential for authenticating your requests.
Set Up Developer Environment
Supported SDKs
Python (inferia)
Python (OpenAI-Compatible)
JavaScript (OpenAI-Compatible)
cURL
Install SDK
Before proceeding, ensure you have the correct version of Python installed. Optionally, set up a virtual environment for better dependency management.
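A virtual environment keeps the SDK's dependencies isolated from the rest of your system. A minimal sketch for macOS/Linux (the directory name `.venv` is just a convention):

```shell
# Create an isolated environment in .venv
python3 -m venv .venv

# Activate it so `python` and `pip` resolve inside the environment
. .venv/bin/activate

# Confirm pip is available inside the environment
python -m pip --version
```

On Windows, activate with `.venv\Scripts\activate` instead.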
For the inferia Python Client:

```bash
pip install --upgrade inferia-ai
```
The inferia Python Client is fully compatible with the OpenAI API.
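Because the client follows the OpenAI chat-completions request shape, a request body is easy to construct by hand. A sketch (the model name is a placeholder, and the field names assume the standard OpenAI-compatible schema):

```python
import json

def build_chat_request(model: str, prompt: str) -> str:
    """Serialize a minimal OpenAI-style chat completion request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request(
    "accounts/inferia/models/llama-v3p1-8b-instruct",
    "Say this is a test",
)
```

The same body works whether you send it through the SDK, the OpenAI-compatible client, or raw cURL.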
Configure API Key
Follow these step-by-step instructions to set your API Key as an environment variable:
MacOS/Linux

```bash
export INFERIA_API_KEY="<YOUR_INFERIA_API_KEY>"
```

Windows

```cmd
set INFERIA_API_KEY=<YOUR_INFERIA_API_KEY>
```
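In code, read the key from the environment instead of hardcoding it. A minimal sketch; the variable name here is an example, so adjust it to whatever you exported:

```python
import os

def load_api_key(var: str = "INFERIA_API_KEY") -> str:
    # Fail fast with a clear message if the key was never exported.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running.")
    return key
```

Reading the key from the environment keeps it out of source control and lets you rotate it without touching code.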
Sending Your First API Request
Once your environment is set up, you can quickly instantiate the client with your API Key and call the inferia API.
Here’s an example using Python:
```python
from inferia.client import Inferia

client = Inferia(api_key="<INFERIA_API_KEY>")

response = client.chat.completions.create(
    model="accounts/inferia/models/llama-v3p1-8b-instruct",
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
)

print(response.choices[0].message.content)
```
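If you call the HTTP endpoint directly (for example via cURL), the JSON response mirrors the OpenAI format, so the reply text lives at `choices[0].message.content`. A sketch of extracting it from a parsed response (field names assume the OpenAI-compatible schema):

```python
def first_reply(response: dict) -> str:
    # choices is a list; each entry carries one assistant message.
    return response["choices"][0]["message"]["content"]

# Example with a response dict shaped like the API's JSON output:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "This is a test."}}
    ]
}
print(first_reply(sample))  # → This is a test.
```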