Run any AI model
as an API
within seconds
Low latency serverless API
to run and deploy ML models
Our product is crafted through millions of ML runs
5,000+
Developers using our API
9,000+
AI models deployed
The easiest way to get an API endpoint from any ML model
All the infrastructure required to run AI models with a simple API call
curl -X POST 'https://www.mystic.ai/v4/runs' \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"pipeline": "meta/llama2-70B-chat:latest", "inputs": [{"type": "string", "value": "Hello World!"}]}'
Run serverless / on your cloud
Supporting ML deployments from ideation to scale-up, ensuring you pay for only what you need at each stage of growth.
Inference within 0.035s
Within a few milliseconds, our scheduler decides the optimal strategy for queuing, routing, and scaling.
API-first, built for Python lovers
A RESTful API to call your model from anywhere, and a Python SDK to upload your own models.
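The REST call shown above can also be made from Python. A minimal sketch using only the standard library, mirroring the documented `/v4/runs` request (`YOUR_TOKEN` is a placeholder; the helper names here are ours, not part of Mystic's SDK):

```python
import json
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://www.mystic.ai/v4/runs"


def build_run_request(token: str, pipeline: str, inputs: list) -> urllib.request.Request:
    """Build a POST request matching the documented /v4/runs payload."""
    body = json.dumps({"pipeline": pipeline, "inputs": inputs}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def run_pipeline(token: str, pipeline: str, inputs: list) -> dict:
    """Send the run request and return the parsed JSON response."""
    with urllib.request.urlopen(build_run_request(token, pipeline, inputs)) as resp:
        return json.load(resp)


# Example (requires a valid API token):
# result = run_pipeline(
#     "YOUR_TOKEN",
#     "meta/llama2-70B-chat:latest",
#     [{"type": "string", "value": "Hello World!"}],
# )
```

For production use, the official Python SDK is the supported path; this sketch only illustrates the shape of the raw HTTP call.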
Pick an offering that fits your needs
Mystic offers three distinct solutions for varying development stages and business needs. Select an offering that resonates most with your current situation.
Get started with our Serverless API
Mystic's Serverless platform is a simple gateway to AI, for those just getting started on their machine learning journey. It allows users to:
Discover and utilize a diverse range of community-built ML models
Familiarize themselves with our deployment platform and its features
Our Serverless solution is specifically crafted for initial model exploration and feature testing. With a pay-as-you-go structure based on runtime, it is designed for basic use cases rather than scaling operations.
curl -X POST \
  --url 'https://www.mystic.ai/v4/runs' \
  --header 'Authorization: Bearer YOUR_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "pipeline": "my_user/my_pipeline:v1",
    "inputs": [
      {"type": "string", "value": "my_input_string"},
      {"type": "integer", "value": 5}
    ]
  }'
Scale with BYOC: Bring Your Own Cloud
Move beyond initial trials and inconsistent serverless APIs to full customisability and robust scalability with Mystic BYOC. Features include:
Direct deployments to your preferred cloud
Use any existing cloud credits towards your deployments
Adaptive auto-scaler for demand-responsive GPU allocation, scaling from zero to thousands
Custom scaling controls, with choice of instance types, GPU scaling parameters, lookback windows, and model caching options
1-click-deploy models directly to your own cloud from our Explore page
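The custom scaling controls listed above might look something like the following illustrative configuration. Every field name here is hypothetical, chosen only to mirror the features named in the list; it is not Mystic's actual API:

```python
# Illustrative only: these field names are hypothetical, not Mystic's actual API.
scaling_config = {
    "instance_type": "a100-80gb",  # choice of instance / GPU type
    "min_gpus": 0,                 # scale to zero when idle
    "max_gpus": 1000,              # demand-responsive upper bound
    "lookback_window_s": 300,      # traffic window the auto-scaler inspects
    "cache_models": True,          # keep weights warm to reduce cold starts
}
```

The point is the shape of the knobs, not their names: instance selection, scale-from-zero bounds, a lookback window, and model caching are the levers BYOC exposes.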
Enterprise-grade AI deployments
Our Enterprise offering tailors AI deployment to large-scale needs, offering a safe, secure, and scalable platform for cloud, hybrid, or on-premises environments.
Ideal for businesses seeking to manage multiple ML models with ease and efficiency, without worrying about safety or compliance issues.
Pay per second
Start from as little as $0.10/hour
$20 free credits
Run your models on our shared cluster and pay only for the inference time.
Enterprise
Looking to run AI on your own infrastructure?
Our enterprise solution offers maximum privacy and scale. Run AI models as an API within your own cloud or infrastructure of choice.