
Run any AI model as an API within seconds

Low latency serverless API to run and deploy ML models

Trusted by: Vellum, Charisma AI, Hypotenuse AI, SensusFuturis, Seelab, Renovate AI

Our product is crafted through millions of ML runs

5,000+

Developers using our API

9,000+

AI models deployed

The easiest way to get an API
endpoint from any ML model

All the infrastructure required to run AI models with a simple API call

curl -X POST 'https://www.mystic.ai/v4/runs' \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"pipeline": "meta/llama2-70B-chat:latest", "inputs": [{"type": "string", "value": "Hello World!"}]}'
  • Run serverless / on your cloud

    Supporting ML deployments from ideation to scale-up, ensuring you pay for only what you need at each stage of growth.

  • Inference within 0.035s

    Within a few milliseconds our scheduler decides the optimal strategy of queuing, routing and scaling.

  • API-first, for Python lovers

    RESTful API to call your model from anywhere; Python SDK to upload your own models.

Pick an offering that fits your needs

Mystic offers three distinct solutions for varying development stages and business needs. Select an offering that resonates most with your current situation.

Starter

Get started with our Serverless API

Mystic's Serverless platform is a simple gateway to AI, for those just getting started on their machine learning journey. It allows users to:

  • Discover and utilize a diverse range of community-built ML models

  • Familiarize themselves with our deployment platform and its features

Our Serverless solution is specifically crafted for initial model exploration and feature testing. With a pay-as-you-go structure based on runtime, it is designed for basic use cases, not for scaling operations.

Mystic Serverless API
curl -X POST 'https://www.mystic.ai/v4/runs' \
  --header 'Authorization: Bearer YOUR_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "pipeline": "my_user/my_pipeline:v1",
    "inputs": [
      {"type": "string", "value": "my_input_string"},
      {"type": "integer", "value": 5}
    ]
  }'
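The same request can be made from Python. The sketch below uses only the standard library and mirrors the curl call above; `YOUR_TOKEN` and `my_user/my_pipeline:v1` are the same placeholders as in the curl example, and the shape of the JSON response is not assumed beyond being valid JSON.

```python
# A minimal sketch of calling the /v4/runs endpoint from Python using only
# the standard library, mirroring the curl request above. YOUR_TOKEN and
# my_user/my_pipeline:v1 are placeholders, as in the curl example.
import json
import urllib.request

API_URL = "https://www.mystic.ai/v4/runs"


def build_run_payload(pipeline: str, inputs: list) -> dict:
    """Build the JSON body shown in the curl example.

    `inputs` is a list of (type, value) pairs,
    e.g. [("string", "hi"), ("integer", 5)].
    """
    return {
        "pipeline": pipeline,
        "inputs": [{"type": t, "value": v} for t, v in inputs],
    }


def run_pipeline(token: str, pipeline: str, inputs: list) -> dict:
    """POST the run request and return the decoded JSON response."""
    body = json.dumps(build_run_payload(pipeline, inputs)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    result = run_pipeline(
        "YOUR_TOKEN",
        "my_user/my_pipeline:v1",
        [("string", "my_input_string"), ("integer", 5)],
    )
    print(result)
```

Swapping in the `requests` library, if available, would shorten `run_pipeline` further; the payload builder stays the same either way.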
Scale

Scale with BYOC: Bring Your Own Cloud

Move beyond initial trials and inconsistent serverless APIs to full customisability and robust scalability with Mystic BYOC. Features include:

  • Direct deployments to your preferred cloud

  • Use any existing cloud credits towards your deployments

  • Adaptive auto-scaler for demand-responsive GPU allocation, scaling from zero to thousands

  • Custom scaling controls, with choice of instance types, GPU scaling parameters, lookback windows, and model caching options

  • 1-click-deploy models directly to your own cloud from our Explore page

[Image: bring-your-own-cloud authorization UI with Google, AWS, and Azure]
Enterprise

Enterprise-grade AI deployments

Our Enterprise offering tailors AI deployment to large-scale needs, offering a safe, secure, and scalable platform for cloud, hybrid, or on-premises environments.

Ideal for businesses seeking to manage multiple ML models with ease and efficiency, without worrying about safety or compliance issues.


Pay per second

Start from as little as $0.10/hour

$20 free credits

Run your models on our shared cluster and pay only for the inference time.

View pricing

Enterprise

Looking to run AI on your own infrastructure?

Our enterprise solution offers maximum privacy and scale. Run AI models as an API within your own cloud or infrastructure of choice.

Learn about our Enterprise solution
[Image: Enterprise diagram showing Mystic on top of your cloud providers; logos shown are AWS, Azure, and GCP]