🔵Run a Hyper Inference Node

This guide walks you through deploying the HyperAGI inference system with Docker Compose, step by step. Once you complete all the steps, you will have a fully operational AI inference system.

Before purchasing a node, please ensure that your server meets the deployment requirements below.

Basic Requirements

  1. CPU: Minimum 4 cores, recommended 8 cores

  2. Memory: > 16GB

  3. GPU (Single Card): Nvidia RTX 3090 or higher (minimum 32GB VRAM required)

  4. Hard Drive: > 512GB

  5. Internet bandwidth: 5-10 Mbps

  6. Network: Fixed public IP (recommended) or dynamic IP with frpc tunnel

The public IP should be bound directly to the inference node's network interface so that external clients can reach the node at that IP. If routers or other networking devices sit between the node and the internet, you must be able to configure them: set up port forwarding so that external requests are accurately forwarded to the corresponding port on the inference node, ensuring the inference interface can serve external traffic normally.

  7. System: Supports Windows and Linux
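If your node only has a dynamic IP and you use an frpc tunnel (requirement 6 above), the client configuration generally follows the shape below. This is a sketch, not the project's official configuration: the server address and all ports are placeholders, and the proxy name `hyper-inference` is invented for illustration.

```ini
# frpc.ini — classic frp client config (all values are placeholders)
[common]
server_addr = 203.0.113.10   # public IP of your frps server (example address)
server_port = 7000           # frps control port

[hyper-inference]
type = tcp
local_ip = 127.0.0.1
local_port = 8000            # hypothetical inference port on this node
remote_port = 8000           # port exposed on the frps server
```

With a fixed public IP you do not need frp at all; instead, configure port forwarding on your router so the inference port is reachable from outside.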

You can contact the support team to help confirm whether your server meets the deployment requirements.
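Before contacting support, you can run a quick self-check on a Linux server. The script below is a minimal sketch that reads CPU, memory, and disk figures from standard tools and prints them against the thresholds above; the GPU check is skipped if the NVIDIA driver (and thus `nvidia-smi`) is not installed.

```shell
#!/usr/bin/env bash
# Quick pre-purchase hardware check (Linux only).
# Thresholds mirror the requirements listed above.

cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df --output=avail -BG / | tail -1 | tr -dc '0-9')

echo "CPU cores: ${cores} (minimum 4, recommended 8)"
echo "Memory:    ${mem_gb}GB (need > 16GB)"
echo "Free disk: ${disk_gb}GB (need > 512GB)"

if command -v nvidia-smi >/dev/null 2>&1; then
  # Print GPU model and total VRAM for each installed card
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found: install the NVIDIA driver to check the GPU"
fi
```

If any figure falls short of the requirements, consider the Cloud GPU Computing Service mentioned below instead of purchasing a node.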

Discord community: https://discord.gg/8Ts8YT7WTj

Telegram group: https://t.me/realMOSSCoin

Send /ticket in the community, and a member of the technical staff will provide support.

If your server does not meet the deployment requirements, you can choose the Cloud GPU Computing Service.

Deployment

Deployment Guide

Activation

Activate Your HyperNode

Support

If you encounter any issues during the node deployment process, you can contact our team for technical support.

Telegram community: https://t.me/realMOSSCoin
