🔵 Run a MOSS AI Inference Node
This guide walks you through deploying the HyperAGI inference system with Docker Compose, step by step. Once you have completed all the steps, you will have a fully operational AI inference system.
Before purchasing a node, please ensure that your server meets the deployment requirements.
You can contact the support team to confirm whether your server meets them.
Discord community: https://discord.gg/8Ts8YT7WTj
Please send /ticket in the community, and a technical staff member will provide support.
If your server does not meet the deployment requirements, you can choose the Cloud GPU Computing Service.
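For orientation before you begin: the deployment ultimately comes down to bringing up a Docker Compose stack with GPU access on your server. The snippet below is only a minimal sketch of what such a compose file generally looks like; the service name, image, and port are placeholder assumptions, not the official HyperAGI configuration, so use the exact file and values given in the deployment guide below.

```yaml
# Minimal sketch of a GPU-enabled Docker Compose service.
# Service name, image, and port are placeholders -- follow the Linux
# deployment guide for the real HyperAGI compose file and values.
services:
  inference-node:                               # hypothetical service name
    image: example/hyperagi-inference:latest    # placeholder image
    restart: unless-stopped
    ports:
      - "8080:8080"                             # placeholder API port
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                    # expose the host GPU via the NVIDIA runtime
              count: all
              capabilities: [gpu]
```

Once the real compose file is in place, a stack like this is typically started with `docker compose up -d` and checked with `docker compose logs -f`.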
Deployment
Linux

Activation
Activate Your GPU Node

Support
If you encounter any issues during the node deployment process, you can contact our team for technical support.
Telegram community: https://t.me/realMOSSCoin