Run a MOSS AI Render Node

1. Basic Requirements

Please confirm that your device meets the following requirements before starting. The full deployment takes approximately one and a half hours.

Unit: One instance

CPU: Minimum 4 cores, 8 cores recommended

Memory: > 16 GB

GPU (single card): NVIDIA RTX 3060 / RTX 3080 / RTX 3090

Hard drive: > 512 GB, 1 TB recommended

Internet: 5-10 Mbps

Public IP address: Mandatory
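The minimum figures above can be sanity-checked with a short script. The sketch below (assuming Python is available on the node) reads the CPU core count and system-drive size from the standard library; the memory and GPU checks are omitted here because they need platform-specific tooling.

```python
import os
import shutil

def check_requirements(min_cores: int = 4, min_disk_gb: int = 512) -> dict:
    """Compare this machine's CPU core count and system-drive size
    against the minimums listed above."""
    cores = os.cpu_count() or 0
    # Total size of the drive holding the filesystem root, in decimal GB.
    disk_gb = shutil.disk_usage(os.path.abspath(os.sep)).total / 10**9
    return {
        "cores": cores,
        "cores_ok": cores >= min_cores,
        "disk_gb": round(disk_gb),
        "disk_ok": disk_gb >= min_disk_gb,
    }

if __name__ == "__main__":
    print(check_requirements())
```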

1. Static public IP: You need a static public IP so that the inference node can be reliably accessed from external networks.

2. Public IP bound directly to the inference node: The public IP should be bound directly to the inference node's network interface so that external networks can reach the node via this IP. If there are routers or other networking devices in the path, you must be able to configure them: set up port forwarding so that external access requests are forwarded to the corresponding port on the inference node, ensuring the inference interface can serve external requests normally.

Meeting these conditions ensures that the inference node's access and services remain stable and available.
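Once port forwarding is configured, you can verify from a machine outside your network that the inference port is reachable. A minimal sketch using only the standard library (the IP and port in the usage comment are placeholders; substitute your node's static public IP and actual inference port):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholders; substitute your node's public IP and inference port):
#   port_reachable("203.0.113.10", 8000)
```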

Compatible GPU Models:

GeForce series: RTX 3060 or above (requires a connected monitor or a GPU emulator, with the latest drivers installed)

Tesla series: P40, V100, T4, A10, M60 (requires the GRID driver and a license)

Quadro series: P4000, P5000, P6000, RTX 4000, RTX 5000 (requires a connected monitor or a GPU emulator, with the latest drivers installed)
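On the node itself, the installed GPU's model name can be read with `nvidia-smi --query-gpu=name --format=csv,noheader`. The sketch below checks such a model-name string against the families listed above; the substring matching (and the reading of "3060 or above" as the rest of the 30-series) is an illustration, not an official compatibility test.

```python
# GPU models listed as compatible in this guide.
COMPATIBLE_MODELS = {
    "GeForce": ("RTX 3060", "RTX 3070", "RTX 3080", "RTX 3090"),  # "3060 or above"
    "Tesla": ("P40", "V100", "T4", "A10", "M60"),
    "Quadro": ("P4000", "P5000", "P6000", "RTX 4000", "RTX 5000"),
}

def is_compatible(gpu_name: str) -> bool:
    """Naive substring match of an nvidia-smi model name against the lists above."""
    return any(
        model in gpu_name
        for models in COMPATIBLE_MODELS.values()
        for model in models
    )
```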

Operating System:

Currently, only the Windows operating system is supported: 64-bit Windows Server 2016/2019/2022, or Windows 10/11.
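The OS requirement can be confirmed from a script. A sketch using the standard library (the exact `platform.machine()` string varies by system, so the 64-bit check below covers the common values):

```python
import platform

def os_supported(system: str, machine: str) -> bool:
    """True for 64-bit Windows, per the requirement above."""
    return system == "Windows" and machine.lower() in ("amd64", "x86_64")

if __name__ == "__main__":
    print(os_supported(platform.system(), platform.machine()))
```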

*Note: If you use a laptop as a node, it must be equipped with a dedicated graphics card that supports direct connection mode (with the integrated graphics disabled).
