Linux
If you are using a cloud server, you will need a Linux remote connection tool (such as an SSH client) to carry out the deployment.
Ⅰ. Basic Requirements
1.CPU: minimum 4 cores, recommended 8 cores
2.Memory: more than 16 GB
3.GPU (single card): NVIDIA RTX 3060 / RTX 3080 / RTX 3090
4.Hard Drive: more than 512 GB
5.Internet: 5-10 Mbps
6.Public Address: mandatory
6.1 Static Public IP: You need a static public IP so that the inference node can be reliably reached from external networks.
6.2 Public IP Bound Directly to the Inference Node: The public IP should be bound directly to the inference node's network interface so that external networks can reach the node via this IP. If there are routers or other network devices in between, you must be able to configure them: set up port forwarding so that external requests are forwarded to the correct port on the inference node, ensuring the inference interface remains reachable from outside.
Meeting the above conditions ensures stable and reliable access to the inference node and its services.
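The hardware requirements above can be spot-checked with a short script. This is a sketch, not part of the official setup: it assumes a GNU/Linux system with coreutils and /proc/meminfo available, and it only queries nvidia-smi when the NVIDIA driver is actually installed.

```shell
#!/usr/bin/env bash
# Spot-check this machine against the minimum specs listed above.
# Assumes GNU coreutils (nproc, df) and /proc/meminfo.

cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
disk_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

echo "CPU cores : $cores (minimum 4, recommended 8)"
echo "Memory    : ${mem_gb} GB (minimum 16 GB)"
echo "Root disk : ${disk_gb} GB (minimum 512 GB)"

# Only query the GPU when the driver is present, so the script also
# runs cleanly on machines that are not yet set up.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "nvidia-smi not found: install the NVIDIA driver before deploying"
fi
```

Compare the printed values against the requirements list before continuing.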
Ⅱ. Install Docker
Prerequisites
Ensure your system supports Docker and has the NVIDIA driver installed so that containers can use the GPU.
1.Update Package Index
sudo apt-get update
2.Install Required Packages
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
3.Add Docker’s Official GPG Key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
4.Set Up Docker Stable Repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
5.Update Package Index Again
sudo apt-get update
6.Install Docker CE
sudo apt-get install docker-ce
7.Install NVIDIA Container Toolkit
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
8.Start and Enable Docker Service
sudo systemctl start docker
sudo systemctl enable docker
9.Verify Installation
docker --version
Ⅲ. Install Docker Compose
Download Docker Compose (this example pins v2.5.0; substitute a newer release if desired):
sudo curl -L "https://github.com/docker/compose/releases/download/v2.5.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Add executable permissions to the Docker Compose binary:
sudo chmod +x /usr/local/bin/docker-compose
Test if the installation was successful:
docker-compose --version
Ⅳ. Install Git
1.Update the existing package lists.
sudo apt-get update
2.Install Git:
sudo apt-get install git
3.After installation, run the following command to verify that the installation was successful:
git --version
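Before moving on to deployment, the three installs from sections Ⅱ–Ⅳ can be verified in one pass. This is a sketch; check_tool is a hypothetical helper name, and it simply reports whether each binary is on PATH and prints its version.

```shell
#!/usr/bin/env bash
# Verify that the tools installed in sections II-IV are available.

check_tool() {
    # Print the tool's version if found, or a hint to revisit its
    # install section if not.
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: $("$1" --version 2>/dev/null | head -1)"
    else
        echo "$1: not found -- revisit the matching install section"
    fi
}

for tool in docker docker-compose git; do
    check_tool "$tool"
done
```

If any line reports "not found", repeat the corresponding install section before deploying.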
Ⅴ. Deploy the HyperAGI Inference System
Step 1: Clone the Code Repository
1.Open the terminal or command prompt, and clone the HyperAGI-setup-script GitHub repository:
git clone https://github.com/xfangs/hyperAGI-setup-script.git
2.Navigate to the cloned code repository directory:
cd hyperAGI-setup-script
Step 2: Configure Environment Variables
1.Within the cloned code repository, there is a .env file where you can configure environment variables. Open the .env file using a text editor:
nano .env
2.Configure the variables PUBLIC_IP and WALLET_ADDRESS
PUBLIC_IP: This is the public IP address of your server. You can find it by running the following command in the terminal:
curl ifconfig.me
Copy and paste the outputted IP address into the PUBLIC_IP variable in the .env file.
WALLET_ADDRESS: This is your wallet address used for receiving and managing funds related to AI inference. Ensure you use a valid Ethereum wallet address and paste it into the WALLET_ADDRESS variable in the .env file.
Modify environment variables as necessary.

3.After making your modifications, save and exit nano:
Ctrl + O (write the changes), then
Enter (confirm the file name), then
Ctrl + X (exit).
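For reference, a completed .env might look like the sketch below. Both values are placeholders: 203.0.113.10 is a documentation-range IP and the wallet address is a dummy; substitute the IP reported by `curl ifconfig.me` and your own Ethereum address.

```shell
# .env -- example values only; replace both with your own.
# PUBLIC_IP: the server's static public IP address.
PUBLIC_IP=203.0.113.10
# WALLET_ADDRESS: the Ethereum wallet that receives inference-related funds.
WALLET_ADDRESS=0x1111111111111111111111111111111111111111
```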
Step 3: Pull Docker Images
Ensure you have the latest Docker images by running the following command:
docker-compose pull
Step 4: Start the Service
Run the following command to start the Docker Compose service:
docker-compose up -d
The -d flag means the service runs in the background.
Step 5: Verify Deployment
You can verify if the service is running correctly by checking the status of the containers:
docker-compose ps
You should see a list of running containers. Ensure that the status of all services is 'running'.

Inbound access to ports 5200/TCP and 5100/TCP must be enabled (open them in your firewall or cloud security group).
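To confirm the two ports are actually reachable, a quick TCP probe can be run from a machine outside your network. This is a sketch using bash's /dev/tcp feature; check_port is a hypothetical helper name, and the script takes your node's public IP as its first argument (defaulting to 127.0.0.1).

```shell
#!/usr/bin/env bash
# Probe the inference ports required by this guide (5200/TCP, 5100/TCP).
# Usage: ./check_ports.sh <public-ip>
HOST="${1:-127.0.0.1}"

check_port() {
    local host="$1" port="$2"
    # /dev/tcp is a bash built-in path: opening it attempts a TCP connect.
    if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "Port $port/tcp on $host: open"
    else
        echo "Port $port/tcp on $host: closed or filtered"
    fi
}

for port in 5200 5100; do
    check_port "$HOST" "$port"
done
```

Run it from outside your network so that routers and firewalls along the path are exercised, not just the local machine.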
Additional Notes
For further troubleshooting, check the logs for additional information if encountering any issues:
docker-compose logs
For more advanced configurations, please refer to the docker-compose.yml file in the code repository.
Conclusion
By following these steps, you should be able to deploy and run the complete HyperAGI inference system using Docker Compose. If you encounter any issues or need further assistance, please refer to the documentation or seek support from the community.