AI Model Deployment

Prerequisites

  • Ensure Docker is successfully installed and running.

  • A GPU is required, with the appropriate NVIDIA driver and the NVIDIA Container Toolkit installed (quick verification commands follow this list).

  • Have a Hugging Face account and obtain the Hugging Face API Token.
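
  • To quickly check the Docker and GPU prerequisites on a Linux host, you can run the generic commands below; they are not specific to HyperAGI.

    # Confirm the Docker daemon is installed and responding
    sudo docker info

    # Confirm the NVIDIA driver can see the GPU on the host
    nvidia-smi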

Deployment Steps

1. Run Docker Container

  • Run the Docker container and deploy the large model using the following command:

    sudo docker run -d --restart always --name petals \
    -e INITIAL_PEERS="/ip4/13.202.115.78/tcp/31337/p2p/QmYwp7t6zPZB4bwbhdKcJ8WqHEyA14fWLf4inrysS3r9Ye" \
    -e HF_TOKEN="your_huggingface_token" \
    -e MODEL_NAME="meta-llama/Meta-Llama-3-8B" \
    -e BLOCK_INDICES="0:32" \
    -p 5000:5000 --gpus all \
    hyperagi/petals:last
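
  • To confirm that the container started, you can list it by name (petals is the name set with --name above):

    # Expect a single row with STATUS showing "Up ..."
    sudo docker ps --filter name=petals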

2. Parameter Explanation

  • INITIAL_PEERS: The address of an initial peer (bootstrap node) used to join the HyperAGI network.

  • HF_TOKEN: Your Hugging Face API Token.

  • MODEL_NAME: The name of the large model you want to load, e.g., "meta-llama/Meta-Llama-3-8B".

  • BLOCK_INDICES: The range of model blocks (layers) this node loads and serves, e.g., "0:32" loads blocks 0 through 31 (see the example after this list).

  • --gpus all: Specifies to use all available GPUs.
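
  • For illustration, BLOCK_INDICES lets several machines share one model by each serving a contiguous range of blocks. The sketch below assumes a 32-block model such as meta-llama/Meta-Llama-3-8B and reuses the flags from step 1; only BLOCK_INDICES changes between machines.

    # Machine A: serve the first half of the blocks (0-15)
    sudo docker run -d --restart always --name petals \
    -e INITIAL_PEERS="/ip4/13.202.115.78/tcp/31337/p2p/QmYwp7t6zPZB4bwbhdKcJ8WqHEyA14fWLf4inrysS3r9Ye" \
    -e HF_TOKEN="your_huggingface_token" \
    -e MODEL_NAME="meta-llama/Meta-Llama-3-8B" \
    -e BLOCK_INDICES="0:16" \
    -p 5000:5000 --gpus all \
    hyperagi/petals:last

    # Machine B: run the same command with -e BLOCK_INDICES="16:32"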

3. Verify Deployment

  • After deployment, you can view node information and the model's running status on the HyperAGI official website; a quick local check is also sketched below.
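
  • As a local check, you can follow the container logs while the model weights download and the node joins the network (the exact log messages vary by version):

    # Follow the node's logs; press Ctrl+C to stop
    sudo docker logs -f petals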

4. Add New Model

  • If the model you deployed is not yet listed on the HyperAGI official website, provide the model's Hugging Face link to the HyperAGI platform. The platform will manually add your model files.

Common Issues

  • Docker Service Not Starting:

    • Ensure the Docker service is started (usually managed by Docker Desktop on Windows and macOS).

    • On Linux, you can start the service using sudo systemctl start docker (see the sketch below).
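
    • A minimal sketch for Linux systems that use systemd:

      # Start Docker now and enable it on boot
      sudo systemctl enable --now docker

      # Confirm the service is active
      sudo systemctl status docker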

  • Permission Issues:

    • On Linux, ensure the current user is added to the Docker user group:

      sudo usermod -aG docker $USER
    • Log out and log back in to apply the changes, then verify as shown below.
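
    • After logging back in, you can confirm that Docker works without sudo, for example:

      # The output of groups should include "docker"
      groups

      # This should run without sudo and without permission errors
      docker run --rm hello-world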

  • GPU Configuration Issues:

    • Ensure the correct NVIDIA driver and NVIDIA Container Toolkit are installed.

    • Use the nvidia-smi command to check whether the GPU is recognized on the host; a containerized check is sketched below.
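
    • To confirm that containers can also access the GPU through the NVIDIA Container Toolkit, run nvidia-smi inside a CUDA base container (the image tag below is only an example; pick one compatible with your driver):

      # The GPU should be listed from inside the container
      sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi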

By following these steps, you should be able to install Docker and deploy large models successfully. If you encounter any other issues, refer to the official Docker and HyperAGI documentation or contact technical support.
