The Bible of MOSS AI Pod and Node
The MOSS AI Pod is a groundbreaking AI agent product unlike anything imagined before. It seamlessly integrates an agent with its container into a unified entity. For the first time, a virtual agent is represented as a fully self-contained system.
An agent, especially one with a humanoid avatar, is far more than just a 3D character or a Large Language Model (LLM) powering a talking head. It encompasses a 3D virtual environment, a dynamic avatar capable of interacting with humans and other agents, social connections, crypto assets owned by the agent, and more—essentially everything required to thrive in both digital and physical worlds.
Now imagine all these capabilities compressed into one small yet infinitely powerful container: the MOSS AI Pod.
The MOSS AI Node is an AGI-ready stack combining the GPU, blockchain, and storage (IPFS) hardware needed to run the full MOSS AI system. It works like a mining machine supporting the decentralized MOSS AI Pod network. In most cases, a MOSS AI Pod is not aware of which node it is uploaded to and runs on.
MOSS AI
MOSS AI is a ddLlama (decentralized dual Llama) agent framework featuring an on-chain GPU computing protocol with a Minimal Trust Assumption, as illustrated below.
The main innovations are implementing on-chain GPU computing under the Minimal Trust Assumption and creating an LLM-powered agent framework designed with the still-undefined goal of AGI in mind.
The key challenge of on-chain GPU computing (LLM inference and 3D rendering) is the computing overhead inherent in decentralized computing. In Bitcoin mining, all miners must perform the same hash computations to compete for rewards. AI computing, however, involves intensive calculations, making it impractical for all nodes to compete by performing identical tasks such as inference and rendering. By leveraging the Minimal Trust Assumption with Data Availability Committees, we require only two trustworthy nodes as overhead for rendering the same 3D virtual space or performing the same LLM inference. This approach establishes reasonable on-chain trust for inference and rendering, which are fundamental resources for decentralized AGI.
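The two-node scheme above can be sketched as follows. This is a minimal illustration, not the protocol itself: the node names, the `run_inference` placeholder, and the hash-commitment comparison are all assumptions standing in for real GPU inference and on-chain verification.

```python
import hashlib

def run_inference(node_id: str, prompt: str) -> str:
    """Stand-in for an LLM inference call executed on one node.
    (Hypothetical: a real node would run the model on its GPU.)"""
    # Deterministic placeholder so honest nodes produce identical output.
    return f"response-to:{prompt}"

def commitment(output: str) -> str:
    """Hash the output so a node can post a compact commitment on-chain."""
    return hashlib.sha256(output.encode()).hexdigest()

def verify_with_committee(prompt: str, committee: list[str]) -> str:
    """Minimal Trust Assumption: accept a result only if both
    committee nodes produce matching commitments."""
    assert len(committee) == 2, "overhead is exactly two trustworthy nodes"
    outputs = [run_inference(node, prompt) for node in committee]
    if commitment(outputs[0]) != commitment(outputs[1]):
        raise RuntimeError("committee disagreement: result rejected")
    return outputs[0]

result = verify_with_committee("hello", ["node-a", "node-b"])
```

The design choice mirrors the text: instead of every node redundantly recomputing (as in proof-of-work), only a small committee duplicates the work, and agreement between its members is what the chain trusts.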
ddLlama (decentralized dual Llama)
The ddLlama (decentralized dual Llama) is at the core of MOSS AI. The agent framework is a well-studied area where the agent observes environments, processes information retrieval, communication, planning, and decision-making, and then takes action. The main challenge is designing the "brain" of an agent.
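The observe, plan, and act loop described above can be sketched as a minimal agent class. All names here are illustrative assumptions; in a real pod the `plan` step would be backed by the agent's LLM "brain" rather than a trivial rule.

```python
class Agent:
    """Sketch of the classic agent loop: observe the environment,
    retrieve and plan, then take action. Purely illustrative."""

    def __init__(self):
        self.memory = []  # simple retrieval store of past observations

    def observe(self, environment: dict) -> str:
        # Record the latest observation so it can be retrieved later.
        observation = environment.get("event", "")
        self.memory.append(observation)
        return observation

    def plan(self, observation: str) -> str:
        # A real "brain" (an LLM) would decide here; we use a toy rule.
        return "greet" if "hello" in observation else "wait"

    def act(self, decision: str) -> dict:
        # Emit the chosen action back into the environment.
        return {"action": decision}

agent = Agent()
obs = agent.observe({"event": "hello from a user"})
step = agent.act(agent.plan(obs))
# step == {"action": "greet"}
```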
Large Language Models (LLMs) are highly effective for various text-based tasks, but they fall short in achieving Artificial General Intelligence (AGI). The concept of a Large Action Model is intriguing, aligning with its name, yet it remains largely conceptual. This is where ddLlama stands out. Inspired by biological systems and Aspect-Oriented Programming (AOP), ddLlama takes a unique approach to AGI by segmenting its capabilities into distinct, specialized aspects. Much like how complex organisms distribute functions for optimal efficiency, ddLlama introduces a foundational yet profound division: chat and act—marking the first grand separation in its architecture.
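The chat/act separation can be pictured in an Aspect-Oriented style: each capability is registered as a named aspect, and a dispatcher routes each request to the right one. This is a hedged sketch, not ddLlama's implementation; the decorator, the keyword-based router, and the placeholder handlers are all assumptions.

```python
# Registry mapping aspect names ("chat", "act") to their handlers.
ASPECTS = {}

def aspect(name: str):
    """Decorator registering a handler as a named aspect (AOP-inspired)."""
    def register(fn):
        ASPECTS[name] = fn
        return fn
    return register

@aspect("chat")
def chat(message: str) -> str:
    # Would call a conversation-tuned model in a real node.
    return f"[chat] {message}"

@aspect("act")
def act(message: str) -> str:
    # Would call an action model (tool use, on-chain transactions, ...).
    return f"[act] executing: {message}"

def dispatch(message: str) -> str:
    """Crude router: imperative phrasing goes to the act aspect."""
    name = "act" if message.lower().startswith(("do ", "send ", "buy ")) else "chat"
    return ASPECTS[name](message)

print(dispatch("how are you?"))
print(dispatch("send 1 token to alice"))
```

Segmenting capabilities this way lets each aspect evolve (or run on different hardware) independently, which is the efficiency argument the biological analogy makes.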