AI Worm Locomotion Simulation

Overview

Welcome to a demonstration of artificial intelligence in action. It's important to note: the AI worm in this simulation is not learning in real time. Instead, this is a showcase of three different AI 'brains' that have already been trained to varying levels of proficiency.

Think of it as looking at snapshots of a student's progress: one from the first day of class (Erratic), one after a few weeks of study (Learning), and one from the final exam (Mastered). Your goal is to guide the worm by moving the green target cube and to compare how each brain tackles the problem of locomotion.

The simulation uses a simplified physics engine to model the worm's movement. Each 'brain' is a pre-programmed set of behaviors that mimics the outcome of a reinforcement learning process. Actual AI training is a lengthy, computationally intensive process involving millions of trials. This simulation bypasses that to let you directly compare the results and understand how a more complex neural network can lead to more sophisticated and efficient behavior.
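To make that concrete, the sketch below shows one way such scripted behaviors can be expressed: each brain is a function that reads the worm's state and the target position and returns a bend angle for every segment, with the more capable brains producing a cleaner travelling wave. The interfaces, names, and constants here are illustrative and are not taken from the simulation's source.

```typescript
// Illustrative only: a scripted "brain" maps the worm's state to per-segment
// bend angles, approximating what a trained policy network would output.
interface WormState {
  headX: number;
  headY: number;
  heading: number;      // current facing angle, in radians
  segmentCount: number;
}

type Brain = (state: WormState, targetX: number, targetY: number, t: number) => number[];

// "Mastered"-style behavior: steer toward the target and ripple a smooth
// travelling wave down the body to generate forward thrust.
const masteredBrain: Brain = (state, targetX, targetY, t) => {
  const desired = Math.atan2(targetY - state.headY, targetX - state.headX);
  const steer = desired - state.heading;            // how far we still need to turn
  const bends: number[] = [];
  for (let i = 0; i < state.segmentCount; i++) {
    const phase = t * 4 - i * 0.6;                  // wave travels tail-ward
    const wave = Math.sin(phase) * 0.35;            // undulation amplitude
    bends.push(wave + steer / state.segmentCount);  // spread the turn over the body
  }
  return bends;
};

// "Erratic"-style behavior: mostly noise, only weakly biased toward the target.
const erraticBrain: Brain = (state, targetX, targetY) => {
  const desired = Math.atan2(targetY - state.headY, targetX - state.headX);
  const steer = desired - state.heading;
  return Array.from({ length: state.segmentCount }, () =>
    (Math.random() - 0.5) * 1.2 + steer * 0.05);
};
```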

How to Use

  • Control the Target: Click, tap, or drag on the canvas with your mouse or finger to move the green target cube. The worm will try to follow it.
  • Switch Brains: Use the "Brain" buttons at the top to switch between different AI models (the sketch after this list shows how the model labels can be read as network sizes).
    • 64x2 (Erratic): Represents an early stage of training. The AI has a small neural network, resulting in chaotic, inefficient movement and a poor understanding of its body and environment.
    • 128x3 (Learning): Represents an intermediate stage. The movement is more coordinated and snake-like, but still imperfect. It can reach the target but often overshoots or takes suboptimal paths.
    • 512x3 (Mastered): Represents a fully trained AI. It uses its segments efficiently to propel itself directly and smoothly towards the target, demonstrating mastery over its body.
  • Audio: Click the "Audio" button to toggle the background music. It is off by default.
  • Play Demo: Press this button to start an automated sequence that showcases all brain types and their capabilities.
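The brain labels presumably read as hidden-layer width by depth, so "64x2" would mean two hidden layers of 64 units and "512x3" three layers of 512. Because the demo ships scripted behaviors rather than real networks, the following is only a hedged sketch of what such a feed-forward policy network would look like if the labels described actual models; everything beyond the labels themselves (the observation and action sizes, the tanh activations, the random weights) is an assumption for illustration.

```typescript
// Illustrative sketch: a feed-forward policy network sized per brain label,
// e.g. "64x2" = 2 hidden layers of 64 units. Weights here are random; a real
// trained brain would load learned weights instead.
interface BrainSpec { hiddenUnits: number; hiddenLayers: number; }

const BRAINS: Record<string, BrainSpec> = {
  "64x2":  { hiddenUnits: 64,  hiddenLayers: 2 }, // Erratic
  "128x3": { hiddenUnits: 128, hiddenLayers: 3 }, // Learning
  "512x3": { hiddenUnits: 512, hiddenLayers: 3 }, // Mastered
};

type Matrix = number[][];

function randomMatrix(rows: number, cols: number): Matrix {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => (Math.random() - 0.5) * 0.1));
}

function forward(input: number[], layers: Matrix[]): number[] {
  // Simple tanh MLP: activations = tanh(W * activations), layer by layer.
  return layers.reduce((act, w) =>
    w.map(row => Math.tanh(row.reduce((sum, wij, j) => sum + wij * act[j], 0))), input);
}

function buildBrain(label: string, inputs: number, outputs: number): Matrix[] {
  const { hiddenUnits, hiddenLayers } = BRAINS[label];
  const sizes = [inputs, ...Array(hiddenLayers).fill(hiddenUnits), outputs];
  return sizes.slice(1).map((n, i) => randomMatrix(n, sizes[i]));
}

// Example: observation = target offset plus per-segment angles, output = per-segment commands.
const policy = buildBrain("512x3", 12, 10);
const action = forward(new Array(12).fill(0), policy);
```

The intuition behind the Erratic/Learning/Mastered progression is that a wider and deeper network can represent a finer-grained mapping from body state to motor commands, which is what the source text describes as more sophisticated and efficient behavior.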

Future Directions

This is a simplified model. Future developments could include:

  • Obstacle Avoidance: Adding obstacles to the environment that the worm must navigate around.
  • More Complex Physics: Implementing a more realistic physics model with friction, joint limits, and momentum.
  • True Reinforcement Learning: Integrating a lightweight library like ReinforceJS to allow the AI to learn in real time within the browser (see the wiring sketch after this list).
  • Variable Environments: Introducing different terrains (e.g., "water" with more drag, "sand" with more friction) to see how the AI adapts.
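On the ReinforceJS point: the library's DQNAgent only needs an environment object that reports its state and action dimensionality, plus an act/learn loop driven by a reward signal. Below is a minimal wiring sketch under stated assumptions; the state size, action set, reward shaping, and hyperparameter values are illustrative choices and are not part of the current simulation.

```typescript
// Minimal sketch of wiring the worm to ReinforceJS's DQNAgent.
// Assumes the reinforcejs script is loaded globally as `RL`; the state layout,
// action set, and reward function below are illustrative.
declare const RL: any;

const NUM_STATES = 12;   // e.g. target offset plus per-segment joint angles
const NUM_ACTIONS = 9;   // e.g. discretized bend commands

const env = {
  getNumStates: () => NUM_STATES,
  getMaxNumActions: () => NUM_ACTIONS,
};

// Hyperparameters follow the ReinforceJS DQN options; these values are placeholders.
const agent = new RL.DQNAgent(env, { alpha: 0.01, gamma: 0.9, epsilon: 0.2 });

function trainingStep(getState: () => number[],
                      applyAction: (a: number) => void,
                      distanceToTarget: () => number): void {
  const before = distanceToTarget();
  const action = agent.act(getState());        // returns an action index
  applyAction(action);                         // advance the physics one tick
  const reward = before - distanceToTarget();  // reward progress toward the cube
  agent.learn(reward);
}
```

Rewarding the per-step reduction in distance to the cube is only one simple shaping choice; richer rewards (energy penalties, smoothness terms) would change what "mastered" movement ends up looking like.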