Open Issues Need Help
AI Summary: Profile the performance of the DreamerV3 RL model trained in the AutoDRIVE simulator using cProfile and SnakeViz to identify performance bottlenecks, with particular attention to slow autograd calls. This involves running the profiler, analyzing the results to pinpoint the most time-consuming functions, and potentially experimenting with different model sizes by adjusting the configuration files.
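A minimal sketch of the profiling workflow described above. The `train_step` function here is a stand-in; in practice you would wrap the SheepRL training loop (or a fixed number of its iterations):

```python
import cProfile
import pstats

def train_step():
    # Stand-in for one training iteration; profile the real SheepRL
    # training loop here instead.
    total = 0
    for i in range(1000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
train_step()
profiler.disable()

# Dump stats for interactive inspection in the browser:  snakeviz train.prof
profiler.dump_stats("train.prof")

# Or print the 10 most expensive calls by cumulative time directly.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

Sorting by `cumulative` time surfaces the high-level calls (such as the backward pass) that dominate a step, which is usually more useful for this issue than `tottime`.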
AI Summary: Develop a ROS package to enable deployment of the trained DreamerV3 RL model onto a real car. This involves creating a ROS node that subscribes to lidar and image data topics, performs inference using the pre-trained model weights, and publishes the resulting actions to appropriate control topics. The package should integrate with the existing SheepRL framework and AutoDRIVE simulator.
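A sketch of what such a node could look like. The topic names (`/scan`, `/drive`), message types, and the `load_policy` checkpoint loader are assumptions, not the actual car stack; the ROS imports are deferred into `main()` so the pure preprocessing helper can run without ROS installed:

```python
import numpy as np

def preprocess_lidar(ranges, max_range=10.0):
    """Clip inf/nan returns and normalize to [0, 1], mirroring the
    preprocessing the model is assumed to have seen in the simulator."""
    r = np.nan_to_num(np.asarray(ranges, dtype=np.float32),
                      nan=max_range, posinf=max_range)
    return np.clip(r, 0.0, max_range) / max_range

def main():
    # Deferred imports: only available on the car.
    import rospy
    from sensor_msgs.msg import LaserScan
    from ackermann_msgs.msg import AckermannDriveStamped

    rospy.init_node("dreamer_driver")
    pub = rospy.Publisher("/drive", AckermannDriveStamped, queue_size=1)
    policy = load_policy("checkpoint.ckpt")  # hypothetical loader for the SheepRL weights

    def on_scan(msg):
        obs = preprocess_lidar(msg.ranges)
        steer, speed = policy(obs)           # hypothetical inference call
        cmd = AckermannDriveStamped()
        cmd.drive.steering_angle = steer
        cmd.drive.speed = speed
        pub.publish(cmd)

    rospy.Subscriber("/scan", LaserScan, on_scan, queue_size=1)
    rospy.spin()
```

Keeping inference in the subscriber callback with `queue_size=1` means stale scans are dropped rather than queued, so the car always acts on the freshest observation.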
AI Summary: Understand and document the DreamerV3 implementation within the SheepRL project, creating learning materials to explain the algorithm to someone with basic deep reinforcement learning knowledge. This involves in-depth study of the codebase, the DreamerV3 paper, and potentially related projects like NaturalDreamer.
AI Summary: Troubleshoot and deploy the RL training setup (DreamerV3 with the AutoDRIVE simulator) on a remote Linux machine accessed via RustDesk. The current setup trains slowly on a MacBook; the goal is to identify and resolve performance bottlenecks so that training reaches acceptable speeds on the remote machine.
AI Summary: This task involves modifying the AutoDRIVE simulator and the SheepRL reinforcement learning environment to feed RGB camera data into the DreamerV3 model. This requires updating the observation space in `autodrive.py`, the DreamerV3 configuration, the `F1TenthRacing.cs` script in the AutoDRIVE simulator so it returns compressed RGB images, and the observation processing in `_convert_obs`. The goal is to improve model performance and interpretability by using visual information alongside lidar data.
AI Summary: Modify the AutoDRIVE simulator and the DreamerV3 RL agent to use a continuous action space for steering and acceleration, instead of the current discrete action space. This involves updating the `F1TenthRacing.cs` script in the Unity project and the `autodrive.py` script in the SheepRL environment to handle continuous action values between -1 and 1.
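A sketch of the Python side of that mapping, with assumed steering and throttle limits (the real limits come from the F1TENTH vehicle model in the Unity project):

```python
import numpy as np

# Assumed physical limits for illustration only.
MAX_STEER_RAD = 0.52   # about 30 degrees of steering lock
MAX_THROTTLE = 1.0

def map_action(action):
    """Map a policy action in [-1, 1]^2 to (steering, throttle).
    The env's action space would become e.g. gymnasium.spaces.Box(-1.0, 1.0, (2,))."""
    a = np.clip(np.asarray(action, dtype=np.float32), -1.0, 1.0)
    return float(a[0] * MAX_STEER_RAD), float(a[1] * MAX_THROTTLE)
```

Clipping before scaling keeps out-of-range policy outputs from commanding impossible steering angles; the C# side in `F1TenthRacing.cs` would apply the mirror-image scaling when it receives the action.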