Open Issues Needing Help
Issue: Context caching in LLM Gemini (opened 5 months ago)
AI Summary: Implement and share a unit-test file for context caching in a Gemini-based LLM project. The project uses microRTS, a simplified real-time strategy game for AI research. The context cache aims to optimize prompt handling by simulating long-term memory within the limitations of the Gemini API, which does not natively support it. Google's documentation on caching should be used as a guide. A hedged unit-test sketch follows this entry.
Complexity: 4/5
Labels: enhancement, good first issue
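
The issue asks for a unit-test file but does not show the project's actual caching interface, so the following is only a minimal sketch. The `ContextCache` class, its `put`/`get` methods, and the `ttl_seconds` parameter are all hypothetical stand-ins for whatever interface the project settles on; the tests illustrate the three behaviors such a cache would need to cover (hit, miss, expiry).

```python
# Hypothetical unit tests for a prompt-level context cache that simulates
# long-term memory. ContextCache and its interface are assumptions, not
# the project's actual API.
import time
import unittest


class ContextCache:
    """Hypothetical in-memory cache mapping a prompt prefix to a
    previously computed context, with a simple TTL for expiry."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl_seconds = ttl_seconds
        self._store = {}  # prefix -> (context, insertion timestamp)

    def put(self, prefix: str, context: str) -> None:
        self._store[prefix] = (context, time.monotonic())

    def get(self, prefix: str):
        entry = self._store.get(prefix)
        if entry is None:
            return None
        context, stored_at = entry
        if time.monotonic() - stored_at > self.ttl_seconds:
            del self._store[prefix]  # evict expired entries lazily
            return None
        return context


class TestContextCache(unittest.TestCase):
    def test_hit_returns_cached_context(self):
        cache = ContextCache()
        cache.put("game state: turn 12", "summarised history ...")
        self.assertEqual(cache.get("game state: turn 12"),
                         "summarised history ...")

    def test_miss_returns_none(self):
        cache = ContextCache()
        self.assertIsNone(cache.get("unknown prefix"))

    def test_expired_entry_is_evicted(self):
        cache = ContextCache(ttl_seconds=0.01)
        cache.put("game state: turn 1", "old context")
        time.sleep(0.02)
        self.assertIsNone(cache.get("game state: turn 1"))


if __name__ == "__main__":
    unittest.main()
```

Run with `python -m unittest` from the file's directory. If the project instead adopts a provider-side caching mechanism per Google's caching documentation, the same three cases (hit, miss, expiry) still apply, only the setup calls would change.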