MemOS (Preview) | Intelligence Begins with Memory

agent kv-cache language-model llm llm-memory long-term-memory lora memcube memory memory-management memory-operating-system memory-retrieval memory-scheduling memos neo4j rag retrieval-augmented-generation tree
2 open issues need help · Last updated: Jul 8, 2025


AI Summary: The task is to add vLLM as an alternative LLM backend for MemOS. This involves integrating vLLM alongside the existing Ollama backend so that users can serve high-concurrency workloads, where many user requests arrive simultaneously, with better throughput than the current Ollama backend provides.

Complexity: 4/5
Labels: good first issue · [BasicModules] · pending
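A pluggable backend abstraction is one natural way to approach this issue. The sketch below is hypothetical, not MemOS's actual API: the class and registry names are invented for illustration, and the `generate` bodies are stubs standing in for real Ollama/vLLM calls.

```python
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Hypothetical common interface for MemOS LLM backends."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class OllamaBackend(LLMBackend):
    def generate(self, prompt: str) -> str:
        # A real implementation would call the Ollama HTTP API here.
        return f"[ollama] {prompt}"


class VLLMBackend(LLMBackend):
    def generate(self, prompt: str) -> str:
        # A real implementation would talk to vLLM (e.g. its
        # OpenAI-compatible server), which batches concurrent
        # requests for higher throughput than sequential serving.
        return f"[vllm] {prompt}"


# Registry keyed by config string, so switching backends is a
# one-line configuration change rather than a code change.
BACKENDS = {"ollama": OllamaBackend, "vllm": VLLMBackend}


def make_backend(name: str) -> LLMBackend:
    return BACKENDS[name]()
```

With this shape, adding vLLM support reduces to implementing one adapter class and registering it, leaving callers untouched.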

Python
Add MCP support? · about 2 months ago

AI Summary: The task is to evaluate the feasibility and effort of adding MCP (Model Context Protocol) server and client support to the MemOS project. This involves assessing the existing architecture, designing the MCP integration, implementing the server and client components, and testing the functionality. The goal is to determine whether adding MCP support aligns with the project's goals and resources.

Complexity: 4/5
Labels: good first issue · pending · [Interface]
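MCP is built on JSON-RPC 2.0, with tools described by a name, a description, and a JSON Schema for their inputs. The sketch below shows what exposing a memory-search tool from an MCP server might look like; the `search_memory` tool name and schema are assumptions for illustration, not part of MemOS or the MCP spec.

```python
import json


def mcp_tool_definition():
    # Hypothetical MCP tool exposing MemOS memory search.
    # MCP tool descriptors carry a name, a human-readable
    # description, and a JSON Schema for the arguments.
    return {
        "name": "search_memory",
        "description": "Search stored memories by query string.",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }


def handle_tools_list(request_id):
    # Build a JSON-RPC 2.0 response to a tools/list request:
    # the server advertises its tool catalogue in "result".
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {"tools": [mcp_tool_definition()]},
    }


print(json.dumps(handle_tools_list(1), indent=2))
```

A full integration would also handle `initialize` and `tools/call` and run over an MCP transport (stdio or HTTP), but the message shapes above are the core of what a server advertises to clients.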
