We are co-organizing the ICLR Workshop on Memory for LLM-Based Agentic Systems
Agentic systems are increasingly deployed in high-stakes settings like robotics, autonomous web interaction, and software maintenance. Their success hinges on memory, i.e., how they encode, retain, retrieve, and consolidate experience into knowledge for future decisions.
While LLM memorization typically refers to static in-weights retention, agent memory encompasses online, interaction-driven memory under the agent's explicit control. This shift means agent capabilities hinge not only on raw model power, but also on memory-write policies and temporal credit assignment across episodes. MemAgents is a focused forum that explores foundational memory architectures and representations, such as episodic, semantic, and working memory, as well as their interfaces with external stores and parametric knowledge. Building on these foundations, the workshop will also discuss the broader memory layer that underpins agent behavior across domains, including software tools, embodied tasks, and multi-agent settings. We bridge three perspectives:
Memory Architectures: Episodic, semantic, working, and parametric memory.
Systems & Evaluation: Data structures, retrieval pipelines, and long-horizon benchmarks.
Neuroscience-Inspired Memory: Complementary learning systems and hippocampal-cortical consolidation as design inspiration.
More detailed information is available here.