
Knowledge graphs constructed by LTM don't seem very accurate. Is that a problem?

Layla

Updated: Mar 30, 2024

The more observant among you will notice that the knowledge graph constructed by the long-term memory app after ingesting a conversation is not entirely accurate.


This is not as big a problem as it might initially seem.


The knowledge graph is constructed from "embeddings", not "words". In other words, it is a machine representation of the knowledge in the conversation shard, and it may not necessarily make sense to humans. The knowledge graph corresponds to the L1 cache in LTM, which is used more as a "heuristic" than as a literal "subject -> entity" relation. The words and relations you see are what the LLM considers the key concepts, and they are primarily used to access the L2 cache, which contains factual summaries of the conversation shard.
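To make the L1/L2 distinction concrete, here is a minimal sketch of how such a two-tier lookup might work. All names here (MemoryStore, embed, the shard IDs) are hypothetical illustrations, not Layla's actual implementation, and the embedding function is a stand-in for a real model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (hypothetical), returning a unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class MemoryStore:
    """Two-tier memory sketch: L1 = embedded graph edges (heuristic index),
    L2 = factual summaries of conversation shards."""

    def __init__(self):
        self.l1 = []   # list of (edge_embedding, shard_id)
        self.l2 = {}   # shard_id -> factual summary text

    def ingest(self, shard_id: str, summary: str, edges: list[tuple[str, str, str]]):
        # L2 stores the human-readable factual summary.
        self.l2[shard_id] = summary
        # L1 stores embeddings of "subject -> relation -> object" edges.
        # The edge text only has to be meaningful to the model, not to us.
        for subj, rel, obj in edges:
            self.l1.append((embed(f"{subj} {rel} {obj}"), shard_id))

    def recall(self, query: str, k: int = 1) -> list[str]:
        # Rank L1 edges by cosine similarity (vectors are unit-normalised,
        # so a dot product suffices), then follow the hits into L2.
        q = embed(query)
        scored = sorted(self.l1, key=lambda e: -float(q @ e[0]))
        seen, summaries = set(), []
        for _, shard_id in scored:
            if shard_id not in seen:
                seen.add(shard_id)
                summaries.append(self.l2[shard_id])
            if len(summaries) == k:
                break
        return summaries

store = MemoryStore()
store.ingest(
    "shard-001",
    "The user adopted a cat named Miso in March.",
    [("user", "adopted", "cat"), ("cat", "named", "Miso")],
)
print(store.recall("what pet does the user have?"))
```

The point of the sketch is that nothing ever reads the L1 edges back as facts. They only rank which L2 summaries get fetched, so whether the edges read sensibly to a human doesn't matter.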


So, in short, don't fret too much if the entity relations don't seem to make sense. They do; they make sense to the LLM.


[Image: example knowledge graph of long-term memory]
