- It confuses the LLM, as it pushes the logits towards JSON garbage rather than natural language
- ONNX BERT?
- bert.cpp via JNA?
- using llama.cpp embeddings directly?
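Whichever backend wins (ONNX BERT, bert.cpp behind JNA, or llama.cpp embeddings), it could sit behind one small interface so callers never change. A minimal sketch; `Embedder` and `StubEmbedder` are hypothetical names, and the stub is a deterministic test stand-in, not a real model:

```python
from abc import ABC, abstractmethod
import hashlib
import math

class Embedder(ABC):
    """Common interface so the backend (ONNX BERT, bert.cpp via JNA,
    llama.cpp embeddings) can be swapped without touching callers."""
    @abstractmethod
    def embed(self, text: str) -> list[float]: ...

class StubEmbedder(Embedder):
    """Deterministic stand-in for tests: hashes character trigrams
    into a fixed-size unit vector. Not a real embedding model."""
    def __init__(self, dim: int = 64):
        self.dim = dim

    def embed(self, text: str) -> list[float]:
        vec = [0.0] * self.dim
        for i in range(len(text) - 2):
            h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
            vec[h % self.dim] += 1.0
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]
```

The real backend would replace `StubEmbedder` with a class that shells out to or links against the chosen library.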
- Add memories during converse into associative mem
- Retrieve similar memories (if any) => guids (associate)
- For prompt: frecency keywords => memories by keyword => most_similar => summarize top-k => context: {kw: summary}
- For converse: most_similar => summarize top-k => context
- Inject similar memories in user request context
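The add/retrieve flow above can be sketched with a brute-force store; an ANN index (hnswlib, below) would replace the linear scan. `AssociativeMemory` and `embed_fn` are hypothetical names, and frecency/keyword grouping is left out:

```python
import math
import time
import uuid

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

class AssociativeMemory:
    """Stores (guid, text, embedding) triples; retrieval is brute-force
    cosine similarity over every stored vector."""
    def __init__(self, embed_fn):
        self.embed = embed_fn          # hypothetical embedding callable
        self.items = {}                # guid -> (text, vector, added_at)

    def add(self, text):
        """Add a memory during converse; returns its guid."""
        guid = str(uuid.uuid4())
        self.items[guid] = (text, self.embed(text), time.time())
        return guid

    def most_similar(self, query, k=3):
        """Top-k (guid, text, score) for a query, best first."""
        qv = self.embed(query)
        scored = sorted(
            ((cosine(qv, vec), guid, text)
             for guid, (text, vec, _) in self.items.items()),
            reverse=True)
        return [(guid, text, score) for score, guid, text in scored[:k]]
```

The converse path would then be: `most_similar(user_message)` → summarize the top-k texts → prepend the summary to the prompt context.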
- https://github.com/nmslib/hnswlib (bind via JNA? clong? clang?)
- https://github.com/vioshyvo/mrpt (no persistence)
- get it to work simple example
- get it to work persistence, how to deal with scoping to user?!
- get it to work with embeddings and real data (find or generate some dataset)
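One answer to the per-user scoping question is one index file per user id. A stdlib sketch using pickle as a stand-in for the index format; with hnswlib the same path scheme would feed `save_index(path)` / `load_index(path)`. `UserScopedStore` is a hypothetical name:

```python
import os
import pickle

class UserScopedStore:
    """One persisted index file per user, keyed by user id."""
    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _path(self, user_id):
        # Assumes user_id is filesystem-safe; sanitize in real code.
        return os.path.join(self.root, f"{user_id}.pkl")

    def load(self, user_id):
        path = self._path(user_id)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)
        return []  # empty memory for a new user

    def save(self, user_id, items):
        with open(self._path(user_id), "wb") as f:
            pickle.dump(items, f)
```

Per-user files keep one user's memories out of another's nearest-neighbour results by construction, at the cost of loading an index per request; a shared index with id filtering is the alternative trade-off.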
- summarize using simple LexRank on imaginations?
- how to generate the right amount of context? (by summarizing?)
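The "simple LexRank" idea can be approximated with degree centrality: score each sentence by how many other sentences it resembles, keep the top-k. A rough sketch; real LexRank uses TF-IDF cosine and power iteration, here it is plain word-overlap similarity:

```python
import math
import re

def _sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def _similarity(a, b):
    """Word-overlap cosine; real LexRank uses TF-IDF cosine."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / (math.sqrt(len(wa)) * math.sqrt(len(wb)))

def lexrank_summary(text, k=2, threshold=0.1):
    """Degree-centrality approximation of LexRank: keep the k most
    central sentences, emitted in original order."""
    sents = _sentences(text)
    scores = []
    for i, si in enumerate(sents):
        score = sum(1 for j, sj in enumerate(sents)
                    if i != j and _similarity(si, sj) > threshold)
        scores.append((score, i))
    top = sorted(sorted(scores, reverse=True)[:k], key=lambda t: t[1])
    return " ".join(sents[i] for _, i in top)
```

Varying `k` is also one lever for the "right amount of context" question: summarize harder (smaller k) when the prompt budget is tight.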
- [X] Discoverability of input bar — Maybe show imagination?
- [X] Nicer / intelligent placeholder?
- [ ] Button for archive or other commands?
- [ ] Dark mode
- [ ] Logo + favicon
- [X] Loading indicator login
- [ ] Accessibility and ARIA
- [X] Disable text input while in modal confirm
- Stable Diffusion locally?
- maybe Stable Diffusion v0.9 (new version)
- integrate image describe into memory
- https://github.com/deep-floyd/IF maybe?
- store description and image in memory contents
- use image describe as context for converse
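The last three bullets suggest a memory entry pairing the image with its description, so the text-only converse path can "see" past images through the description. A minimal sketch under those assumptions; `ImageMemory`, `converse_context`, and the keyword-overlap selection are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ImageMemory:
    """Memory entry holding a generated image and its description
    (the description would come from an image-describe step)."""
    image_path: str
    description: str
    tags: list[str] = field(default_factory=list)

def converse_context(memories, query_words, limit=2):
    """Pick image descriptions sharing words with the user query and
    format them as plain-text context lines for the prompt."""
    qs = {w.lower() for w in query_words}
    hits = [m for m in memories
            if qs & {w.lower() for w in m.description.split()}]
    return "\n".join(f"[image: {m.image_path}] {m.description}"
                     for m in hits[:limit])
```

In practice the selection step would reuse the embedding-based `most_similar` retrieval rather than raw word overlap.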