⭐️ tg-ollama migrated to
tg-local-llm (no longer depends on Ollama)
- Reworked prompting
- Streaming responses (doesn't wait for each message to be fully generated before responding)
- "typing" note
- Ability to respond with a message before using a tool, and edit it with an actual response afterwards
- Leverages structured outputs to constrain the response format, making responses much more consistent and reliable
- Security measures (filesystem, network access via tools)
- Proper .env usage
- /ai to show current preferences
- /ai [field] [value] to change a preference
- limit preference to enable displaying the message usage limit
- Deno rewrite
- More troubleshooting solutions, bigger README
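The structured-outputs point above can be sketched as follows. This is a minimal illustration, not the project's actual code: the schema fields (`message`, `tool`) and the `parseReply` helper are hypothetical, showing only the general idea of validating model output against an expected shape before sending it.

```typescript
// Hypothetical reply schema: the model is constrained to emit JSON
// of this shape, so the bot can parse it reliably.
interface BotReply {
  message: string; // text to send to the chat
  tool?: string;   // optional tool to invoke afterwards
}

// Validate raw model output against the expected shape.
// Returns null instead of sending malformed output to the chat.
function parseReply(raw: string): BotReply | null {
  try {
    const data = JSON.parse(raw);
    if (typeof data.message !== "string") return null;
    if (data.tool !== undefined && typeof data.tool !== "string") return null;
    return data as BotReply;
  } catch {
    return null; // not valid JSON at all
  }
}
```

Constraining decoding to a schema like this is what makes parse failures rare compared to free-form prompting.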
➡️ Previous update
➡️ GitHub