
Releases: alondmnt/joplin-plugin-jarvis

v0.8.5

04 Jun 01:58
  • new: separate settings sections for chat, related notes, annotations and research
  • fix: set default values for API keys
    • this is a workaround that ensures that keys are saved securely to your keychain (where available)
  • changed default settings
    • context tokens: increased to 2048
    • annotation: tags method changed to existing tags

v0.8.4

15 May 12:06
  • fix: work with Joplin <v3

v0.8.3

15 May 00:08
  • note that the minimum Joplin app version for this release is v3.0
  • improve: exclude notes in trash from db
  • improve: OpenAI model updates
    • added gpt-4o (latest model)
    • deprecated legacy gpt-3.5-turbo-16k
      • gpt-3.5-turbo points to a newer version of this model
    • deprecated legacy gpt-4
    • all legacy models are still accessible via the openai-custom model setting
    • improved model descriptions with tokens / price category

v0.8.0

04 May 03:48

new features

  • revamped Edit with Jarvis command (@jakubjezek001) (screenshot below)
  • Auto-complete with Jarvis command to autocomplete any text at the current cursor position
  • scroll to line of a found note chunk from the panel
  • chat context preview dialog (screenshot below)
  • token counter command
  • display note similarity score in panel

new models

  • OpenAI
    • replace text-davinci (deprecated) models with gpt-3.5-turbo-instruct (@jakubjezek001)
    • 3rd generation embedding / notes models text-embedding-3-small and text-embedding-3-large
    • chat model gpt-4-turbo: an efficient, strong model with a context window of 128K tokens
  • Google AI
    • deprecated PaLM
    • chat models gemini-1-pro and gemini-1.5-pro (a strong model with a context window of 1M tokens!)
    • embedding / notes models embedding-001 and text-embedding-004

new settings

  • Notes: Context tokens: the number of context tokens to extract from notes in "Chat with your notes" (previously used Chat: Memory tokens)
  • Notes: Context history: the number of user prompts to base notes context on for "Chat with your notes"
  • Notes: Custom prompt: the prompt (or additional instructions) to use for generating "Chat with your notes" responses
  • Notes: Parallel jobs: the number of parallel jobs to use for calculating text embeddings

chat improvements

  • chat display format (screenshot below)
  • chat with notes default prompt
  • chat parsing

general improvements

  • CodeMirror 6 / beta editor support
  • load USE from cache instead of re-downloading every time
  • faster model test on startup / model switch
  • various fixes

ux

  • new standard dialog style

Screenshot 1: New edit dialog

Screenshot 2: New chat context preview

Screenshot 3: New chat display format

v0.8.0-alpha.2

20 Apr 14:31
Pre-release

Pre-release with a number of features planned for v0.8.0. Items in bold were added in this release.

  • new features
    • revamped note edit command interface (@jakubjezek001)
    • scroll to line of a found note chunk from the panel
    • chat context preview dialog
    • token counter command
    • display note similarity score in panel
  • new settings
    • Notes: Context tokens: the number of context tokens to extract from notes in "Chat with your notes" (previously used Chat: Memory tokens)
    • Notes: Context history: the number of user prompts to base notes context on for "Chat with your notes"
    • Notes: Custom prompt: the prompt (or additional instructions) to use for generating "Chat with your notes" responses
    • Notes: Parallel jobs: the number of parallel jobs to use for calculating text embeddings
  • chat improvements
    • chat display format
    • chat with notes default prompt
    • chat parsing
  • general improvements
    • CodeMirror 6 / beta editor support
    • load USE from cache instead of re-downloading every time
    • faster model test
  • new models
    • replace text-davinci models with gpt-3.5-turbo-instruct (@jakubjezek001)
  • ux
    • new standard dialog style

v0.8.0-alpha.1

18 Apr 01:11
Pre-release

Pre-release with a number of features planned for v0.8.0.

  • new features
    • revamped note edit command interface (@jakubjezek001)
    • scroll to line of a found note chunk from the panel
    • chat context preview dialog
    • display note similarity score in panel
  • new settings
    • Notes: Context tokens: the number of context tokens to extract from notes in "Chat with your notes" (previously used Chat: Memory tokens)
  • chat improvements
    • default chat context based on the last user prompt
    • chat with notes prompt
    • chat parsing
  • general improvements
    • CodeMirror 6 / beta editor support
    • load USE from cache instead of re-downloading every time
    • faster model test
  • new models
    • replace text-davinci models with gpt-3.5-turbo-instruct (@jakubjezek001)
  • ux
    • new standard dialog style

v0.7.0

31 Aug 19:09

Jarvis can now work completely offline! (Continue reading)
This release adds two new model interfaces.

Google PaLM

  • If you have access to it (it's free), you can use it for chat and for related notes.

Custom OpenAI-like APIs

  • This allows Jarvis to use custom endpoints and models that expose an OpenAI-compatible interface (a minimal request sketch follows this list).
  • Example: [tested] OpenRouter (for ebc000) setup guide
  • Example: [not tested] Azure OpenAI (previously requested)
  • Example: [tested] Locally served GPT4All (for laurent, and everyone else who showed interest) setup guide
    • This is an open-source, offline model (you can in fact choose from several available models) that you can install and run on a laptop. It can be used for chat, and potentially also for related notes (embeddings didn't work for me, probably due to a gpt4all issue, but related notes already support the offline USE model).
    • This setup is not ideal, as running your own server may be technically challenging for some users, but at the moment it looks like the only viable workaround, and it doesn't involve many steps.
  • Example: [not tested] LocalAI
    • This is another self-hosted server that supports many models, in case you run into issues with GPT4All.
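
For context, an "OpenAI-compatible" API simply means the server accepts the same request and response shapes as OpenAI's chat completions endpoint. Below is a minimal TypeScript sketch of such a request, not Jarvis' actual code; the base URL, port (shown here as a typical GPT4All default), model ID, and API key are assumptions that depend on your particular server.

```typescript
// Minimal sketch of an OpenAI-compatible chat request (not Jarvis' actual code).
// Base URL, port, model ID, and API key are placeholders -- adjust to your server.
async function chatCompletion(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:4891/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Local servers typically ignore the key; hosted services (e.g. OpenRouter) require a real one.
      'Authorization': 'Bearer YOUR_API_KEY',
    },
    body: JSON.stringify({
      model: 'local-model',  // placeholder model ID
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 256,
    }),
  });
  const data = await response.json();
  // OpenAI-compatible servers return the reply in choices[0].message.content
  return data.choices[0].message.content;
}
```

With a hosted service such as OpenRouter, typically only the base URL, model ID, and key change; the request body stays the same.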

Full Changelog: v0.6.0...v0.7.0

v0.6.0

09 Aug 20:57
  • Annotations
    • This release introduces the toolbar button / command Annotate note with Jarvis. It can automatically annotate a note based on its content in 4 ways: by setting the title of the note; by adding a summary section; by adding links to related notes; and by adding tags (gpt-4 is recommended for tags). Each of these 4 features can be turned on or off in the settings to customize the behavior of the command, and each sub-command can also be run separately. For more information see this guide.
  • System message
    • You may edit it in the settings to tell Jarvis who he is and what his purpose is, and to provide more information about yourself and your interests, in order to customize Jarvis' responses.

Full Changelog: v0.5.3...v0.6.0

v0.5.3

20 Jul 19:10
  • new: custom OpenAI model IDs (closes #12)
    • select Chat: Model "(online) OpenAI: custom"
    • in the Advanced Settings section, set Chat: OpenAI custom model ID, for example: gpt-4-0314

v0.5.2

19 Jul 19:04
  • new: search box in the related notes panel
    • use free text to semantically search for related notes
    • in the example below, the notes are sorted by their relevance to the query in the box
      • within each note, its sections are sorted by their relevance (see the ranking sketch further below)
    • you may hide it in the settings ("Notes: Show search box")
Screenshot: search box in the related notes panel
  • new: global commands for chat with notes
    • any command that appears in a "jarvis" code block will set its default behavior for the current chat / note

    • you may override this default by using the command again within a specific prompt in the chat

    • for example:

            ```jarvis
            Context: This is the default context for each prompt in the chat.
            Search: This is the default search query.
            Not context: This is the default text that will be excluded from semantic search, and appended to every prompt.
            ```
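
Returning to the search box above: as a rough illustration of how this kind of semantic ranking generally works (a sketch only, not necessarily Jarvis' implementation), the free-text query and note chunks are mapped to embedding vectors, and chunks are sorted by cosine similarity to the query.

```typescript
// Illustrative only: rank note chunks by cosine similarity to a query embedding.
// The embedding model, chunking, and scoring used by Jarvis may differ.
interface NoteChunk {
  noteId: string;
  line: number;        // line to scroll to in the note
  embedding: number[]; // embedding vector of the chunk
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return chunks ordered from most to least relevant to the query.
function rankChunks(queryEmbedding: number[], chunks: NoteChunk[]): NoteChunk[] {
  return [...chunks].sort(
    (a, b) =>
      cosineSimilarity(queryEmbedding, b.embedding) -
      cosineSimilarity(queryEmbedding, a.embedding)
  );
}
```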
      

Full Changelog: v0.5.1...v0.5.2