# genai 2.0.0

Generative AI Tooling for IPythonic Platforms.

Get GPT help with code, SQL queries, DataFrames, exceptions, and more in IPython. Supports all Jupyter environments, including IPython, JupyterLab, Jupyter Notebook, and Noteable.

Install `genai` in your notebook environment now:

```python
%pip install genai
%load_ext genai
```
## Enhancements

### Added

- Keep conversations flowing with `%%assist` (#66)
- Emit suggestions as Markdown instead of creating new cells (#66)
- Model selection made easy with the `--model` flag for `%%assist` (#65)
- Introducing `GenaiMarkdown`, a dynamic Markdown display (#61)
- Create a `%%prompt` magic for setting the default prompts for assistance and exceptions (#71, #69)
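To make the new `--model` flag concrete, here is a minimal sketch of how a cell magic can parse its argument line before building a prompt. This is illustrative only: `parse_assist_args` and the default model name are assumptions for the example, not genai's actual implementation.

```python
import argparse

def parse_assist_args(line: str) -> argparse.Namespace:
    """Parse the argument line of a hypothetical %%assist cell magic.

    Only --model is handled here; the real magic may accept more flags.
    """
    parser = argparse.ArgumentParser(prog="%%assist", add_help=False)
    # Default model name is an assumption for illustration.
    parser.add_argument("--model", default="gpt-3.5-turbo")
    # parse_known_args lets unrecognized text pass through untouched.
    args, _unknown = parser.parse_known_args(line.split())
    return args

args = parse_assist_args("--model gpt-4")
print(args.model)  # -> gpt-4
```

With this shape, `%%assist --model gpt-4` routes the request to a different model while the cell body stays the prompt.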
### Changed

- Craft a more ipythonic context manager (#62, #66)
- Meet the new `Context` class: capture IPython history and make it ChatCompletion-friendly
- Farewell `get_historical_context`, hello `build_context`: context construction using the new `Context` class
- Reduce messages sent to GPT models by trimming based on the estimated number of tokens (#57)
- Type annotations step in! (#59)
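As a hedged sketch of the `Context` idea, the example below collects (source, output) pairs from notebook history and renders them as ChatCompletion-style message dicts. The class body and the exact shape of `build_context`'s input are assumptions made so the example stays self-contained; they are not genai's actual code.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Context:
    """Minimal sketch: collect IPython history as chat messages."""
    messages: list = field(default_factory=list)

    def append_exchange(self, source: str, output: Optional[str] = None) -> None:
        # A cell's source becomes a user message; its output, if any,
        # becomes an assistant message.
        self.messages.append({"role": "user", "content": source})
        if output is not None:
            self.messages.append({"role": "assistant", "content": output})

def build_context(history) -> Context:
    """Sketch of build_context: fold (source, output) pairs into a Context."""
    ctx = Context()
    for source, output in history:
        ctx.append_exchange(source, output)
    return ctx

ctx = build_context([("print('hi')", "hi"), ("1 + 1", "2")])
print(len(ctx.messages))  # -> 4
```

The resulting `ctx.messages` list can be passed directly as the `messages` argument of a ChatCompletion request.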
### Improved

- Token length checks now available in `%%assist` (#57)
- Code refactoring: introducing `craft_message`, `repr_genai_pandas`, and `repr_genai` for more organized and readable code
- Enhanced pandas support: optimized DataFrame and Series representations for Large Language Model consumption using Markdown format
- Token management: a new module, `tokens.py`, featuring `num_tokens_from_messages` and `trim_messages_to_fit_token_limit` to help you stay within model limitations and budget
- Update assist magic documentation (#70)
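The token-management helpers can be sketched as follows. The real `tokens.py` presumably counts tokens with a proper model-specific tokenizer (such as tiktoken); the crude word-count estimate below is an assumption made so the example stays self-contained, and the function bodies are illustrative rather than genai's actual implementation.

```python
def estimate_tokens(message: dict) -> int:
    """Crude stand-in for num_tokens_from_messages: roughly one token
    per whitespace-separated word, plus a small per-message overhead.
    A real implementation would use a model-specific tokenizer."""
    return len(message["content"].split()) + 4

def trim_messages_to_fit_token_limit(messages: list, max_tokens: int) -> list:
    """Drop the oldest messages until the estimated total fits the budget."""
    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)  # discard oldest first
    return trimmed

history = [
    {"role": "user", "content": "a very long question " * 50},
    {"role": "assistant", "content": "short answer"},
]
print(len(trim_messages_to_fit_token_limit(history, max_tokens=50)))  # -> 1
```

Trimming oldest-first keeps the most recent exchanges, which usually matter most for follow-up assistance.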
### Removed

- `%%assist` no longer generates new code cells; it now creates Markdown output instead (#66)
- Relatedly, `in-place` is no longer an option, since we do not change the cells
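Since `%%assist` now emits Markdown rather than inserting cells, its output can be modeled by an object implementing IPython's `_repr_markdown_` display protocol, which is the kind of hook a dynamic display like `GenaiMarkdown` builds on. The class below is a minimal stand-in for illustration, not genai's actual `GenaiMarkdown`.

```python
class StreamingMarkdown:
    """Minimal stand-in for a dynamic Markdown display: accumulate text
    and expose it via IPython's _repr_markdown_ protocol."""

    def __init__(self, text: str = "") -> None:
        self.text = text

    def append(self, more: str) -> None:
        # In a live notebook, appending would be followed by a display
        # update (e.g. via an IPython display handle); omitted here.
        self.text += more

    def _repr_markdown_(self) -> str:
        return self.text

md = StreamingMarkdown("**Suggestion:** ")
md.append("use `df.head()` to preview the DataFrame.")
print(md._repr_markdown_())
```

Because the suggestion is rendered output rather than a cell, nothing in the notebook's code is modified, which is why the `in-place` option no longer applies.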
### Changes

- `craft_user_message` now relies on the new `craft_message` function
- `craft_output_message` has been upgraded to use the new `repr_genai` function
- `get_historical_context` now sports an additional `model` parameter and utilizes `tokens.trim_messages_to_fit_token_limit`
- For clarity, the `ignore_tokens` list now uses the term "first line" instead of "start"
- GPT-4 token counting and message trimming now supported in `tokens.py`
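The `ignore_tokens` change can be illustrated like this: a history entry is skipped when its *first line* matches an ignore token, rather than when the raw text merely starts with one. The token list and helper name below are assumptions for the sketch; genai's actual list and matching logic may differ.

```python
# Hypothetical ignore list: cells whose first line is one of these
# magics should not be fed back into the model's context.
IGNORE_TOKENS = ["%%assist", "%%prompt", "%load_ext genai"]

def should_ignore(source: str) -> bool:
    """Skip a history entry when its first line matches an ignore token."""
    first_line = source.splitlines()[0].strip() if source else ""
    return any(first_line.startswith(token) for token in IGNORE_TOKENS)

print(should_ignore("%%assist\nwrite a sorting function"))  # -> True
print(should_ignore("def add(a, b):\n    return a + b"))    # -> False
```

Matching on the first line avoids excluding ordinary cells that happen to mention a magic later in their body.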