Add ConversationExistsAsync method (#81)
marcominerva committed Jun 26, 2023
2 parents 93e2355 + 4485a83 commit e4e3b72
Showing 18 changed files with 109 additions and 18 deletions.
6 changes: 6 additions & 0 deletions README.md
@@ -93,6 +93,12 @@ If necessary, it is possible to provide a custom Cache by implementing the [ICh
localCache.Remove(conversationId);
return Task.CompletedTask;
}

public Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
{
var exists = localCache.ContainsKey(conversationId);
return Task.FromResult(exists);
}
}

// Registers the custom cache at application startup.
4 changes: 2 additions & 2 deletions docs/ChatGptNet.Models/OpenAIChatGptModels.md
@@ -11,9 +11,9 @@ public static class OpenAIChatGptModels
| name | description |
| --- | --- |
| const [Gpt35Turbo](OpenAIChatGptModels/Gpt35Turbo.md) | GPT-3.5 model can understand and generate natural language or code and it is optimized for chat. |
| const [Gpt35Turbo_16k](OpenAIChatGptModels/Gpt35Turbo_16k.md) | A model with the same capabilities as the standard [`Gpt35Turbo`](./OpenAIChatGptModels/Gpt35Turbo.md) model but with 4 times the context. |
| const [Gpt35Turbo_16k](OpenAIChatGptModels/Gpt35Turbo_16k.md) | A model with the same capabilities as the standard [`Gpt35Turbo`](./OpenAIChatGptModels/Gpt35Turbo.md) model but with 4 times the token limit of [`Gpt35Turbo`](./OpenAIChatGptModels/Gpt35Turbo.md). |
| const [Gpt4](OpenAIChatGptModels/Gpt4.md) | GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. It is optimized for chat but works well for traditional completions tasks. |
| const [Gpt4_32k](OpenAIChatGptModels/Gpt4_32k.md) | A model with the same capabilities as the base [`Gpt4`](./OpenAIChatGptModels/Gpt4.md) model but with 4x the context length. |
| const [Gpt4_32k](OpenAIChatGptModels/Gpt4_32k.md) | A model with the same capabilities as the base [`Gpt4`](./OpenAIChatGptModels/Gpt4.md) model but with 4 times the token limit of [`Gpt4`](./OpenAIChatGptModels/Gpt4.md). |

## Remarks

2 changes: 1 addition & 1 deletion docs/ChatGptNet.Models/OpenAIChatGptModels/Gpt35Turbo.md
@@ -8,7 +8,7 @@ public const string Gpt35Turbo;

## Remarks

See [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5) for more information.
This model supports 4,096 tokens. See [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5) for more information.

## See Also

4 changes: 2 additions & 2 deletions docs/ChatGptNet.Models/OpenAIChatGptModels/Gpt35Turbo_16k.md
@@ -1,14 +1,14 @@
# OpenAIChatGptModels.Gpt35Turbo_16k field

A model with the same capabilities as the standard [`Gpt35Turbo`](./Gpt35Turbo.md) model but with 4 times the context.
A model with the same capabilities as the standard [`Gpt35Turbo`](./Gpt35Turbo.md) model but with 4 times the token limit of [`Gpt35Turbo`](./Gpt35Turbo.md).

```csharp
public const string Gpt35Turbo_16k;
```

## Remarks

See [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5) for more information.
This model supports 16,384 tokens. See [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5) for more information.

## See Also

2 changes: 1 addition & 1 deletion docs/ChatGptNet.Models/OpenAIChatGptModels/Gpt4.md
@@ -8,7 +8,7 @@ public const string Gpt4;

## Remarks

This model is currently in a limited beta and only accessible to those who have been granted access. See [GPT-4](https://platform.openai.com/docs/models/gpt-4) for more information.
This model supports 8,192 tokens and is currently in a limited beta and only accessible to those who have been granted access. See [GPT-4](https://platform.openai.com/docs/models/gpt-4) for more information.

## See Also

4 changes: 2 additions & 2 deletions docs/ChatGptNet.Models/OpenAIChatGptModels/Gpt4_32k.md
@@ -1,14 +1,14 @@
# OpenAIChatGptModels.Gpt4_32k field

A model with the same capabilities as the base [`Gpt4`](./Gpt4.md) model but with 4x the context length.
A model with the same capabilities as the base [`Gpt4`](./Gpt4.md) model but with 4 times the token limit of [`Gpt4`](./Gpt4.md).

```csharp
public const string Gpt4_32k;
```

## Remarks

This model is currently in a limited beta and only accessible to those who have been granted access. See [GPT-4](https://platform.openai.com/docs/models/gpt-4) for more information.
This model supports 32,768 tokens and is currently in a limited beta and only accessible to those who have been granted access. See [GPT-4](https://platform.openai.com/docs/models/gpt-4) for more information.

## See Also

1 change: 1 addition & 0 deletions docs/ChatGptNet/IChatGptCache.md
@@ -10,6 +10,7 @@ public interface IChatGptCache

| name | description |
| --- | --- |
| [ExistsAsync](IChatGptCache/ExistsAsync.md)(…) | Gets a value that indicates whether the given conversation exists in the cache. |
| [GetAsync](IChatGptCache/GetAsync.md)(…) | Gets the list of messages for the given *conversationId*. |
| [RemoveAsync](IChatGptCache/RemoveAsync.md)(…) | Removes from the cache all the messages for the given *conversationId*. |
| [SetAsync](IChatGptCache/SetAsync.md)(…) | Saves the list of messages for the given *conversationId*, using the specified *expiration*. |
23 changes: 23 additions & 0 deletions docs/ChatGptNet/IChatGptCache/ExistsAsync.md
@@ -0,0 +1,23 @@
# IChatGptCache.ExistsAsync method

Gets a value that indicates whether the given conversation exists in the cache.

```csharp
public Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
```

| parameter | description |
| --- | --- |
| conversationId | The unique identifier of the conversation. |
| cancellationToken | The token to monitor for cancellation requests. |

## Return Value

A Task whose result is `true` if the conversation exists in the cache; otherwise, `false`.

## See Also

* interface [IChatGptCache](../IChatGptCache.md)
* namespace [ChatGptNet](../../ChatGptNet.md)

<!-- DO NOT EDIT: generated by xmldocmd for ChatGptNet.dll -->
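For a custom cache that keeps conversations in an in-memory dictionary, this member can be implemented in a couple of lines. The following sketch mirrors the samples in this commit but uses a `ConcurrentDictionary` for thread safety; the class name and the stored value type are illustrative, not something ChatGptNet defines:

```csharp
using System.Collections.Concurrent;
using ChatGptNet.Models;

public class LocalMessageCacheSketch
{
    // Illustrative backing store; the samples in this commit use a plain Dictionary the same way.
    private readonly ConcurrentDictionary<Guid, List<ChatGptMessage>> localCache = new();

    public Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
    {
        // The lookup itself is synchronous, so the result is wrapped in an already-completed task.
        var exists = localCache.ContainsKey(conversationId);
        return Task.FromResult(exists);
    }

    // GetAsync, SetAsync, and RemoveAsync are also required by IChatGptCache but omitted here.
}
```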
1 change: 1 addition & 0 deletions docs/ChatGptNet/IChatGptClient.md
@@ -13,6 +13,7 @@ public interface IChatGptClient
| [AddFunctionResponseAsync](IChatGptClient/AddFunctionResponseAsync.md)(…) | Adds a function response to the conversation history. |
| [AskAsync](IChatGptClient/AskAsync.md)(…) | Requests a new chat interaction using the default completion model specified in the [`DefaultModel`](./ChatGptOptions/DefaultModel.md) property. (4 methods) |
| [AskStreamAsync](IChatGptClient/AskStreamAsync.md)(…) | Requests a new chat interaction (using the default completion model specified in the [`DefaultModel`](./ChatGptOptions/DefaultModel.md) property) with streaming response, like in ChatGPT. (2 methods) |
| [ConversationExistsAsync](IChatGptClient/ConversationExistsAsync.md)(…) | Determines if a chat conversation exists. |
| [DeleteConversationAsync](IChatGptClient/DeleteConversationAsync.md)(…) | Deletes a chat conversation, clearing all the history. |
| [GetConversationAsync](IChatGptClient/GetConversationAsync.md)(…) | Retrieves a chat conversation from the cache. |
| [LoadConversationAsync](IChatGptClient/LoadConversationAsync.md)(…) | Loads messages into a new conversation. (2 methods) |
24 changes: 24 additions & 0 deletions docs/ChatGptNet/IChatGptClient/ConversationExistsAsync.md
@@ -0,0 +1,24 @@
# IChatGptClient.ConversationExistsAsync method

Determines if a chat conversation exists.

```csharp
public Task<bool> ConversationExistsAsync(Guid conversationId,
CancellationToken cancellationToken = default)
```

| parameter | description |
| --- | --- |
| conversationId | The unique identifier of the conversation. |
| cancellationToken | The token to monitor for cancellation requests. |

## Return Value

`true` if the conversation exists; otherwise, `false`.

## See Also

* interface [IChatGptClient](../IChatGptClient.md)
* namespace [ChatGptNet](../../ChatGptNet.md)

<!-- DO NOT EDIT: generated by xmldocmd for ChatGptNet.dll -->
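As a usage sketch (the helper method and its names are illustrative, not part of the library), a caller can check whether a conversation is still cached before reusing its identifier:

```csharp
using ChatGptNet;

public static class ConversationHelpers
{
    // Illustrative extension method: reuse the given conversation id only if ChatGptNet still has it cached.
    public static async Task<Guid> GetOrCreateConversationIdAsync(
        this IChatGptClient chatGptClient,
        Guid conversationId,
        CancellationToken cancellationToken = default)
    {
        var exists = await chatGptClient.ConversationExistsAsync(conversationId, cancellationToken);

        // When the conversation has expired (or was never created), start a fresh one.
        return exists ? conversationId : Guid.NewGuid();
    }
}
```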
6 changes: 6 additions & 0 deletions samples/ChatGptConsole/Program.cs
@@ -63,4 +63,10 @@ public Task RemoveAsync(Guid conversationId, CancellationToken cancellationToken
localCache.Remove(conversationId);
return Task.CompletedTask;
}

public Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
{
var exists = localCache.ContainsKey(conversationId);
return Task.FromResult(exists);
}
}
6 changes: 6 additions & 0 deletions samples/ChatGptFunctionCallingConsole/Program.cs
@@ -63,4 +63,10 @@ public Task RemoveAsync(Guid conversationId, CancellationToken cancellationToken
localCache.Remove(conversationId);
return Task.CompletedTask;
}

public Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
{
var exists = localCache.ContainsKey(conversationId);
return Task.FromResult(exists);
}
}
6 changes: 6 additions & 0 deletions src/ChatGptNet/ChatGptClient.cs
@@ -180,6 +180,12 @@ public async Task<IEnumerable<ChatGptMessage>> GetConversationAsync(Guid convers
return messages;
}

public async Task<bool> ConversationExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
{
var exists = await cache.ExistsAsync(conversationId, cancellationToken);
return exists;
}

public async Task DeleteConversationAsync(Guid conversationId, bool preserveSetup = false, CancellationToken cancellationToken = default)
{
if (!preserveSetup)
6 changes: 6 additions & 0 deletions src/ChatGptNet/ChatGptMemoryCache.cs
@@ -29,4 +29,10 @@ public Task RemoveAsync(Guid conversationId, CancellationToken cancellationToken
cache.Remove(conversationId);
return Task.CompletedTask;
}

public Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
{
var exists = cache.TryGetValue(conversationId, out _);
return Task.FromResult(exists);
}
}
4 changes: 0 additions & 4 deletions src/ChatGptNet/ChatGptNet.csproj
@@ -30,17 +30,13 @@
<PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="6.0.0" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection.Abstractions" Version="6.0.0" />
<PackageReference Include="Microsoft.Extensions.Http" Version="6.0.0" />
<PackageReference Include="System.Net.Http.Json" Version="6.0.1" />
<PackageReference Include="System.Text.Json" Version="6.0.8" />
</ItemGroup>

<ItemGroup Condition="'$(TargetFramework)' == 'net7.0'">
<PackageReference Include="Microsoft.Extensions.Caching.Memory" Version="7.0.0" />
<PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="7.0.4" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection.Abstractions" Version="7.0.0" />
<PackageReference Include="Microsoft.Extensions.Http" Version="7.0.0" />
<PackageReference Include="System.Net.Http.Json" Version="7.0.1" />
<PackageReference Include="System.Text.Json" Version="7.0.3" />
</ItemGroup>

<ItemGroup>
8 changes: 8 additions & 0 deletions src/ChatGptNet/IChatGptCache.cs
@@ -34,4 +34,12 @@ public interface IChatGptCache
/// <param name="cancellationToken">The token to monitor for cancellation requests.</param>
/// <returns>The <see cref="Task"/> corresponding to the asynchronous operation.</returns>
Task RemoveAsync(Guid conversationId, CancellationToken cancellationToken = default);

/// <summary>
/// Gets a value that indicates whether the given conversation exists in the cache.
/// </summary>
/// <param name="conversationId">The unique identifier of the conversation.</param>
/// <param name="cancellationToken">The token to monitor for cancellation requests.</param>
/// <returns><see langword="true"/> if the conversation exists in the cache; otherwise, <see langword="false"/>.</returns>
Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default);
}
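Beyond the in-memory implementations touched by this commit, the new member also maps naturally onto distributed caches. A hedged sketch, assuming conversations are stored under their Guid rendered as a string key via Microsoft.Extensions.Caching.Distributed.IDistributedCache (neither the class nor the key scheme is defined by this commit):

```csharp
using Microsoft.Extensions.Caching.Distributed;

public class DistributedChatGptCacheSketch
{
    private readonly IDistributedCache distributedCache;

    public DistributedChatGptCacheSketch(IDistributedCache distributedCache)
        => this.distributedCache = distributedCache;

    public async Task<bool> ExistsAsync(Guid conversationId, CancellationToken cancellationToken = default)
    {
        // A non-null payload means the conversation is still stored in the distributed cache.
        // The string key derived from the Guid is an assumption, not something ChatGptNet defines.
        var payload = await distributedCache.GetAsync(conversationId.ToString(), cancellationToken);
        return payload is not null;
    }

    // The remaining IChatGptCache members (GetAsync, SetAsync, RemoveAsync) are omitted for brevity.
}
```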
8 changes: 8 additions & 0 deletions src/ChatGptNet/IChatGptClient.cs
@@ -182,6 +182,14 @@ Task<Guid> LoadConversationAsync(IEnumerable<ChatGptMessage> messages, Cancellat
/// <seealso cref="ChatGptOptions.MessageLimit"/>
Task<Guid> LoadConversationAsync(Guid conversationId, IEnumerable<ChatGptMessage> messages, bool replaceHistory = true, CancellationToken cancellationToken = default);

/// <summary>
/// Determines if a chat conversation exists.
/// </summary>
/// <param name="conversationId">The unique identifier of the conversation.</param>
/// <param name="cancellationToken">The token to monitor for cancellation requests.</param>
/// <returns><see langword="true"/> if the conversation exists; otherwise, <see langword="false"/>.</returns>
public Task<bool> ConversationExistsAsync(Guid conversationId, CancellationToken cancellationToken = default);

/// <summary>
/// Deletes a chat conversation, clearing all the history.
/// </summary>
12 changes: 6 additions & 6 deletions src/ChatGptNet/Models/OpenAIChatGptModels.cs
@@ -15,16 +15,16 @@ public static class OpenAIChatGptModels
/// GPT-3.5 model can understand and generate natural language or code and it is optimized for chat.
/// </summary>
/// <remarks>
/// See <see href="https://platform.openai.com/docs/models/gpt-3-5">GPT-3.5</see> for more information.
/// This model supports 4,096 tokens. See <see href="https://platform.openai.com/docs/models/gpt-3-5">GPT-3.5</see> for more information.
/// </remarks>
/// <seealso cref="Gpt35Turbo_16k"/>
public const string Gpt35Turbo = "gpt-3.5-turbo";

/// <summary>
/// A model with the same capabilities as the standard <see cref="Gpt35Turbo"/> model but with 4 times the context.
/// A model with the same capabilities as the standard <see cref="Gpt35Turbo"/> model but with 4 times the token limit of <see cref="Gpt35Turbo"/>.
/// </summary>
/// <remarks>
/// See <see href="https://platform.openai.com/docs/models/gpt-3-5">GPT-3.5</see> for more information.
/// This model supports 16,384 tokens. See <see href="https://platform.openai.com/docs/models/gpt-3-5">GPT-3.5</see> for more information.
/// </remarks>
/// <seealso cref="Gpt35Turbo"/>
public const string Gpt35Turbo_16k = "gpt-3.5-turbo-16k";
@@ -33,16 +33,16 @@ public static class OpenAIChatGptModels
/// GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. It is optimized for chat but works well for traditional completions tasks.
/// </summary>
/// <remarks>
/// This model is currently in a limited beta and only accessible to those who have been granted access. See <see href="https://platform.openai.com/docs/models/gpt-4">GPT-4</see> for more information.
/// This model supports 8,192 tokens and is currently in a limited beta and only accessible to those who have been granted access. See <see href="https://platform.openai.com/docs/models/gpt-4">GPT-4</see> for more information.
/// </remarks>
/// <seealso cref="Gpt4_32k"/>
public const string Gpt4 = "gpt-4";

/// <summary>
/// A model with the same capabilities as the base <see cref="Gpt4"/> model but with 4x the context length.
/// A model with the same capabilities as the base <see cref="Gpt4"/> model but with 4 times the token limit of <see cref="Gpt4"/>.
/// </summary>
/// <remarks>
/// This model is currently in a limited beta and only accessible to those who have been granted access. See <see href="https://platform.openai.com/docs/models/gpt-4">GPT-4</see> for more information.
/// This model supports 32,768 tokens and is currently in a limited beta and only accessible to those who have been granted access. See <see href="https://platform.openai.com/docs/models/gpt-4">GPT-4</see> for more information.
/// </remarks>
/// <seealso cref="Gpt4"/>
public const string Gpt4_32k = "gpt-4-32k";
