
[Question]: Updating safety settings for Gemini when using Vertex AI #13081

Closed
1 task done
PremNitinP opened this issue Apr 24, 2024 · 3 comments
Labels
question Further information is requested

Comments

@PremNitinP

Question Validation

  • I have searched both the documentation and Discord for an answer.

Question

I am currently using Gemini from Vertex AI. Everything is working fine, but I am not able to update the safety settings of the model. How can I do that?

Another question: what does the "max_retries" parameter do in the Vertex() class?

PremNitinP added the question (Further information is requested) label on Apr 24, 2024

dosubot bot commented Apr 24, 2024

To update the safety settings of your Gemini model in Vertex AI, you'll need to use the safety_settings parameter when initializing the GeminiMultiModal class. This parameter accepts a safety-settings configuration, such as a mapping of harm categories to block thresholds. Here's a quick example:

from google.generativeai.types import HarmBlockThreshold, HarmCategory
from llama_index.multi_modal_llms.gemini import GeminiMultiModal

# Define your safety settings as a mapping of harm category -> block threshold
safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

# Initialize the Gemini model with custom safety settings
gemini_model = GeminiMultiModal(
    model_name="models/gemini-pro-vision",
    temperature=0.7,
    max_tokens=100,
    safety_settings=safety_settings,
)
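
As a rough usage sketch (assuming the llama-index 0.10+ import path for ImageDocument; the prompt and image path are placeholders, not from the original answer), the configured model could then be called like this:

from llama_index.core.schema import ImageDocument

# Hypothetical usage: run a multimodal completion with the configured model.
response = gemini_model.complete(
    prompt="Describe this image.",
    image_documents=[ImageDocument(image_path="path/to/image.png")],
)
print(response.text)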

As for the "max_retries" parameter, it seems there might have been some confusion, as it is not directly associated with the Vertex() class or its functionality regarding the Gemini model. Instead, "max_retries" is typically used in contexts where operations might fail and require retries, such as with query engines in some systems. It specifies the maximum number of attempts to perform an operation before giving up.

If you're dealing with operations that might fail and are looking to implement a retry mechanism, you would typically use "max_retries" in those specific contexts, not directly with the Gemini model initialization in Vertex AI.

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

@PremNitinP
Author

@dosu Vertex is used via the import statement "from llama_index.llms.vertex import Vertex", and it has no safety_settings parameter.


dosubot bot commented Apr 24, 2024

To address your concern about updating safety settings for the Gemini model in Vertex AI and the use of the max_retries parameter in the context of from llama_index.llms.vertex import Vertex, there appears to be some confusion about how these concepts apply directly.

For updating safety settings specifically for the Gemini model, it's important to consult the documentation or source code related to the llama_index.llms.vertex and Gemini model integration. Typically, safety settings would be configured at the model initialization or through specific API calls designed for model configuration. However, based on the information you've provided, there isn't a direct method named safety_settings in the Vertex class.

Regarding the max_retries parameter, this is generally used to specify the maximum number of attempts for an operation before giving up, which is crucial for handling transient errors or rate limits when interacting with external services like Vertex AI. In the context of llama_index.llms.vertex.Vertex, if retry logic is a concern, it would be implemented as part of the operation calls rather than at the initialization of the Vertex class. This approach helps in making your application more resilient by retrying failed operations due to temporary issues.

If you're looking to implement or adjust safety settings and retry mechanisms, you'll likely need to extend or modify the existing classes or functions you're working with. For retries, consider wrapping your operation calls in a retry logic block, which could be custom-built or utilize existing libraries like tenacity for this purpose.
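
As an illustrative sketch only (not an API of llama-index or Vertex AI themselves), a tenacity-based retry wrapper around a Vertex completion call could look like the following; the model name, retry count, and backoff values are assumptions:

from llama_index.llms.vertex import Vertex
from tenacity import retry, stop_after_attempt, wait_exponential

# Depending on your environment, project and credentials arguments may also be required.
llm = Vertex(model="gemini-pro", temperature=0.0)

# Retry transient failures up to 3 times with exponential backoff between attempts.
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=1, max=10))
def complete_with_retry(prompt: str):
    return llm.complete(prompt)

print(complete_with_retry("Summarize the benefits of retry logic in one sentence."))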

For more precise guidance, reviewing the specific documentation or source code of the llama_index.llms.vertex module and how it integrates with Vertex AI's Gemini model would be beneficial. This review can provide insights into the available methods for configuring safety settings and implementing retry logic tailored to your needs.

