
Demo video: demo.mp4

Zippy Talking Avatar with Azure Cognitive Services and LangChain

Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. The avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.

How it works

Zippy blends multiple AI technologies into a single pipeline to create a natural, engaging conversational experience:

  1. Text Input: Start the conversation by typing your message in the provided text box.
  2. OpenAI API Response Generation: Your text is forwarded to the OpenAI API, which crafts a coherent and meaningful response (see the sketch after this list).
  3. Speech Synthesis: Azure Cognitive Services' text-to-speech capabilities transform the OpenAI API's response into natural-sounding audio.
  4. Viseme Generation: Azure emits accurate visemes (visual representations of speech sounds) timed to match the audio.
  5. Synchronized Delivery: The generated audio and visemes are delivered to Zippy, bringing the avatar to life with synchronized lip movements and spoken words.
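
Steps 2-4 can be sketched in a few lines of TypeScript. This is a minimal illustration, assuming the openai and microsoft-cognitiveservices-speech-sdk npm packages; the function names generateReply and speakWithVisemes are hypothetical, not the repository's actual code:

import OpenAI from "openai";
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

// NEXT_PUBLIC_ env vars imply client-side use, so the OpenAI client (v4+)
// must explicitly opt in to running in the browser.
const openai = new OpenAI({
  apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY,
  dangerouslyAllowBrowser: true,
});

// Step 2: forward the user's text to the OpenAI API and return its reply.
async function generateReply(userText: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // assumed model; the project may use another
    messages: [{ role: "user", content: userText }],
  });
  return completion.choices[0].message.content ?? "";
}

// Steps 3-4: synthesize speech and collect the visemes Azure emits for it.
function speakWithVisemes(
  text: string
): Promise<Array<{ visemeId: number; audioOffsetMs: number }>> {
  const config = sdk.SpeechConfig.fromSubscription(
    process.env.NEXT_PUBLIC_SPEECH_KEY!,
    process.env.NEXT_PUBLIC_SPEECH_REGION!
  );
  const synthesizer = new sdk.SpeechSynthesizer(config);
  const visemes: Array<{ visemeId: number; audioOffsetMs: number }> = [];

  // Azure fires visemeReceived once per viseme; audioOffset is in 100-ns ticks.
  synthesizer.visemeReceived = (_sender, e) => {
    visemes.push({ visemeId: e.visemeId, audioOffsetMs: e.audioOffset / 10000 });
  };

  return new Promise((resolve, reject) => {
    synthesizer.speakTextAsync(
      text,
      () => {
        synthesizer.close();
        resolve(visemes); // step 5 pairs these with the audio for lip sync
      },
      (error) => {
        synthesizer.close();
        reject(error);
      }
    );
  });
}

For step 5, the avatar maps each visemeId to a mouth shape and swaps frames as audio playback passes the corresponding offset.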


Getting Started

Prerequisites

  1. Azure subscription - Create a free account.
  2. Create a Speech resource in the Azure portal.
  3. Your Speech resource key and region. After your Speech resource is deployed, select Go to resource to view and manage keys. For more information about Azure AI services resources, see Get the keys for your resource. (A quick way to verify the key follows this list.)
  4. OpenAI subscription - Create one.
  5. Create a new secret key in the OpenAI portal.
  6. Node.js and npm (or yarn)
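
Before wiring up the app, you can sanity-check the Speech key and region against Azure's standard issueToken endpoint. A small sketch (the file name is illustrative; run it with ts-node or similar):

// verify-speech-key.ts
const region = process.env.NEXT_PUBLIC_SPEECH_REGION;
const key = process.env.NEXT_PUBLIC_SPEECH_KEY;

fetch(`https://${region}.api.cognitive.microsoft.com/sts/v1.0/issueToken`, {
  method: "POST",
  headers: { "Ocp-Apim-Subscription-Key": key ?? "" },
}).then((res) => {
  // A 200 response with a token body means the key and region are valid.
  console.log(res.ok ? "Speech key OK" : `Check failed: HTTP ${res.status}`);
});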

Installation

  1. Clone this repository:
git clone [email protected]:Monadical-SAS/zippy-avatar-ai.git
  2. Navigate to the project directory:
cd zippy-avatar-ai
  3. Install dependencies:
npm install
# or
yarn install
  4. Create a .env.development file in the root directory of the project and add the following environment variables (see the note below):
# AZURE
NEXT_PUBLIC_SPEECH_KEY=<YOUR_AZURE_SPEECH_KEY>
NEXT_PUBLIC_SPEECH_REGION=<YOUR_AZURE_SPEECH_REGION>

# OPENAI
NEXT_PUBLIC_OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
  5. Run the development server:
npm run dev
# or
yarn dev

Open http://localhost:3000 with your browser to see the result.
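
A note on the environment variables above: Next.js inlines anything prefixed with NEXT_PUBLIC_ into the client bundle at build time, so these keys are readable in the browser. A minimal sketch of how they might be gathered in one place (the path and object name are illustrative):

// src/config/keys.ts
export const keys = {
  speechKey: process.env.NEXT_PUBLIC_SPEECH_KEY ?? "",
  speechRegion: process.env.NEXT_PUBLIC_SPEECH_REGION ?? "",
  openaiApiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY ?? "",
};

For anything beyond local development, consider moving the OpenAI call behind a Next.js API route so the key stays server-side.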

Additional information

Learn More

To learn more about Next.js, take a look at the Next.js documentation and the interactive Learn Next.js tutorial.

You can also check out the Next.js GitHub repository - your feedback and contributions are welcome!

Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.
