Update RESOURCES.md (Typo "Azure Open AI"→"Azure OpenAI")
hyoshioka0128 committed May 5, 2024
1 parent 0cd80f1 · commit f7bb4ea
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion in 18-fine-tuning/RESOURCES.md
@@ -7,7 +7,7 @@ The lesson was built using a number of core resources from OpenAI and Azure Open
| Title/Link | Description |
| :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Fine-tuning with OpenAI Models](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst) | Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, saving you costs, improving response quality, and enabling lower-latency requests. **Get an overview of fine-tuning from OpenAI.** |
- | [What is Fine-Tuning with Azure Open AI?](https://learn.microsoft.com/azure/ai-services/openai/concepts/fine-tuning-considerations#what-is-fine-tuning-with-azure-openai?WT.mc_id=academic-105485-koreyst) | Understand **what fine-tuning is (concept)**, why you should look at it (motivating problem), what data to use (training) and measuring the quality |
+ | [What is Fine-Tuning with Azure OpenAI?](https://learn.microsoft.com/azure/ai-services/openai/concepts/fine-tuning-considerations#what-is-fine-tuning-with-azure-openai?WT.mc_id=academic-105485-koreyst) | Understand **what fine-tuning is (concept)**, why you should look at it (motivating problem), what data to use (training) and measuring the quality |
| [Customize a model with fine-tuning](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython&pivots=programming-language-studio#continuous-fine-tuning?WT.mc_id=academic-105485-koreyst) | Azure OpenAI Service lets you tailor our models to your personal datasets using fine-tuning. Learn **how to fine-tune (process)** select models using Azure AI Studio, Python SDK or REST API. |
| [Recommendations for LLM fine-tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | LLMs may not perform well on specific domains, tasks, or datasets, or may produce inaccurate or misleading outputs. **When should you consider fine-tuning** as a possible solution to this? |
| [Continuous Fine Tuning](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython&pivots=programming-language-studio#continuous-fine-tuning?WT.mc_id=academic-105485-koreyst) | Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as a base model and **fine-tuning it further** on new sets of training examples. |
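
The "Customize a model with fine-tuning" entry above covers the Azure AI Studio, Python SDK, and REST API routes. As a rough illustration of the Python SDK path only, here is a minimal sketch of uploading training data and starting a fine-tuning job with the Azure OpenAI client; the endpoint, API version, file name, and base model below are placeholder assumptions, not values taken from the lesson or this repository.

```python
# Minimal sketch: start a fine-tuning job via the Azure OpenAI Python SDK (openai >= 1.x).
# The endpoint, API version, training file name, and base model are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # assumed placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Upload JSONL training data (chat-format examples) for fine-tuning.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the fine-tuning job against a base model that supports fine-tuning in your region.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # example base model name; verify availability
)
print(job.id, job.status)
```

Once the job completes, the resulting fine-tuned model can be deployed and queried like any other Azure OpenAI deployment.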
