Code and data for the paper: "Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?"