(AISTATS 2024) "Looping in the Human: Collaborative and Explainable Bayesian Optimization"
Survey of preference alignment algorithms
Constructive Preference Elicitation for Social Choice with Setwise Max-Margin Learning.
Aligning LLM Agents by Learning Latent Preference from User Edits
An analysis of preference comparisons based on the Bayes factor
APReL: Active preference-based reward learning for human-robot interaction. Uses the "Mountain Car" environment to learn a reward function from human preferences and reach the goal state; applicable to robotics and adaptable to other learning methods.
Code for the paper "Reward Design for Justifiable Sequential Decision-Making"; ICLR 2024
Experiments using ILASP as a post-hoc explanation method for black-box models, also studying and addressing technical issues such as exponential execution time.
Preference Learning with Gaussian Processes and Bayesian Optimization
Preference learning JS app for visual images
In this project, we design a recurrent neural network to simulate a cognitive model of decision-making called Multi-Alternative Decision Field Theory (MDFT), and train the RNN to learn the parameters of MDFT.
learning-to-rank
Project on preference learning - ENSAE ParisTech
[P]reference and [R]ule [L]earning algorithm implementation for Python 3 (https://arxiv.org/abs/1812.07895)
Data and models for the paper "Configurable Safety Tuning of Language Models with Synthetic Preference Data"
Bayesian Spatial Bradley–Terry
A paper under AAAI-20 review
Java framework for Preference Learning
Code for the project: "Analysis of Recommendation-systems based on User Preferences".
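Many of the repositories above (e.g. the Bradley–Terry and preference-comparison entries) build on the same core idea: fitting latent item scores from pairwise preference data. Below is a minimal, self-contained sketch of that idea using a Bradley–Terry model fit by gradient ascent. All function names and the toy data are illustrative assumptions, not code from any listed repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_bradley_terry(comparisons, n_items, lr=0.1, epochs=500):
    """Fit latent scores s so that P(i beats j) = sigmoid(s[i] - s[j]).

    comparisons: list of (winner, loser) index pairs.
    Returns a zero-mean score vector (scores are only identified
    up to an additive constant, so we center them each step).
    """
    s = np.zeros(n_items)
    for _ in range(epochs):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p = sigmoid(s[w] - s[l])  # model's P(winner beats loser)
            grad[w] += 1.0 - p        # push winner's score up
            grad[l] -= 1.0 - p        # push loser's score down
        s += lr * grad
        s -= s.mean()                 # remove translation invariance
    return s

# Toy data: item 0 is usually preferred over 1, and 1 over 2.
data = [(0, 1)] * 8 + [(1, 0)] * 2 + [(1, 2)] * 7 + [(2, 1)] * 3
scores = fit_bradley_terry(data, n_items=3)
print(scores)  # expect scores[0] > scores[1] > scores[2]
```

The same pairwise-likelihood structure underlies reward learning from human preferences and learning-to-rank; richer models (e.g. Gaussian-process preference learning) replace the linear score vector with a latent function over inputs.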