Weights computation for MAML in RL Setting #2

Open

smiler80 opened this issue Jan 27, 2019 · 0 comments

Comments

@smiler80

Hello @sudharsan13296

Thank you for this very interesting work.

I have a question regarding section 6.3 "MAML in Supervised Learning".
While Step 3 (the inner loop) is quite straightforward in the supervised learning setting, I'm still not sure how to implement it in the reinforcement learning setting. There, Di consists of K trajectories, each of horizon H. How should theta'i be computed?

A) Once for each of the K trajectories (i.e., a separate gradient step per trajectory)?
B) Once, after collecting and training on all K trajectories?

In both cases, do you have an idea of how the losses/gradient-descent steps should be computed (and possibly aggregated) to obtain theta'i?
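For concreteness, here is a minimal sketch of what I mean by option B, assuming a PyTorch policy and a simple REINFORCE-style loss averaged over the K trajectories before a single inner gradient step. All names here (policy, reinforce_loss, alpha, the trajectory layout) are placeholders for illustration, not code from the book:

```python
import torch

# Toy policy with parameters theta (placeholder: state dim 4, 2 discrete actions).
policy = torch.nn.Linear(4, 2)
alpha = 0.1  # inner-loop learning rate

def reinforce_loss(policy, trajectory):
    """REINFORCE loss for one trajectory: -mean[log pi(a|s) * reward-to-go]."""
    states, actions, rewards = trajectory          # shapes: (H, 4), (H,), (H,)
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    returns = rewards.flip(0).cumsum(0).flip(0)    # reward-to-go at each step
    return -(chosen * returns).mean()

def adapt(policy, trajectories):
    """Option B: one gradient step on the loss aggregated over all K trajectories."""
    loss = torch.stack([reinforce_loss(policy, tau) for tau in trajectories]).mean()
    grads = torch.autograd.grad(loss, policy.parameters())
    # theta'_i = theta - alpha * grad of the aggregated loss
    return [p - alpha * g for p, g in zip(policy.parameters(), grads)]

# Dummy data just to show the call: K = 3 trajectories of horizon H = 5.
trajectories = [(torch.randn(5, 4), torch.randint(0, 2, (5,)), torch.randn(5))
                for _ in range(3)]
theta_prime = adapt(policy, trajectories)
```

Option A, as I understand it, would instead perform an update like this once per trajectory, carrying the adapted parameters forward between trajectories.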

Best Regards,
