
Implement a new adaptive load balancer #12

Open
achimnol opened this issue Jul 6, 2015 · 0 comments
achimnol commented Jul 6, 2015

Goals:

  • Improve the convergence time
  • Evaluate on non-emulation modes and on different machines
@achimnol achimnol self-assigned this Jul 6, 2015
achimnol added a commit that referenced this issue Jul 7, 2015
 * IMPORTANT CHANGE: We no longer use random variables in our load
   balancers!  Instead, load balancers calculate how many batches should
   be sent to the CPU while the GPU always takes the coproc-ppdepth
   number of batches.  This greatly stabilizes the PPC of GPUs.

 * Now load balancers track the CPU ratio per NUMA node.
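The commit message above can be sketched as follows. This is a minimal, hypothetical illustration (class and method names are my own, not the project's): the GPU side always takes a fixed `coproc_ppdepth` batches per round, the CPU batch count is derived deterministically from a tracked per-NUMA-node CPU ratio instead of drawing a random variable, and the ratio is smoothed with an assumed exponential moving average.

```python
class AdaptiveLoadBalancer:
    """Sketch of a deterministic CPU/GPU batch split (hypothetical names)."""

    def __init__(self, num_numa_nodes, coproc_ppdepth):
        self.coproc_ppdepth = coproc_ppdepth
        # One CPU ratio per NUMA node, in [0, 1); start half-and-half.
        self.cpu_ratio = [0.5] * num_numa_nodes

    def split(self, node_id):
        """Return (cpu_batches, gpu_batches) for one dispatch round."""
        gpu_batches = self.coproc_ppdepth  # the GPU depth is fixed
        ratio = self.cpu_ratio[node_id]
        # Choose cpu_batches so that cpu / (cpu + gpu) ~= ratio,
        # with no per-batch randomness involved.
        if ratio >= 1.0:
            cpu_batches = gpu_batches  # guard against a degenerate ratio
        else:
            cpu_batches = round(gpu_batches * ratio / (1.0 - ratio))
        return cpu_batches, gpu_batches

    def update(self, node_id, measured_cpu_ratio, alpha=0.2):
        """Smooth the per-node CPU ratio (assumed EWMA update)."""
        old = self.cpu_ratio[node_id]
        self.cpu_ratio[node_id] = (1 - alpha) * old + alpha * measured_cpu_ratio
```

Because the split is a pure function of the tracked ratio, repeated rounds with the same ratio produce identical batch counts, which is what removes the variance that random draws introduced.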
achimnol added a commit that referenced this issue Jul 7, 2015
achimnol added a commit that referenced this issue Jul 8, 2015
 * Changed host_port pair arguments of the `run_*.py` experiment scripts
   to comma-separated lists.
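The new argument format in that commit could be parsed roughly as below. This is a hypothetical helper (the actual scripts' parsing code is not shown here), assuming inputs like `"10.0.0.1:5000,10.0.0.2:5000"`:

```python
def parse_host_ports(arg):
    """Parse 'host1:port1,host2:port2,...' into a list of (host, port) tuples."""
    pairs = []
    for item in arg.split(','):
        # rpartition tolerates hosts that themselves contain ':' (e.g. IPv6).
        host, sep, port = item.strip().rpartition(':')
        if not sep or not host:
            raise ValueError(f"expected host:port, got {item!r}")
        pairs.append((host, int(port)))
    return pairs
```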
achimnol added a commit that referenced this issue Jul 8, 2015