
How can we make this work long term? #483

Open
physiii opened this issue Aug 29, 2023 · 2 comments

physiii commented Aug 29, 2023

I do not understand how this can be stable when it relies on people donating memory and compute.

I would love to use this concept in production instead of AWS, but I do not see how it will work long term unless you have some way of bartering.

For example, if I have a 10x more "powerful" computer (CPU/GPU/TPU, etc.), then you would have to give 10 units of compute for every 1 of mine.

That way the system stays balanced; otherwise you are relying on philanthropy, which is not stable.

Is there any way to incorporate a memory/compute-based economy?
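
A minimal, hypothetical sketch of what a compute-weighted credit ledger along these lines could look like. Nothing here is part of Petals; the `benchmark_score`, the node IDs, and the seconds × score credit formula are all illustrative assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class ComputeLedger:
    """Tracks credits earned by serving compute and spent by consuming it."""
    # Balance per node, in arbitrary credit units (assumed, not a Petals concept).
    balances: dict = field(default_factory=dict)

    def record_served(self, node_id: str, seconds: float, benchmark_score: float) -> None:
        # Credits earned scale with both donated wall-clock time and measured
        # speed, so a node 10x faster earns credits 10x as quickly.
        self.balances[node_id] = self.balances.get(node_id, 0.0) + seconds * benchmark_score

    def record_consumed(self, node_id: str, seconds: float, benchmark_score: float) -> None:
        # Consuming time on a faster remote node costs proportionally more.
        self.balances[node_id] = self.balances.get(node_id, 0.0) - seconds * benchmark_score

    def can_consume(self, node_id: str) -> bool:
        # Only nodes with a non-negative balance may issue new requests.
        return self.balances.get(node_id, 0.0) >= 0.0


# Example: a node with a 10x benchmark score earns ten credits for every one
# credit a baseline node earns over the same minute of serving.
ledger = ComputeLedger()
ledger.record_served("fast-node", seconds=60, benchmark_score=10.0)    # +600
ledger.record_served("slow-node", seconds=60, benchmark_score=1.0)     # +60
ledger.record_consumed("fast-node", seconds=120, benchmark_score=1.0)  # -120
print(ledger.balances, ledger.can_consume("fast-node"))
```

The hard part of any scheme like this is not the ledger but measuring and verifying `benchmark_score` in an untrusted swarm.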


shkarlsson commented Oct 22, 2023

I agree. Traditional torrenting can generally rely on people donating resources because spare hard-drive space and bandwidth have a very low cost. Donating GPU time, which is essentially what happens with Petals, seems too steep a cost for donors and doesn't appear to work in practice (judging by the short list of donors at https://health.petals.dev).

Is there some sort of discussion or project addressing this? Maybe an equivalent of closed torrent sites that require some minimum donation ratio? Or built-in micropayments with crypto, as I think @earonesty suggests here?
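
A minimal sketch of what such a minimum-ratio check could look like, in the spirit of closed trackers. The function name, the token counters, and the grace allowance are illustrative assumptions, not anything Petals implements:

```python
def meets_share_ratio(tokens_served: int, tokens_consumed: int,
                      min_ratio: float = 1.0, grace_tokens: int = 10_000) -> bool:
    """Return True if a peer has served enough relative to what it consumed.

    Mirrors the minimum share ratio used by closed torrent trackers: new peers
    get a small grace allowance, after which they must keep
    served / consumed >= min_ratio to continue making inference requests.
    """
    if tokens_consumed <= grace_tokens:
        return True
    return tokens_served / tokens_consumed >= min_ratio


# A peer that served 50k tokens while consuming 40k passes a 1.0 ratio check;
# one that served only 5k while consuming 40k does not.
print(meets_share_ratio(50_000, 40_000))  # True
print(meets_share_ratio(5_000, 40_000))   # False
```

As with ratio-based trackers, the accounting is the real problem: served tokens would be reported by peers that have an incentive to inflate them.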


ELigoP commented Apr 29, 2024

I would also look at this from a psychological perspective: if there were clear objectives for what users will do with the GPUs (e.g. research, benchmarks, etc.), it would attract attention and increase participation from people with GPUs.

With SETI@home and Folding@home, for example, there is a clear cause to donate GPU time.

If the cause were to train LLMs for specific tasks, train private+RAG LLMs, or benchmark existing LLMs in a way that benefits individual cluster owners, that would help.
