
Would you like to regularly release newly trained model checkpoints? #53

Open
yhyu13 opened this issue Apr 7, 2023 · 2 comments

Comments

@yhyu13 commented Apr 7, 2023

Hi,

I have just checked out the HF repo for this project. It seems most models have not been updated for about a week.

I know fine-tuning takes time. Could you publish a schedule for regularly updating the fine-tuned models based on the latest datasets included in this project? Or does this mean releasing your own fine-tuned checkpoints is no longer within the scope of this project?

Thanks!

@PhoebusSi (Owner)

We will later survey followers' preferences and select the 5 to 10 'data + LLM' combinations that attract the most interest, then complete the fine-tuning and open-source those checkpoints. In the long run, we will open-source all possible checkpoints. Please keep following the project.

@ziwang-com

I strongly recommend releasing them. HF currently lacks this kind of model trained continuously across different time periods, and such models and data are exactly what many LLM optimization projects need.
One of the focus points in "zero-lora (zero-training LLM tuning algorithm)" is:
https://github.com/ziwang-com/zero-lora
a LoRA weight optimization scheme spanning multiple dimensions: time (checkpoints from different training stages), space (comparison of different token weights), and depth (token weight mapping across different models).
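The "time" dimension above amounts to comparing the LoRA weight update of the same layer across two training checkpoints. A minimal sketch of that idea, using synthetic NumPy tensors as stand-ins for real checkpoints (`lora_delta_similarity` is a hypothetical helper for illustration, not part of zero-lora):

```python
import numpy as np

def lora_delta_similarity(a_early, b_early, a_late, b_late):
    """Cosine similarity between the effective LoRA updates (B @ A)
    of the same adapted layer at two training checkpoints."""
    d_early = (b_early @ a_early).ravel()
    d_late = (b_late @ a_late).ravel()
    denom = np.linalg.norm(d_early) * np.linalg.norm(d_late)
    return float(d_early @ d_late / denom)

rng = np.random.default_rng(0)
r, d_in, d_out = 8, 64, 64  # LoRA rank and layer dimensions (assumed)

# "early" checkpoint: random low-rank factors A (r x d_in), B (d_out x r)
a1 = rng.normal(size=(r, d_in))
b1 = rng.normal(size=(d_out, r))

# "late" checkpoint: a small perturbation of the early one (synthetic stand-in
# for a checkpoint saved a few epochs later)
a2 = a1 + 0.1 * rng.normal(size=a1.shape)
b2 = b1 + 0.1 * rng.normal(size=b1.shape)

sim = lora_delta_similarity(a1, b1, a2, b2)
print(f"checkpoint-to-checkpoint LoRA update similarity: {sim:.3f}")
```

With real checkpoints, the factors would come from the saved adapter weights of each release; similarity trends over time would then show which layers are still moving during fine-tuning.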
