
server mode #118

Open
sjpotter opened this issue Sep 12, 2023 · 3 comments

@sjpotter

In my opinion, nyuu suffers from a few problems that would be fixed if it had a "server" (i.e. persistent) mode, instead of what I'd call its current ephemeral nature of being invoked once per NZB one wants to generate.

  1. One can't tell directly from the file system whether it succeeded or not (i.e. if it fails, it leaves a partial NZB behind). One can examine the NZB to see if it has a closing tag, and one can monitor the exit code of the process itself, but the fact that it leaves a partial NZB on failure seems problematic (a sketch of such a check appears after this list).

  2. One cannot stop it in the middle of processing and continue where one left off later. Conceptually (I haven't tried), one could SIGSTOP/SIGCONT the process, but that wouldn't let one reconfigure the "server" settings in the meantime.

  3. If one loses the NZB, one cannot regenerate it, so the information is lost.

  4. If one runs nyuu in a loop, I've seen it have trouble reconnecting on subsequent iterations (I've had to set my retry count high).

There are possibly other issues, but those are the ones I've run into.
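
For point 1, here is a minimal sketch (nothing Nyuu itself provides) of the kind of external check described above, assuming Node.js; the argument list and the output path are placeholders, not real flags:

```ts
// Run nyuu, then look at both the exit code and whether the NZB has a closing tag.
import { spawnSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

const nzbPath = "output.nzb"; // wherever the NZB was written (placeholder)

// The actual upload options, file list and output flag are elided here.
const result = spawnSync("nyuu", [/* upload options go here */], { stdio: "inherit" });
const exitedCleanly = result.status === 0;

// A complete NZB ends with a closing </nzb> tag; a partial one does not.
let nzbLooksComplete = false;
if (existsSync(nzbPath)) {
  nzbLooksComplete = readFileSync(nzbPath, "utf8").trimEnd().endsWith("</nzb>");
}

if (!exitedCleanly || !nzbLooksComplete) {
  console.error(`Upload looks incomplete (exit=${result.status}, closing tag=${nzbLooksComplete})`);
  process.exitCode = 1;
}
```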

Therefore, I believe nyuu should have a server mode in which it runs persistently: one adds "jobs" (directory / subject / poster / NZB password / perhaps other settings), and nyuu stores that information in a database (say SQLite) and updates it as it goes. Since this database holds all the information, it would be easy to generate the NZB file only on completion. If the process has to be stopped in the middle (or dies in a crash), it can be restarted where it left off. And since the data is stored in a local database, the NZB can be regenerated as needed.
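
To make the idea concrete, here is a rough sketch of the kind of bookkeeping such a server mode could keep, using SQLite via the better-sqlite3 package; the table layout and column names are purely illustrative, not anything Nyuu actually implements:

```ts
import Database from "better-sqlite3";

// Open (or create) the local job database the proposal describes.
const db = new Database("nyuu-jobs.db");

db.exec(`
  CREATE TABLE IF NOT EXISTS jobs (
    id        INTEGER PRIMARY KEY,
    directory TEXT NOT NULL,
    subject   TEXT NOT NULL,
    poster    TEXT NOT NULL,
    nzb_pass  TEXT,
    status    TEXT NOT NULL DEFAULT 'active'  -- 'active' | 'done' | 'failed'
  );
  CREATE TABLE IF NOT EXISTS segments (
    job_id     INTEGER NOT NULL REFERENCES jobs(id),
    file_name  TEXT NOT NULL,
    part       INTEGER NOT NULL,
    message_id TEXT,                          -- NULL until the part is uploaded
    PRIMARY KEY (job_id, file_name, part)
  );
`);

// Record a successfully posted part; the NZB can later be generated (or
// regenerated) from the segments table once every row has a message_id.
const markUploaded = db.prepare(
  "UPDATE segments SET message_id = ? WHERE job_id = ? AND file_name = ? AND part = ?"
);
markUploaded.run("<example-msgid@news>", 1, "file.bin", 3); // illustrative call
```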

And since the nyuu process will be persistent between "jobs", reconnecting between them stops being a problem: the network connections will be reused, just as they are reused between parts within a single job.

It could also enable new forms of obfuscation. An end user might already obfuscate the files themselves (say, requiring par2 and the NZB to put them all back together), but today all files from a single job are still grouped together (especially if using a single server). With a persistent process and multiple jobs (even hundreds), one could interleave the upload amongst all the jobs, making it much harder (without the NZB) to associate files with each other.

For example: upload file 1 of job 1, then file 1 of job 2, then file 1 of job 3, and so on;

or, even better, pick a random job that is still active and then a random not-yet-uploaded file from that job. A short sketch of that selection strategy follows.
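
A minimal sketch of the random interleaving idea, with hypothetical in-memory types (nothing taken from Nyuu's code):

```ts
// Hypothetical in-memory view of the job queue; field names are made up.
interface Segment { fileName: string; part: number; uploaded: boolean; }
interface Job { id: number; segments: Segment[]; }

function pickRandom<T>(items: T[]): T | undefined {
  return items.length ? items[Math.floor(Math.random() * items.length)] : undefined;
}

// Pick a random still-active job, then a random pending segment from it.
function nextSegment(jobs: Job[]): { job: Job; segment: Segment } | undefined {
  const active = jobs.filter(j => j.segments.some(s => !s.uploaded));
  const job = pickRandom(active);
  if (!job) return undefined; // nothing left to upload
  const segment = pickRandom(job.segments.filter(s => !s.uploaded))!;
  return { job, segment };
}
```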

@animetosho (Owner)

Thanks for the suggestions!

  1. I thought it wrote the closing tag on error? It'll be missing segments if terminated early, but the XML should be "readable". This obviously won't happen if the process crashes or is killed (if this is a concern, the --nzb-cork option can be used to buffer the whole file, resulting in a 0-byte output if it's killed early; see the sketch after this list).
  2. I recall resumption being suggested in the past. It's something that'd be nice to have, though it has its complexities.
  3. I'm not sure how your suggestion necessarily solves this. You mention a database, but if that gets lost, there's no restoring it either.
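
As a sketch of how a caller could exploit that --nzb-cork behaviour (this is not part of Nyuu; the path is a placeholder), a 0-byte output after an early kill can simply be discarded:

```ts
import { existsSync, statSync, unlinkSync } from "node:fs";

const nzbPath = "output.nzb"; // placeholder path

// With --nzb-cork the whole NZB is buffered, so an early kill leaves a
// 0-byte file; treat that as "nothing usable was written" and clean it up.
if (existsSync(nzbPath) && statSync(nzbPath).size === 0) {
  unlinkSync(nzbPath);
  console.error("nyuu was interrupted before the NZB was flushed");
}
```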

A design with a managed job queue would be a fair-sized change for Nyuu, I think. Maybe some day, but unfortunately it's not really a focus at the moment.
A number of GUI-based applications might be better suited to this design philosophy.

@sjpotter (Author) commented Sep 12, 2023

OK, I actually haven't tested the first bullet, but isn't that even worse? You might think everything was uploaded when it wasn't.

Re the third bullet: yes, the database would be needed, and yes, it could conceptually get lost, but the database isn't constantly moved around. The NZBs one creates will be moved about, so the likelihood of the DB being lost (relative to the NZBs) is minimal (IMO).

@animetosho (Owner)

> but isn't that even worse? You might think everything was uploaded when it wasn't.

The idea is to allow the NZB to be parsed and salvaged if need be. One case may be that posting completes but it fails on the post-upload check.
I could offer a 'delete NZB on failure' option, I suppose, though the error code was supposed to be the indicator of success/failure.

> but the database isn't constantly moved around. The NZBs one creates will be moved about, so the likelihood of the DB being lost (relative to the NZBs) is minimal (IMO).

To me, that just sounds like a question of how one manages NZBs, which is outside Nyuu's scope. You could simply keep a second copy of all the NZBs to avoid that, unless I'm missing something.
