This repository has been archived by the owner on Nov 9, 2022. It is now read-only.

Fix for Issue 832 - Replicate should only work for hosts > 3. #845

Open · wants to merge 4 commits into base: dev
Conversation

@jamsilvia (Contributor) commented Sep 6, 2017

When trying to generate replicas for forests, bootstrap checked whether the number of hosts was greater than 1 and, if not, skipped replication. However, wipe did not perform this check, and failed if a bootstrap with replicas was attempted on a single host.
Both of these cases are wrong in a way, and both should be treated consistently. In both, replicas should not be attempted unless the cluster has at least 3 hosts. When doing internal replicas on a cluster of fewer than 3 hosts (which requires an explicit command-line argument), a failure should be returned. When doing content replication on a cluster of fewer than 3 hosts, it should only cause a warning: since the same ml-config may be used for both a production cluster and a single node in dev, this should not cause a failure.
Warnings are reported as a result of the bootstrap in this case. #832
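The intended behaviour can be sketched roughly as follows. This is an illustrative XQuery fragment, not the PR's actual code; `$replicate-internals` is a hypothetical variable standing in for the explicit command-line argument mentioned above:

```xquery
(: Illustrative sketch only; variable and error names are hypothetical. :)
let $host-count := count(xdmp:hosts())
return
  if ($host-count lt 3 and $replicate-internals) then
    (: internal replicas were requested explicitly: fail hard :)
    fn:error((), "TOO-FEW-HOSTS",
      "Forest replication and failover requires at least 3 nodes.")
  else if ($host-count lt 3) then
    (: content replicas in the config: warn and skip them, so the
       same ml-config still works on a single dev node :)
    "WARNING: cluster has fewer than 3 nodes; defaulting to no replicas."
  else
    () (: enough hosts: proceed with replica assignment :)
```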

grtjn and others added 4 commits August 19, 2017 10:41
…en hosts > 3

Bootstrap originally only checked for hosts >1, and wipe did not check at all.
But both cases should check for number of hosts > 3.
@grtjn grtjn changed the base branch from master to dev September 18, 2017 13:15
@grtjn (Contributor) commented Sep 18, 2017

The PR was against the master branch. I edited it to rebase against dev instead.

@grtjn (Contributor) commented Sep 18, 2017

Let me run a test with this. The code was intentionally ignoring replication and such in a single-host situation, assuming someone was attempting to deploy the project on a local box for dev-only purposes.

It would be nice if that is still possible, without needing to edit ml-config.xml.

@grtjn (Contributor) commented Sep 18, 2017

Ah, reading the description above, the single dev node case should be handled, but running a test won't hurt anyhow.

@grtjn (Contributor) commented Sep 18, 2017

A quick initial test seems to show it works as described:

$ ./ml local bootstrap --replicate-internals
ERROR: Forest replication and failover requires at least 3 nodes to function properly.  This cluster has only 1 node(s).

and

$ ./ml ml9 bootstrap
Bootstrapping your project into MarkLogic 9 on ml9-ml1...
WARNING: Forest replication and failover requires at least 3 nodes to function properly.  This cluster only has 1 node(s).  Defaulting to using no replicas.

The warning is printed twice though. I might take a second look at that..

@grtjn grtjn self-assigned this Sep 18, 2017
@grtjn grtjn added this to the October 2017 milestone Sep 18, 2017
map:put( ., "messages", () ),
.
);

Contributor commented on this diff:
I'd probably use map:new and map:entry here, particularly since this code already uses the ! simple map operator. It works, though. And apparently the same trick was already used directly below.

Contributor Author replied:

I generally use the ! operator when creating a map, since it is much more performant than map:new and map:entry. It obviously matters little when creating a small map, but for larger maps the map:new/map:entry approach can be significantly slower (especially when the module is called repeatedly).
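For readers unfamiliar with the two idioms being compared, here is a minimal side-by-side sketch (illustrative keys only, not the PR's actual code):

```xquery
(: Idiom 1: map:new/map:entry builds the map declaratively. :)
let $m1 := map:new((
  map:entry("messages", ()),
  map:entry("status", "ok")
))

(: Idiom 2: the "!" trick creates an empty map:map, mutates it in
   place with map:put (which returns the empty sequence), and then
   returns the map itself via the trailing "." :)
let $m2 :=
  map:map() ! (
    map:put(., "messages", ()),
    map:put(., "status", "ok"),
    .
  )
return ($m1, $m2)
```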

Contributor replied:

Ah, I hadn't considered performance, nor was I expecting so much difference. My consideration was simply that map:new/map:entry gives code that is easier to understand.

Contributor replied:

Never mind. I might clean up in future. Or never..

@grtjn (Contributor) commented Sep 19, 2017

It took me a while before the penny dropped. The hosts check is repeated for each database and each forest. I think we should optimize that a bit. Let me give some thought to how best to improve that.

@jamsilvia (Contributor Author) commented Sep 19, 2017 via email

@grtjn (Contributor) commented Sep 19, 2017

Whoever is quickest.. :)

The current code is not entirely right either, actually. Replication only works properly with a cluster of at least three nodes, but that is not what count($hosts) < 3 is checking: $hosts contains group hosts, not all cluster hosts.

I was considering a kind of pre-flight check. So, instead of checking this during bootstrap itself, the bootstrap function in Ruby would invoke a different function first, perhaps something like setup:check-config. Or, for the sake of simplicity, setup:do-setup could invoke setup:check-config before invoking all the creates. That check-config would then check replication and host counts and such (and potentially other checks in the future), and pass back the extra messages (removing the need for the $messages map:map).

Or something like that..
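The pre-flight idea described above could look roughly like this. All names here are hypothetical sketches, not existing code; setup:config-uses-replicas is an assumed helper for reading the config:

```xquery
(: Hypothetical sketch of a pre-flight check living in setup.xqy. :)
declare function setup:check-config(
  $config as element()
) as xs:string*
{
  (: count all hosts in the cluster, not just the group hosts :)
  let $host-count := count(xdmp:hosts())
  where $host-count lt 3 and setup:config-uses-replicas($config)
  return
    "WARNING: Forest replication and failover requires at least " ||
    "3 nodes; this cluster has only " || $host-count || "."
  (: further checks could be appended here in future :)
};
```

setup:do-setup could call this once up front and report the returned messages, instead of repeating the host-count check per database and per forest.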

@jamsilvia (Contributor Author) commented Sep 20, 2017 via email

@grtjn (Contributor) commented Sep 20, 2017

Yeah, I didn't mean parsing and checking the config with Ruby. setup:check-config would live in setup.xqy.

@jamsilvia (Contributor Author) replied:

Oh, I see what you mean. I didn't take "setup:check-config" to be a call back into XQuery. Yes, that makes sense!

I plan to take some time tomorrow to work on it - let me know if you get to it first, so we are not both working on the same thing.

Thanks!

@grtjn (Contributor) commented Sep 25, 2017

FYI: I have started poking at this.
