
Are applications deployed highly available? #106

Open
rnbokade opened this issue Feb 21, 2023 · 8 comments
@rnbokade

As far as I understand from the docs, applications are not deployed highly available. It would be great if we could implement deployments as a highly available system, which basically creates 3 copies of the application with pod affinity so they land on different nodes, ensuring high availability.

@jfmatth

jfmatth commented Feb 22, 2023

I haven't tested this (yet), but I think the app has settings for this. Under the app's Resources section, you can set up autoscaling and min/max pods.

Are you looking for a pod on each node? That's a DaemonSet, no? In the K8s world that's typically set up because your app needs node information, not to be highly available. If you want HA, then the multiple-pods option should cover it in K8s land.
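
In plain Kubernetes terms, a min/max pod setting maps onto a HorizontalPodAutoscaler. Whether Kubero renders exactly this object is an assumption; a minimal sketch with placeholder names could look like this:

```yaml
# Minimal HPA sketch; "myapp-web" and the CPU target are placeholders,
# not Kubero's actual output.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 2        # "min pods" in the UI
  maxReplicas: 5        # "max pods" in the UI
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```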

HTH

@mms-gianni
Member

It is possible to configure pod affinity, but only by editing the CRD. It is not implemented in the UI yet.

https://github.com/kubero-dev/kubero-operator/blob/main/helm-charts/kuberoapp/templates/deployment-web.yaml#L157

But this would be a nice feature.
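
For reference, the stanza that would have to end up in the generated Deployment is the standard Kubernetes pod anti-affinity block. Where exactly it sits in the KuberoApp CRD spec is an assumption based on the linked template; the labels below are placeholders:

```yaml
# Hard anti-affinity: refuse to schedule two replicas on the same node.
# "app: myapp-web" is a placeholder; use whatever labels Kubero sets.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: myapp-web
        topologyKey: kubernetes.io/hostname
```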

@mms-gianni mms-gianni added the enhancement New feature or request label Feb 22, 2023
@mms-gianni mms-gianni self-assigned this Feb 22, 2023
@jfmatth

jfmatth commented Feb 22, 2023

But why pod affinity? That's not the best HA solution.

@mms-gianni
Member

mms-gianni commented Feb 23, 2023

@jfmatth the problem with just adding more pods without affinity is that you are never sure where they are going to be deployed. In the worst case, all pods land on the same node, and if that particular node goes down you will have a service interruption until they are spun up on a different node (if there are enough resources left).

So having an affinity in place distributes the pods as early as possible.
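
A soft variant does what is described here, preferring to spread replicas across nodes but still scheduling them if only one node has capacity. This is standard Kubernetes syntax with placeholder labels, not Kubero's actual spec:

```yaml
# Soft (preferred) anti-affinity: spread pods across nodes when possible,
# but do not block scheduling if they have to share a node.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: myapp-web   # placeholder label
          topologyKey: kubernetes.io/hostname
```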

@jfmatth

jfmatth commented Feb 23, 2023

Not to belabor the point, but I still don't think affinity is the right solution for HA, which is what the user wanted.

To your point, if all pods are deployed on the same node, then IMHO there are other issues with the cluster. But I've not worked in large clusters before, so I can't say for sure.

Seems like using PA for HA isn't the right approach.

@rnbokade
Author

Actually, the use case for a DaemonSet and what I am proposing are different... DaemonSets are for log collectors and similar services that need to run on all nodes...
However, HA apps don't need to run on all nodes; they should have at least one copy each on 3 or 5 nodes....
Why specifically 3 or 5 and not 2 or 4? Because of how consensus works....
Let's say there are 9 nodes in your cluster; you just need a pod on at least 3 of them. Why 3?
Because for consensus to work, more than 50% of the nodes should agree upon the state.
Maybe there could be a switch in the UI that says "Deploy in HA" mode; when turned on, 3 or 5 pods would be spun up with proper pod affinity settings.
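
Put together, a "Deploy in HA" toggle would presumably render something like the following under the hood. This is a sketch of the idea, not Kubero's actual output; all names and the image are placeholders:

```yaml
# Sketch of an HA deployment: odd replica count plus hard anti-affinity
# so each replica lands on a different node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: myapp-web
              topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: ghcr.io/example/myapp:latest   # placeholder image
```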

@jfmatth

jfmatth commented Feb 26, 2023

I can appreciate the use case, but for Kubero, it just seems too specific and fraught with complexity for a system that is trying to make pushing apps to K8s easy.

I suspect that if you need such consensus, you should build the app in a container and deploy it with Helm or something that is closer to the 'metal' than Kubero.

But, I'm not the maintainer of Kubero, so he'll have to decide 😄

@mms-gianni mms-gianni added question Further information is requested and removed enhancement New feature or request labels Mar 8, 2023
@denes16

denes16 commented Aug 25, 2023

We can try with topologySpreadConstraints.
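
For illustration, a topology spread constraint on the pod spec could look like this; the labels are placeholders, and how Kubero would expose it is not decided here:

```yaml
# Spread replicas evenly across nodes; allow at most a difference of 1
# replica between any two nodes.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule   # or ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: myapp-web                 # placeholder label
```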
