
Add design for only restore volume data. #7481

Open
blackpiglet wants to merge 1 commit into main from the 7345_design branch

Conversation

@blackpiglet (Contributor) commented Feb 29, 2024

This is the design document for issue #7345.

Please indicate you've done the following:

  • Accepted the DCO. Commits without the DCO will delay acceptance.
  • Created a changelog file or added /kind changelog-not-required as a comment on this pull request.
  • Updated the corresponding documentation in site/content/docs/main.

github-actions bot added the Area/Design label on Feb 29, 2024
@blackpiglet added the kind/changelog-not-required label on Feb 29, 2024
@reasonerjt self-requested a review on February 29, 2024 09:25
@blackpiglet marked this pull request as ready for review on February 29, 2024 09:26
@draghuram (Contributor)

@blackpiglet, can you please rebase the PR? It currently shows what appear to be many unrelated commits.

A few comments/questions:

  • How are the new PVC names generated? It doesn't look like users can specify the target names, so I guess Velero will rename them in a certain way?
  • Since "pods" is not included in the resource type, I guess this restore is not supported for FSB?
  • Does snapshot data mover restore work for PVCs without specifying pods as the resource type? I will verify it, but I guess it can work because a mover pod is started?

I personally think that in-place restore is going to be more useful, especially if files can be selected for restore. If the whole PVC is being created anyway, users will need to restart pods to attach the PVCs, so it is almost like restoring the pods themselves.

@anshulahuja98 (Collaborator)

Can #6354 be used in a generic way to achieve this use case?
CC: @kaovilai

@kaovilai (Contributor) commented Mar 4, 2024

> Can #6354 be used in a generic way to achieve this use case?
> CC: @kaovilai

So a restore limiting resources to a labeled PV (or namespaced PVC) with a recreate flag?

@blackpiglet (Contributor, Author) commented Mar 4, 2024

@anshulahuja98
If in-place restore is required, the recreate Existing Resource Policy cannot do that.

@blackpiglet (Contributor, Author) commented Mar 4, 2024

@draghuram
Thanks. I rebased the branch, but there is still some problem with the gRPC generation.

First, could you share some thoughts on why in-place restore is more useful? I recently also heard some requirements from the GitOps and DevOps pipeline scenario. In those cases, some tools guarantee that the workload in k8s is always running as expected. In-place restore is needed there, but I don't know whether those are common k8s use cases.

Second, to answer your questions:

  • How are the new PVC names generated? It doesn't look like users can specify the target names, so I guess Velero will rename them in a certain way? - Users cannot specify the generated name. I haven't decided how to build the name yet; maybe something like rename-pvc-<uuid>.
  • Since "pods" is not included in the resource type, I guess this restore is not supported for FSB? - FSB restore is supported, because the generic backup/restore will first create an intermediate pod and PVC to mount the PV. After the backup/restore, the intermediate pod and PVC are deleted.
  • Does snapshot data mover restore work for PVCs without specifying pods as the resource type? - The snapshot data mover can work without the pod, for the same reason as the previous item.
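For illustration, a renamed, data-only restored PVC might look like the sketch below. The rename-pvc-<uuid> pattern is tentative per the answer above; the namespace and size are made-up values.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Tentative naming pattern from the discussion: rename-pvc-<uuid>.
  name: rename-pvc-1b2f4c9e-5a7d-4f1e-9c3b-8d2e6f0a1b2c
  namespace: app-ns            # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # illustrative size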

- Can filter the volumes that need data restore.

## Non Goals
- Will not support in-place volume data restore. To achieve data integrity, a new PVC and PV are created, and the user is expected to mount the restored volume.
Contributor:
Heard from US community meeting that this will be moved into Goals.

Would have to discuss how we're handling volumes that are already mounted, or do we say you have to pre-delete/scale down so volumes are not mounted by restore time?

Collaborator:

@kaovilai For in-place, we can't scale down unless we create a dummy pod to mount the PVC, since there must be a mounting pod for kopia to have access to it from the node agent.

Contributor:

We already have a dummy pod pattern from the data mover work to follow.

Comment on lines +23 to +26
// DataOnly specifies whether to restore only volume data and related
// k8s resources, without any other k8s resources.
DataOnly *bool `json:"dataOnly"`
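For illustration, a Restore using the proposed field might look like the following sketch. The backup name is hypothetical, and the API shape beyond dataOnly is the existing Restore spec.

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: data-only-restore
  namespace: velero
spec:
  backupName: my-backup          # hypothetical backup name
  # Proposed field from this design: restore only the volume data and the
  # k8s resources required to carry it, skipping all other resources.
  dataOnly: true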
Contributor:

Does this work via CSI only, file system backup only, or both?

Contributor:

We could use a data-mover-like approach where we write to a new PVC using file system backup through a dummy pod.
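A minimal sketch of such a dummy pod, assuming it only needs to keep the target PVC mounted so the node agent can write restored data into it (all names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: restore-dummy-pod       # hypothetical name
  namespace: app-ns             # hypothetical namespace
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # no-op container; it only keeps the volume mounted
      volumeMounts:
        - name: target
          mountPath: /restore-target
  volumes:
    - name: target
      persistentVolumeClaim:
        claimName: restored-data-pvc     # the newly created PVC the data is written into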

Contributor (Author):

If we go with the in-place restore, only filesystem backup is supported.

Contributor:

An advanced CSI in-place restore could auto-patch deployments to a new PVC restored from snapshots.

Comment on lines 72 to 74
// DataOnlyPvcMap maps the backed-up PVC name to the data-only
// restored PVC name.
DataOnlyPvcMap map[types.NamespacedName]types.NamespacedName `json:"dataOnlyPvcMap,omitempty"`
Contributor:

It should be possible to add a label selector covering all resources that the user expects to swap to the restored PVC name, and have Velero do the swapping on the labeled (core k8s only, for now) resources in the namespace.

kubectl label deployment deployName velero.io/sync-restored-pvc-name=<original-pvc-in-backup-name>

Any PVC restored using this method with a new name will have the annotation velero.io/restored-pvc-original-name=<original name>, which Velero can use as a reference for subsequent backup/restore PVC name syncing.
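Putting the two markers together, a hedged sketch of this proposal (label and annotation keys as proposed above; resource names hypothetical, workload spec omitted):

# Workload opted in to PVC-name syncing via the proposed label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployName                      # hypothetical workload (spec omitted)
  labels:
    velero.io/sync-restored-pvc-name: original-pvc   # original PVC name in the backup
---
# PVC restored under a new name, carrying the proposed back-reference annotation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rename-pvc-1b2f4c9e             # hypothetical generated name
  annotations:
    velero.io/restored-pvc-original-name: original-pvc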

Contributor (Author):

Since we are going with the in-place approach, the name-mapping logic is unnecessary.

Contributor:

Right: for FSB, with the PVC deleted prior to restoring to the same name.

@draghuram (Contributor)

@blackpiglet, GitOps is certainly one reason for in-place restore. Another use case is the need to periodically update data in an alternate standby cluster. Finally, if some application files are deleted or corrupted, a user may want to restore only those files.

@blackpiglet (Contributor, Author)

> @blackpiglet, GitOps is certainly one reason for in-place restore. Another use case is the need to periodically update data in an alternate standby cluster. Finally, if some application files are deleted or corrupted, a user may want to restore only those files.

@draghuram
Thanks for the feedback.
If we go with the in-place data restore approach, there will be some limitations on the usage scenario: only the filesystem uploader backup is supported. For snapshot-based backups, achieving the in-place restore result is impossible.

Is this acceptable, that data-only restore cannot support snapshot-based backups?
Let's involve more maintainers in this discussion:
@anshulahuja98 @kaovilai @sseago

@kaovilai (Contributor)

FSB-only is acceptable.

Though I think restoring from a CSI snapshot to a new PVC and then patching workloads to use the new PVC could be considered in-place, with the caveat that it requires a pod restart.

@blackpiglet (Contributor, Author) commented Mar 20, 2024

> Though I think restoring from a CSI snapshot to a new PVC and then patching workloads to use the new PVC could be considered in-place, with the caveat that it requires a pod restart.

Thanks for the quick response; I will follow the filesystem-only way for now.
I got your point. There was a similar discussion in the Velero team. It does work, although it seems a bit inefficient. We can continue the discussion.
The filesystem data-only restore will also cause the pod to restart, because the PodVolumeRestore approach is preferred, and that introduces a new InitContainer into the pod. The advantage is that the pod's data operations will not interrupt the data restore process.
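For context, a hedged sketch of that pattern: the PodVolumeRestore flow injects a restore-wait init container that blocks pod startup until the node agent finishes writing data into the restored volume. Image tag, app names, and mount details here are illustrative, based on Velero's existing restore helper.

apiVersion: v1
kind: Pod
metadata:
  name: app-pod                  # hypothetical application pod
spec:
  initContainers:
    - name: restore-wait         # injected by Velero's restore flow
      image: velero/velero-restore-helper:v1.13.0   # version is illustrative
      volumeMounts:
        - name: app-data
          mountPath: /restores/app-data   # helper waits here for a completion marker
  containers:
    - name: app
      image: my-app:latest       # hypothetical application image
      volumeMounts:
        - name: app-data
          mountPath: /data
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: app-data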

@blackpiglet force-pushed the 7345_design branch 2 times, most recently from d9dbdc5 to f80b339, on March 26, 2024 06:58
codecov bot commented Mar 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 61.71%. Comparing base (2518824) to head (156685a).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #7481   +/-   ##
=======================================
  Coverage   61.71%   61.71%           
=======================================
  Files         263      263           
  Lines       28869    28869           
=======================================
  Hits        17816    17816           
  Misses       9793     9793           
  Partials     1260     1260           


@anshulahuja98 (Collaborator)

> @blackpiglet, GitOps is certainly one reason for in-place restore. Another use case is the need to periodically update data in an alternate standby cluster. Finally, if some application files are deleted or corrupted, a user may want to restore only those files.

> @draghuram Thanks for the feedback. If we go with the in-place data restore approach, there will be some limitations on the usage scenario: only the filesystem uploader backup is supported. For snapshot-based backups, achieving the in-place restore result is impossible. Is this acceptable, that data-only restore cannot support snapshot-based backups? Let's involve more maintainers in this discussion. @anshulahuja98 @kaovilai @sseago

I went further through the discussion.
I understand a pure in-place restore can't be achieved without pod downtime.

With that caveat, I want to push for CSI snapshot-based in-place restore. Here, in-place restore refers mainly to detaching the PVC from the workload, deleting the PVC, and re-creating it.

This is useful for disaster recovery scenarios.

@blackpiglet (Contributor, Author) commented Mar 26, 2024

> With that caveat, I want to push for CSI snapshot-based in-place restore. Here, in-place restore refers mainly to detaching the PVC from the workload, deleting the PVC, and re-creating it.

This proposal also cannot avoid pod downtime, so I suppose it aims to let the data-only restore support more types of backups, right?

> I understand a pure in-place restore can't be achieved without pod downtime.

If no downtime is a must-have feature, we can achieve that by ignoring the PodVolumeRestore's related-pod InitContainer check logic, although that would compromise data integrity, and data writes could fail due to conflicts.
