Add per-slot option for casync seed device #1420

Open · wants to merge 1 commit into master

Conversation

tvdstaaij

As mentioned in #1200, using casync in blob mode with UBI volumes is kind of pointless at the moment, because casync cannot be effective without a seed.

The solution proposed in the comment in casync_extract_image sounds good and would make this use case effortless from a user perspective, but it would require some development effort to implement said logic.

This solution instead leaves the mapping logic to the user, which is likely pretty simple or even nothing at all (e.g. in the case of a squashfs root filesystem, which by necessity already has a ubiblock).

A typical configuration would look like this:

[slot.rootfs.0]
device=/dev/ubi0_0
casync-seed-device=/dev/ubiblock0_0
type=ubivol

Of course it is still possible for someone to implement the ubiblock mapping logic in RAUC in the future if they see a need for it. But with this change at least the use case is viable with minimal integration work.
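
For reference, a minimal sketch of how such an optional per-slot key could be read with GLib's GKeyFile, which RAUC uses for system.conf parsing (the function name is illustrative, not the actual patch):

#include <glib.h>

/* Illustrative only: read the optional seed device for a slot config
 * group such as "slot.rootfs.0". Returns NULL when the key is absent,
 * so existing configurations keep working unchanged. */
static gchar *parse_casync_seed_device(GKeyFile *key_file, const gchar *group)
{
        return g_key_file_get_string(key_file, group,
                        "casync-seed-device", NULL);
}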


Additional details

This patch was tested using the following configuration on an i.MX6 target:

[system]
compatible=testmachine
bootloader=uboot
boot-attempts=3
bundle-formats=-plain
data-directory=/var/rauc

[keyring]
path=/etc/rauc/ca.cert.pem

[slot.rootfs.0]
device=/dev/ubi0_0
casync-seed-device=/dev/ubiblock0_0
type=ubivol
bootname=A

[slot.rootfs.1]
device=/dev/ubi0_1
casync-seed-device=/dev/ubiblock0_1
type=ubivol
bootname=B

And the following bundle:

Compatible: 	'testmachine'
Version:    	'20240514150445'
Description:	'testbundle'
Build:      	'testbuild'
Hooks:      	''
Bundle Format: 	verity
  Verity Salt: 	'7a8e7d2a00d94b44d15e918695d536a8b37a8d49f368f3d89376ccbdd447f413'
  Verity Hash: 	'2a783ee00fcdc4d12e5669a43eca6cc6196df91e0bc6fc590b92693888998d68'
  Verity Size: 	503808
Manifest Hash:	'49696809d12177f71a723d7158bd5e3221914dc91f11ed04590941277e7f76ea'

1 Image:
  [rootfs]
	Filename:  rootfs.squashfs
	Checksum:  5122f3529dd35b8ad0c44ad936b42e2ff44a76a9eaa9536558d19084a65fb90e
	Size:      63.9 MB (63856640 bytes)

Note that for testing this feature it is also necessary to apply #1419, because that bug is a dealbreaker for casync blob updates on UBI volumes.

@jluebbe
Member

jluebbe commented May 14, 2024

As far as I can see, the /dev/ubiblockX_Y always maps to /dev/ubiX_Y. So, instead of requiring the user to configure this, we could simply replace the prefix in RAUC if the slot device is a chardev and has the prefix /dev/ubi (e.g. check with g_str_has_prefix). If that ubiblock exists, because it was already set up at the system level, just use it as a seed.
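
A minimal sketch of that idea, assuming GLib as used elsewhere in RAUC (names and structure are illustrative, not code from this PR):

#include <glib.h>
#include <string.h>

/* Illustrative: derive /dev/ubiblockX_Y from a /dev/ubiX_Y slot device
 * and use it as a seed only if that block device already exists. A real
 * implementation would also verify the slot device is a chardev. */
static gchar *guess_ubiblock_seed(const gchar *slot_device)
{
        gchar *seed;

        if (!g_str_has_prefix(slot_device, "/dev/ubi") ||
                        g_str_has_prefix(slot_device, "/dev/ubiblock"))
                return NULL;

        /* /dev/ubi0_0 -> /dev/ubiblock0_0 */
        seed = g_strconcat("/dev/ubiblock",
                        slot_device + strlen("/dev/ubi"), NULL);

        if (!g_file_test(seed, G_FILE_TEST_EXISTS)) {
                g_free(seed);
                return NULL;
        }

        return seed;
}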

Expanding a bit on #1200 (comment): due to the complexity of casync (and the inactive upstream), we would suggest moving to adaptive updates, at least for new projects. Is there a specific reason you chose casync?

Which casync implementation do you use? The patched original version, https://github.com/florolf/casync-nano or desync?

@tvdstaaij
Author

> As far as I can see, the /dev/ubiblockX_Y always maps to /dev/ubiX_Y. So, instead of requiring the user to configure this, we could simply replace the prefix in RAUC if the slot device is a chardev and has the prefix /dev/ubi (e.g. check with g_str_has_prefix). If that ubiblock exists, because it was already set up at the system level, just use it as a seed.

That approach did also come to mind. I went for the config approach for two reasons: it was the most straightforward to implement, and since it is the user's responsibility to make sure the device exists, I thought it appropriate that it is also configured explicitly by the user instead of happening "magically". But I suppose the path rewrite approach would also be fine if this detail is mentioned in the manual. If you prefer that approach, I'll try to find time for a rewrite (should it be a new PR?).

> Expanding a bit on #1200 (comment): due to the complexity of casync (and the inactive upstream), we would suggest moving to adaptive updates, at least for new projects. Is there a specific reason you chose casync?

Back when I made the choice to go with casync, I wasn't aware of the complexity/upstream issue. It might be worth clarifying this in the manual; it states "both are supported for now", which didn't really suggest to me that adaptive updates are preferred.

Anyhow, there were a couple of reasons to go with casync:

  1. The manual implied that casync is compatible with UBI, while adaptive updates are not. Since my project needs UBI, it made sense to go with a solution that should work out of the box instead of one that requires new feature development in RAUC.
  2. Keeping the download size to a minimum is very important for my project. I went through some articles about how both methods work (primarily one article on casync and one on adaptive updates). I got a good sense of how casync works and the impression that its algorithm is quite robust against various kinds of changes/offsets in the image. This was less clear to me for adaptive updates. Maybe their performance is no worse (or even better?), but I haven't seen any comparisons. The manual does have a comparison section, but it doesn't really list any advantages over casync other than not needing a chunk store.
  3. My testing with casync showed that its performance in terms of keeping download size to a minimum was satisfactory (compared to an rdiff delta benchmark), so there was little incentive to also try adaptive updates.

> Which casync implementation do you use? The patched original version, https://github.com/florolf/casync-nano or desync?

The patched original version for now. The casync-nano readme mentions it only supports block devices, so no UBI. I haven't looked into desync.

@tvdstaaij
Author

> The casync-nano readme mentions it only supports block devices, so no UBI.

Never mind that; I suppose it would probably work with the same trick RAUC uses to make UBI work with the original casync. But I don't see a reason to try casync-nano at the moment, since I don't (yet) need to optimize for root filesystem size as long as the incremental download size is small.
