I just went through an `lxd recover` after an accidental `apt-get autopurge -y snapd` (don't ask, or maybe ask over a beer). This nuked LXD's database but fortunately left the zpool intact, so `lxd recover` could restore all the data.
On this server there was a `ganymede` container configured with `security.idmap.isolated=true` and presumably `volatile.idmap.base=1065536`. This container has many volumes attached to it, so the idmap details need to be right; otherwise those volumes won't be remapped and will be inaccessible.
After a successful `lxd recover`, here's what the config of the last snapshot for that container looks like:
In there, we see that both `volatile.idmap.current` and `volatile.last_state.idmap` use a `Hostid` of 1065536, while `volatile.idmap.next` uses a different `Hostid` of 1131072, presumably because `volatile.idmap.base` was set to that value.
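For illustration only (the actual recovered output is not reproduced here), the volatile idmap keys of an isolated container follow LXD's idmap JSON format, something like this sketch built from the `Hostid` values above:

```
# Illustrative sketch, not the real recovered config.
volatile.idmap.base: "1131072"
volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1131072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1131072,"Nsid":0,"Maprange":65536}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1065536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1065536,"Nsid":0,"Maprange":65536}]'
```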
This will cause the container to go through an ID remapping on next start, which could have been avoided had `volatile.idmap.next` been set identically to `volatile.idmap.current`. In other words, why is `volatile.idmap.base` changed during recovery?
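A quick way to see why the two values differ: they correspond to consecutive isolated-idmap slots. A minimal sketch of the arithmetic, assuming the snap's usual host range starts at 1000000 and each isolated container gets a 65536-ID slot (neither number appears in this report; both are assumptions):

```python
# Sketch of isolated-idmap slot arithmetic (assumed values).
HOST_BASE = 1000000   # assumed start of LXD's host uid/gid range
SLOT_SIZE = 65536     # assumed size of one isolated container's range

def slot_base(slot: int) -> int:
    """Host ID at which isolated slot number `slot` begins."""
    return HOST_BASE + slot * SLOT_SIZE

# The recovered values line up with consecutive slots:
assert slot_base(1) == 1065536  # volatile.idmap.current / last_state.idmap
assert slot_base(2) == 1131072  # volatile.idmap.next after recovery
```

If that reading is right, recovery appears to have allocated a fresh slot instead of reusing the one recorded in `volatile.idmap.current`.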
Additional information:
```
# snap list lxd
Name  Version         Rev    Tracking     Publisher   Notes
lxd   5.21.1-d46c406  28460  5.21/stable  canonical✓  -
```