There is an interesting thread on zones-discuss describing the following situation:
- start with an s10u8 zone
- zoneadm attach -u to an s10u9 host
- v2v to Solaris 11 Express as a solaris10 branded zone
- update the host from Solaris 11 Express to Solaris 11
- run /usr/lib/brand/shared/dsconvert
- # zoneadm -z sandpit boot (failed)
zone 'sandpit': WARNING: vnic3:1: no matching subnet found in
netmasks(4): 172.25.48.101; using default of 255.255.0.0.
zone 'sandpit': Error: The installed version of Solaris 10 is not supported.
zone 'sandpit': SPARC systems require patch 142909-17
zone 'sandpit': x86/x64 systems require patch 142910-17
zone 'sandpit': exec /usr/lib/brand/solaris10/s10_boot sandpit
/zoneRoot/sandpit failed
zone 'sandpit': ERROR: unable to unmount /zoneRoot/sandpit/root.
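For reference, the v2v step into the solaris10 brand roughly follows this shape (a hedged sketch, not the thread's exact commands; the zone name 'sandpit' comes from this example, while the archive path and cpio archive format are assumptions):

```shell
# On the source Solaris 10 host: archive the halted zone's root
# (cpio is one of the archive formats the solaris10 brand installer accepts;
# -P preserves ACLs, -@ preserves extended attributes)
cd /zones/sandpit
find root | cpio -oP@ | gzip > /var/tmp/sandpit.cpio.gz

# On the Solaris 11 Express host: configure a solaris10 branded zone
zonecfg -z sandpit create -t SYSsolaris10
zonecfg -z sandpit set zonepath=/zoneRoot/sandpit

# Install the zone from the archive
zoneadm -z sandpit install -a /var/tmp/sandpit.cpio.gz
```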
- To recover:
1. Reboot to the Solaris 11 Express BE
root@global# beadm activate <s11express-be-name>
root@global# init 6
2. Partially revert the work done by dsconvert
In this example, the zone's zonepath is /zones/s10.
root@global# zfs list -r /zones/s10
NAME USED AVAIL REFER MOUNTPOINT
rpool/zones/s10 3.18G 11.3G 51K /zones/s10
rpool/zones/s10/rpool 3.18G 11.3G 31K /rpool
rpool/zones/s10/rpool/ROOT 3.18G 11.3G 31K legacy
rpool/zones/s10/rpool/ROOT/zbe-0 3.18G 11.3G 3.18G /
rpool/zones/s10/rpool/export 62K 11.3G 31K /export
rpool/zones/s10/rpool/export/home 31K 11.3G 31K /export/home
The goal here is to move rpool/zones/s10/rpool/ROOT up one level. We
need to do a bit of a dance to get it there. Do not reboot or issue
'zfs mount -a' in the middle of this. If something goes wrong and a
reboot happens, it won't be disastrous - you will just need to
complete the procedure when the next boot stops with
svc:/system/filesystem/local problems.
root@global# zfs set mountpoint=legacy rpool/zones/s10/rpool/ROOT/zbe-0
root@global# zfs set zoned=off rpool/zones/s10/rpool
root@global# zfs rename rpool/zones/s10/rpool/ROOT/zbe-0 \
rpool/zones/s10/ROOT
root@global# zfs set zoned=on rpool/zones/s10/rpool
root@global# zfs set zoned=on rpool/zones/s10/ROOT
Now the zone's dataset layout should look like:
root@global# zfs list -r /zones/s10
NAME USED AVAIL REFER MOUNTPOINT
rpool/zones/s10 3.19G 11.3G 51K /zones/s10
rpool/zones/s10/ROOT 3.19G 11.3G 31K legacy
rpool/zones/s10/ROOT/zbe-0 3.19G 11.3G 3.19G legacy
rpool/zones/s10/rpool 93K 11.3G 31K /rpool
rpool/zones/s10/rpool/export 62K 11.3G 31K /export
rpool/zones/s10/rpool/export/home 31K 11.3G 31K /export/home
3. Boot the zone and patch
root@global# zoneadm -z s10 boot
root@global# zlogin s10
root@s10# ... (apply required patches)
- 119254-75 or 119255-75 (patch utilities, SPARC/x86)
- u9 kernel patch 142909-17 (SPARC) or 142910-17 (x86/x64)
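Inside the zone, the patching step might look like the following (a sketch; the /var/tmp download location is an assumption, and the patch-utilities patch must be applied before the kernel patch):

```shell
# Apply the patch utilities update first (SPARC shown; x86 uses 119255-75)
patchadd /var/tmp/119254-75

# Then the u9 kernel patch (SPARC shown; x86 uses 142910-17)
patchadd /var/tmp/142909-17
```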
4. Shut down the zone
root@s10# init 0
5. Revert the dataset layout to the way that dsconvert left it.
Again, try to avoid reboots during this step.
root@global# zfs set zoned=off rpool/zones/s10/ROOT
root@global# zfs set zoned=off rpool/zones/s10/rpool
root@global# zfs rename rpool/zones/s10/ROOT rpool/zones/s10/rpool/ROOT
root@global# zfs set zoned=on rpool/zones/s10/rpool
root@global# zfs inherit zoned rpool/zones/s10/rpool/ROOT
6. Reboot to Solaris 11
root@global# beadm activate <solaris11-be-name>
root@global# init 6
At this point, the zone should be bootable on Solaris 11.
Observations
- Since the original migration used 'zoneadm attach -u' from u8 to u9, only a minimal update was applied, so it is not really a full u9 zone.
- One should really use 'zoneadm attach -U' for a full update from u8 to u9.
- RTFM: to support SVR4 packaging and patching, one needs to install 119254-75, 119534-24, and 140914-02 (SPARC) or 119255-75, 119535-24, and 140915-02 (x86/x64), or later versions, in Solaris 10 before creating the archive.
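For the record, a full update-on-attach from u8 to u9 would look roughly like this (a sketch; the zone name and zonepath are carried over from this example and the shared-storage step is assumed):

```shell
# On the s10u8 host: halt and detach the zone
zoneadm -z sandpit halt
zoneadm -z sandpit detach

# Move (or share) the zonepath to the s10u9 host, then configure it there
# from the detached zone's own configuration:
zonecfg -z sandpit create -a /zoneRoot/sandpit

# -U updates all packages to the new host's level, not just the minimal set
zoneadm -z sandpit attach -U
```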