This is an interesting link that talks about the co-existence of VxVM with ZFS
To reuse a ZFS disk as a VxVM disk
Remove the disk from the zpool, or destroy the zpool.
See the Oracle documentation for details.
Clear the signature block using the dd command:
# dd if=/dev/zero of=/dev/rdsk/c#t#d#s# oseek=16 bs=512 count=1
Where c#t#d#s# is the disk slice on which the ZFS device is
configured. If the whole disk is used as the ZFS device, clear the
signature block on slice 0.
You can now initialize the disk as a VxVM device using the vxdiskadm command or the vxdisksetup command.
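The two steps above can be sketched as a small dry-run helper. This is a sketch only: the helper name `zfs2vxvm_cmds` is mine (not a Veritas tool), and the `/etc/vx/bin/vxdisksetup` path should be verified against your VxVM install. It only prints the commands so they can be reviewed before touching a real disk.

```shell
#!/bin/sh
# Dry-run sketch: print the commands to reclaim a ZFS slice for VxVM.
# zfs2vxvm_cmds is a hypothetical helper name, not a Veritas utility.
zfs2vxvm_cmds() {
    slice="$1"                       # e.g. c0t1d0s0 (use slice 0 if the whole disk held ZFS)
    # clear the ZFS signature block at offset 16 (512-byte blocks)
    echo "dd if=/dev/zero of=/dev/rdsk/${slice} oseek=16 bs=512 count=1"
    # drop the sN suffix to get the disk access name for vxdisksetup
    disk=$(echo "$slice" | sed 's/s[0-9]*$//')
    echo "/etc/vx/bin/vxdisksetup -i ${disk}"
}

zfs2vxvm_cmds c0t1d0s0
```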
To reuse a VxVM disk as a ZFS disk
If the disk is in a disk group, remove the disk from the disk group or destroy the disk group.
To remove the disk from the disk group:
# vxdg [-g diskgroup] rmdisk diskname
To destroy the disk group:
# vxdg destroy diskgroup
Remove the disk from VxVM control:
# /usr/lib/vxvm/bin/vxdiskunsetup diskname
You can now initialize the disk as a ZFS device using ZFS tools.
See the Oracle documentation for details.
You must perform step 1 and step 2 in order for VxVM to recognize a disk as a ZFS device
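The reverse direction can be sketched the same way, again as a dry run that only prints the commands (the helper name `vxvm2zfs_cmds` and the sample disk group, disk media and pool names are illustrative):

```shell
#!/bin/sh
# Dry-run sketch: print the commands to release a VxVM disk to ZFS.
# vxvm2zfs_cmds is a hypothetical helper name; names are examples.
vxvm2zfs_cmds() {
    dg="$1"; dm="$2"; pool="$3"; disk="$4"
    echo "vxdg -g ${dg} rmdisk ${dm}"                  # step 1: remove from disk group
    echo "/usr/lib/vxvm/bin/vxdiskunsetup ${dm}"       # step 2: remove from VxVM control
    echo "zpool create ${pool} ${disk}"                # hand the disk to ZFS
}

vxvm2zfs_cmds mydg mydg01 tank c0t1d0
```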
Wednesday, March 28, 2012
Friday, January 6, 2012
change mirrored rpool into rpool and dpool
there was a discussion about changing the mirrored rpool into two zpools: rpool and dpool
the ZFS troubleshooting guide lists some methods based on snapshots
this blog presents another simple method based on Live Upgrade (tested on s10u10)
problem: two HDDs c0t0d0 and c0t1d0 form a mirrored rpool; the user wants to split the rpool into two zpools: rpool and dpool
- zpool split rpool rpool2 c0t1d0s0
- zpool destroy rpool2
- partition c0t1d0 into two slices c0t1d0s0 and c0t1d0s1
- zpool create -f rpool2 c0t1d0s0
- lucreate -c c0t0d0s0 -n c0t1d0s0 -p rpool2
- luactivate c0t1d0s0
- init 6
- you have two BE c0t0d0s0 and c0t1d0s0 and new root pool is rpool2
- ludelete -f c0t0d0s0
- zpool destroy -f rpool
- partition c0t0d0 into two slices c0t0d0s0 and c0t0d0s1 (or use the VTOC)
- zpool create -f rpool c0t0d0s0
- lucreate -n c0t0d0s0 -p rpool
- luactivate c0t0d0s0
- init 6
- ludelete -f c0t1d0s0
- zpool destroy rpool2
- zpool attach -f rpool c0t0d0s0 c0t1d0s0
- zpool create -f dpool mirror c0t0d0s1 c0t1d0s1
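The ping-pong between the two Live Upgrade boot environments can be summarized as a dry-run script. This is a sketch that only prints the command sequence from the steps above (pool, slice and BE names follow the example); nothing is modified when it runs:

```shell
#!/bin/sh
# Dry-run sketch of the rpool -> rpool+dpool split via Live Upgrade.
# Prints the command sequence so it can be reviewed; modifies nothing.
split_rpool_cmds() {
    echo "zpool split rpool rpool2 c0t1d0s0"        # detach one mirror half
    echo "zpool destroy rpool2"
    echo "# repartition c0t1d0 into s0 and s1 with format(1M)"
    echo "zpool create -f rpool2 c0t1d0s0"
    echo "lucreate -c c0t0d0s0 -n c0t1d0s0 -p rpool2"
    echo "luactivate c0t1d0s0"
    echo "init 6"
    echo "ludelete -f c0t0d0s0"                     # after booting into rpool2
    echo "zpool destroy -f rpool"
    echo "# repartition c0t0d0 into s0 and s1 (or copy the VTOC)"
    echo "zpool create -f rpool c0t0d0s0"
    echo "lucreate -n c0t0d0s0 -p rpool"
    echo "luactivate c0t0d0s0"
    echo "init 6"
    echo "ludelete -f c0t1d0s0"
    echo "zpool destroy rpool2"
    echo "zpool attach -f rpool c0t0d0s0 c0t1d0s0"  # re-mirror the root pool
    echo "zpool create -f dpool mirror c0t0d0s1 c0t1d0s1"
}

split_rpool_cmds
```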
Wednesday, December 14, 2011
NFS in Solaris 11
Changes in This Release : Solaris 11
The following enhancements are included in the Oracle Solaris 11 release:
- The configuration parameters that used to be set by editing the /etc/default/autofs and /etc/default/nfs files can now be set in the SMF repository.
- The NFS service provides support for mirror mounts. Mirror mounts enable an NFSv4 client to traverse shared file system mount points in the server namespace. For NFSv4 mounts, the automounter will perform a mount of the server namespace root and rely on mirror mounts to access its file systems. The main advantage that mirror mounts offer over the traditional automounter is that mounting a file system using mirror mounts does not require the overhead associated with administering automount maps. Mirror mounts provide these features:
- Namespace changes are immediately visible to all clients.
- New shared file systems are discovered instantly and mounted automatically.
- File systems unmount automatically after a designated inactivity period.
- NFS referrals have been added to the NFS service. Referrals are server-based redirections that an NFSv4 client can follow to find a file system. The NFS server supports referrals created by the nfsref(1M) command, and the NFSv4 client will follow them to mount the file system from the actual location. This facility can be used to replace many uses of the automounter, with creation of referrals replacing the editing of automounter maps. NFS referrals provide these features:
- All of the features of mirror mounts listed above
- Automounter-like functionality without any dependence on the automounter.
- No setup required at either the client or server.
- The ability to mount the per-DNS-domain root of a Federated File System name space has been added. This mount point can be used with NFS referrals to bridge from one file server to another, building an arbitrarily large namespace.
- The sharectl utility is included. This utility enables you to configure and manage file sharing protocols, such as NFS. For example, this utility allows you to set client and server operational properties, display property values for a specific protocol, and obtain the status of a protocol.
- The NFS version 4 domain can be defined.
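A few of these features can be exercised from the command line. A hedged sketch follows: the `nfsref add` syntax and the `server_versmax` property name are from my reading of the nfsref(1M) and sharectl(1M) man pages, so verify them on your release; the `DO_RUN` guard keeps it a dry run on anything other than a real Solaris 11 host.

```shell
#!/bin/sh
# Dry-run wrapper: echo the command unless DO_RUN=1 (set that only on a
# real Solaris 11 host). Lets the examples be reviewed safely anywhere.
nfs_admin() {
    if [ "${DO_RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

nfs_admin sharectl get nfs                           # show all NFS properties
nfs_admin sharectl set -p server_versmax=4 nfs       # assumed property name
nfs_admin nfsref add /export/doc serverB:/export/doc # assumed referral syntax
```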
Significant Changes in Earlier Releases : Solaris 10
The Solaris 10 11/06 release provides support for a file system monitoring tool. See the following:
- fsstat Command for a description and examples
- fsstat(1M) man page for more information
Additionally, this guide provides a more detailed description of the nfsmapid daemon. For information about nfsmapid, see the following:
- nfsmapid(1M) man page
Starting in the Solaris 10 release, NFS version 4 is the default. For information about features in NFS version 4 and other changes, refer to the following:
Additionally, the NFS service is managed by the Service Management Facility.
- Administrative actions on this service, such as enabling, disabling, or restarting, can be performed by using the svcadm command.
- The service's status can be queried by using the svcs command.
solaris10 zone to solaris10 brand in s11 recovery
there is an interesting thread on zones-discuss; the situation:
- start with s10u8 zone
- zoneadm attach -u to s10u9 hosts
- v2v to s11 express as solaris10 brand zone
- update the host to solaris 11 from solaris 11 express
- run /usr/lib/brand/shared/dsconvert
- # zoneadm -z sandpit boot (failed)
zone 'sandpit': WARNING: vnic3:1: no matching subnet found in netmasks(4): 172.25.48.101; using default of 255.255.0.0.
zone 'sandpit': Error: The installed version of Solaris 10 is not supported.
zone 'sandpit': SPARC systems require patch 142909-17
zone 'sandpit': x86/x64 systems require patch 142910-17
zone 'sandpit': exec /usr/lib/brand/solaris10/s10_boot sandpit /zoneRoot/sandpit failed
zone 'sandpit': ERROR: unable to unmount /zoneRoot/sandpit/root.
To recover:
1. Reboot to the Solaris 11 Express BE
root@global# beadm activate <s11express-be-name>
root@global# init 6
2. Partially revert the work done by dsconvert
In this example, the zone's zonepath is /zones/s10.
root@global# zfs list -r /zones/s10
NAME USED AVAIL REFER MOUNTPOINT
rpool/zones/s10 3.18G 11.3G 51K /zones/s10
rpool/zones/s10/rpool 3.18G 11.3G 31K /rpool
rpool/zones/s10/rpool/ROOT 3.18G 11.3G 31K legacy
rpool/zones/s10/rpool/ROOT/zbe-0 3.18G 11.3G 3.18G /
rpool/zones/s10/rpool/export 62K 11.3G 31K /export
rpool/zones/s10/rpool/export/home 31K 11.3G 31K /export/home
The goal here is to move rpool/zones/s10/rpool/ROOT up one level. We
need to do a bit of a dance to get it there. Do not reboot or issue
'zfs mount -a' in the middle of this. If something goes wrong and a
reboot happens, it won't be disastrous - you will just need to
complete the procedure when the next boot stops with
svc:/system/filesystem/local problems.
root@global# zfs set mountpoint=legacy rpool/zones/s10/rpool/ROOT/zbe-0
root@global# zfs set zoned=off rpool/zones/s10/rpool
root@global# zfs rename rpool/zones/s10/rpool/ROOT/zbe-0 \
rpool/zones/s10/ROOT
root@global# zfs set zoned=on rpool/zones/s10/rpool
root@global# zfs set zoned=on rpool/zones/s10/ROOT
Now the zone's dataset layout should look like:
root@global# zfs list -r /zones/s10
NAME USED AVAIL REFER MOUNTPOINT
rpool/zones/s10 3.19G 11.3G 51K /zones/s10
rpool/zones/s10/ROOT 3.19G 11.3G 31K legacy
rpool/zones/s10/ROOT/zbe-0 3.19G 11.3G 3.19G legacy
rpool/zones/s10/rpool 93K 11.3G 31K /rpool
rpool/zones/s10/rpool/export 62K 11.3G 31K /export
rpool/zones/s10/rpool/export/home 31K 11.3G 31K /export/home
3. Boot the zone and patch
root@global# zoneadm -z s10 boot
root@global# zlogin s10
root@s10# ... (apply required patches)
- 119254/119255 rev 75 (patch utils)
- u9 kernel patch 142909/142910-17 (SPARC/x86)
4. Shutdown the zone
root@s10# init 0
5. Revert the dataset layout to the way that dsconvert left it. Again, try to avoid reboots during this step.
root@global# zfs set zoned=off rpool/zones/s10/ROOT
root@global# zfs set zoned=off rpool/zones/s10/rpool
root@global# zfs rename rpool/zones/s10/ROOT rpool/zones/s10/rpool/ROOT
root@global# zfs set zoned=on rpool/zones/s10/rpool
root@global# zfs inherit zoned rpool/zones/s10/rpool/ROOT
6. Reboot to Solaris 11
root@global# beadm activate <solaris11-be-name>
root@global# init 6
At this point, the zone should be bootable on Solaris 11.
Observations
- since this was zoneadm attach -u from u8 to u9, only a minimal update was done, so it is not really a u9 zone
- one should really do zoneadm attach -U from u8 to u9
- RTFM: to support SVR4 pkg and patching one needs to install 119254-75 (SPARC), 119534-24, and 140914-02, or 119255-75, 119535-24 and 140915-02 (x86/x64), or later versions in solaris 10 before creating the archive
zoneadm attach -U
s10u9 introduces the zoneadm attach -U option in addition to the -u option
s10u8 with patch 142910-17/142909-17 (the update 9 kernel patch, x86/SPARC): once that is installed, -U is available. If patching, install the latest rev of 119254/119255 (the patch utilities patch) first.
man page
attach [-u | -U] [-b patchid]... [-F] [-n path] [brand-specific options]
It seems that to get to u9 from u8 with all the patches and updates one should use the -U option
one should not use -F at all
In any case always have a backup copy of the zone so one can always restore the zone to its original state
man page
The attach subcommand takes a zone that has been detached from one system and attaches the zone onto a new system. Therefore, it is advised (though not required) that the detach subcommand should be run before the “attach” takes place. Once you have the new zone in the configured state, use the attach subcommand to set up the zone root instead of installing the zone as a new zone.
For native zones, zoneadm checks package and patch levels on the machine to which the zone is to be attached. If the packages/patches that the zone depends on from the global zone are different (have different revision numbers) from the dependent packages/patches on the source machine, zoneadm reports these conflicts and does not perform the attach. If the destination system has only newer dependent packages/patches (higher revision numbers) than those on the source system, you can use the -u or -U options. The -u option updates the minimum number of packages within the attached zone to match the higher-revision packages and patches that exist on the new system. The -U option updates all packages in the attached zone that are also installed in the global zone. With -u or -U, as in the default behavior, zoneadm does not perform an attach if outdated packages/patches are found on the target system.
For native zones, one or more -b options can be used to specify a patch ID for a patch installed in the zone. These patches will be backed out before the zone is attached or, if -u was also specified, updated.
The -F option can be used to force the zone into the “installed” state with no validation. This option should be used with care since it can leave the zone in an unsupportable state if it was moved from a source system to a target system that is unable to properly host the zone. The -n option can be used to perform a “dry run” of the attach subcommand. It uses the output of the “detach -n” subcommand as input and is useful to identify any conflicting issues, such as the network device being incompatible, and can also determine whether the host is capable of supporting the zone. The path can be “-”, to read the input from standard input.
Wednesday, December 7, 2011
what's new in solaris 11
this link lists what's new
- installation
- Automated Installer
- installation framework for automated system provisioning
- network installation
- manifest
- system configuration
- SW pkg
- zone
- bootable image
- Jumpstart migration utility js2ai
- interactive Text installations
- server configuration
- automatic or manual network configuration
- no GUI desktop
- audio or wireless drivers
- Live Media Installation (x86)
- automatic network configuration
- full GUI desktop
- GNOME Partition Editor (GParted)
- Distribution Constructor
- CLI tool for building pre-configured bootable customized s11 installation images
- use manifest description
- target disk
- SW pkg
- basic system configuration
- gold image
- packaging
- Image Packaging System (IPS)
- framework for complete SW lifecycle mgmt
- installation
- upgrade
- remove
- integrated with ZFS
- safe upgrade with ZFS clone FS
- network based package repositories
- with full automatic dependency checking
- any SW that is required is automatically installed or updated
- boot to different boot env
- can lock down individual pkg
- fast boot feature
- on by default in x86
- off by default in SPARC
- support SVR4 pkg
- no legacy patching tool
- System configuration
- SMF
- Name service
- nscfg
- /etc/nsswitch.conf svc:/system/name-service/switch
- /etc/resolv.conf svc:/network/dns/client
- /etc/nodename svc:/system/identity:node
- /etc/defaultdomain svc:/system/identity:domain
- /etc/default/init svc:/system/environment:init
- /etc/driver/drv/driver.conf
- sysconfig
- replace sys-unconfig, sysidtool
- unconfiguring
- reconfiguring
- SMF, FMA
- SNMP trap
- SMTP notification
- ASR
- v12
- zones are easier to create and manage
- solaris10 zone
- p2v
- v2v
- zonep2vchk
- NFS server in zone
- exclusive-IP zones by default
- anet for exclusive-IP zone
- administer network flow within NGZ
- bandwidth
- priority control based on IP address, subnet, transport protocols and port
- flowadm
- flowstat
- Delegated Administration
- admin zone based on RBAC
- zone boot env
- ZFS boot env: ZBE
- beadm inside zone
- improved zones dataset layout
- NGZ mimic GZ
- NGZ support different ZFS dataset
- immutable zones
- read-only root for zones
- Mandatory Write Access Control (MWAC)
- cleanly shutdown zones
- zoneadm -z <z> shutdown
- zonestat
- observation of system resources
- memory, CPU, resource control limit
- exclusive-IP: network device utilization on data-links, vlink and zones
- libzonestat
- svc:/system/zones-monitoring.default
- tecla CLI editing library for zonecfg
- emacs mode: default
- vi mode
- tecla(5)
- Security
- Role Authentication
- root is a role by default
- 1st user account is assigned root role
- user assume root role
- user or role passwd
- Trusted Platform Module (TPM)
- TPM chip is a HW device on MB
- protected storage
- protected capabilities on an inexpensive components with restricted resource
- s11 provide drivers
- TCG 1.2 spec
- TSS SW to provide cryptographic operations on the secure device, and admin tools for managing the TPM and a PKCS#11 provider
- labeled IPsec
- trusted extension
- IPsec supports the AES GMAC cryptographic algorithm
- data integrity of AES Galois/Counter Mode (AES GCM) but without actually encrypting the data
- Kerberos Dtrace Providers
- RFC4120
- Trusted Extensions Enhancements
- enables per-label and per-user credentials to request a unique passwd for each label
- tncfg :
- create, modify and display networking properties
- label network packets received from remote hosts
- set security labels on ZFS datasets
- Support ssh X.509 Certificate Extension
- Solaris Cryptographic Framework
- NSA Suite B algorithms
- T4 supports the AES CFB mode used by tablespace encryption of the Oracle DB Advanced Security option
- support for Intel Advanced Encryption Standard instructions (AES-NI)
- the Oracle Key Management System can now be used for AES key storage using the new pkcs11_kms plugin
- In-kernel pfexec, Forced and Basic Privileges
- Networking
- re-architecture to unify, simplify and enhance observation and interoperability of NIC
- GLDv3 driver framework
- VLAN
- link aggregation
- MAC layer for Ethernet, Wi-Fi and IB
- dladm
- Network v12n and resource mgmt
- V12N
- VNIC
- vswitch
- VLANs
- routing
- firewall
- tight integration with zone exclusive-ip
- Resource Mgmt
- QoS
- bandwidth limits
- CPU limit
- interrupt-driven to polling
- Manual and Automatic Networking
- network profile svc:/network/physical:default
- switch between automatic and manual networking by enabling Automatic or DefaultFixed profile through netadm and netcfg
- Live Media install (LiveCD) use Automatic networking, useful for laptop
- Default Names for Datalinks
- net0, net1 etc
- can be reverted
- Changing MAC Address with dladm
- persistent across reboots
- IB Enabled and Optimized
- improved support for Sockets Direct Protocol (SDP)
- support RDMA; zero-copy data transfer
- netstat, truss, pfiles, mdb, kmdb
- NGZ (for exclusive-IP and shared-IP)
- RDSv3 for Oracle RAC
- Registration of VLANs
- ability for broadcasting VLAN ID
- VNIC support
- Link Layer Discovery Protocol Support (LLDP)
- one-way link layer protocol that allows an IEEE 802 LAN station to advertise the capabilities and current status of the system
- lldpadm: enable/disable LLDP agent on physical datalink
- New Sockets Architecture
- no longer use STREAMS
- significant performance improvements
- simplified developer interface for new socket types
- Load Balancing
- Integrated L3/L4 LB
- stateless DSR and NAT modes
- CLI
- configuration API
- Link Protection
- prevent guest VM sending harmful packets to network
- basic threats: IP, DHCP, MAC, L2 frame spoofing
- use ipf for inbound filtering and customizable filter rules
- Bridging and Tunneling
- Bridging
- Spanning Tree Protocol (STP, IEEE 802.1D-1998)
- TRILL protocol
- Tunneling
- iptun
- wireshark
- snoop
- IP observability
- wireshark: packet sniffing tool and snoop
- dlstat: runtime statistics for data links
- IP Multipathing(IPMP)
- re-architecture
- ipadm
- Transitive probe: new failure detection mode
- without additional test IP addresses
- svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
- svcadm refresh svc:/network/ipmp:default
- in.mpathd
- managed by SMF service svc:/network/ipmp
- I/O Enhancements to netcat
- new FTP server
- proftpd replaces WU-ftpd
- Dtrace Networking Provider
- tcp
- udp
- ipv4/IPv6
- Storage
- ZFS is root FS
- easy upgrade with IPS
- ZFS Data Encryption
- ZFS deduplication (needs RAM; L2ARC with SSD)
- ZFS Shadow Migration (local or NFS FS)
- ZFS backup with NDMP with ZFS send/receive
- Temporary ZFS mountpoint
- ZFS snapshot Alias with zfs snap (snapshot)
- Recursive ZFS send (dataset and descendents)
- ZFS snapshot Diff
- NFSv4 Client and Server Migration Support
- SMB for Microsoft interoperability
- Dtrace Storage Provider
- SMB
- iscsi
- COMSTAR SCSI Target Framework
- SCSI device type: disk, tape with FC
- iSCSI Extensions for RDMA (iSER)
- SCSI RDMA Protocol (SRP) for IB HCA
- iSCSI
- Fibre Channel over Ethernet (FCoE)
- Dtrace Provider:
- SCSI Target Mode Framework (STMF)
- SCSI Block Device (SBD)
- Kernel/Platform Support
- SPARC T4
- 2GB page size
- ISA cryptographic HW optimization
- CPU and DRAM performance counter support
- L3 cache support
- 20%-40% gain for various cipher and hash instructions
- gain for SSL and direct cryptographic acceleration for DB 11.2.0.2
- Critical Threads
- dynamic allocation of HW resources to provide boosts in performance
- matching a thread's HW requirements with the amount of exclusive access to specific HW resources
- Single-root I/O v12n (SR-IOV)
- extension to PCIe to allow efficient sharing of PCIe devices among VMs both in HW and SW
- NUMA I/O
- allow kernel threads, interrupts and memory to be placed on physical resources according to the physical topology of the machine
- specific high-level affinity requirements of I/O frameworks, actual load, resource control and power mgmt policies
- Intel Advanced Vector Extensions(AVX)
- new instructions for vector floating-point operations
- image, video, audio processing, 3D modeling, scientific simulation and financial analytics
- Sandy Bridge and beyond
- Dynamic Intimate Shared Memory (DISM) Performance Improvements
- for large memory systems, 8x Oracle DB startup improvement for ISM and DISM creation, locking, destruction
- Suspend and resume to RAM
- Improved HW support
- FMA
- generic topology enumeration
- generic hotplug framework
- latest Intel microprocessor
- Intel's LatencyTOP and Dtrace to measure latency
- Dtrace cpc Provider
- cycles executed
- instructions executed
- cache misses
- TLB misses
- user Environment
- 850 open source pkg in IPS
- Java SE 6, 7
- GCC 4.5.2
- Python 2.7
- Perl 5.12
- Ruby 1.8.7
- PHP 5.2.17
- complete web stack
- Desktop env
- GNOME 2.30.3
- Firefox 6
- Thunderbird 6
- GNU
- in /usr/bin
- in /usr/gnu/bin
- Default shell:
- user: bash
- system: ksh93
- Removable Media
- HAL
- D-Bus message passing system
- new sound system
- search for content in MAN pages
- man -K searchstring
- Virtual Console Terminals
- svc:/system/vtdaemon:default
- svc:/system/console-login:vt*
- Alt-Ctrl-F#
- Time Slider Snapshot Mgmt
- user home
- GUI
- Common UNIX Printing System (CUPS) printing
- lp wraps CUPS functionality
- libc Familiarity
- improve familiarity with linux and BSD
- paths.h Path Name Definitions
- /usr/include/paths.h
- /usr/include/sys/paths.h
- locale and languages (200+)
- TrueType Fonts
solaris branded zone in solaris 11
Solaris branded zone is the default zone in Solaris 11
- whole-root type only
- immutable (read-only zone root) zone with file-mac-profile (mandatory access control)
- none: standard read-write
- strict: read-only FS, no exceptions, only logged remotely
- fixed-configuration: permits updates to /var except system configuration
- flexible-configuration: permits changes to
- /etc
- root home directory
- /var
- zonecfg add dataset: read-only dataset
- zonecfg add fs, can mount read-only FS
- IPS packing
- install, detach, attach and P2V
- NGZ root is a ZFS dataset
- use boot env: beadm
- All enabled IPS pkg repositories must be accessible while installing a zone
- zone SW is minimized
- default exclusive-IP with Automatic NET (anet) VNIC
- support
- ZFS encryption
- Network V12n and QoS
- SMB and NFS
- can be NFS server
- not supported
- DHCP address assignment in a shared-IP zone
- ndmpd
- SMB server
- SSL proxy server
- ZFS pool administration through zpool cmd
- zonestat: reports CPU, memory resource control, network bandwidth for exclusive-IP zones
- admin resource
- user
- auths
- solaris.zone.login
- solaris.zone.manage
- solaris.zone.clonefrom
- resources pool association
- dedicated-cpu
- ncpus
- importance
- capped-cpu
- capped-memory
- physical
- swap
- locked
- zone network interface
- shared-IP
- shares a network interface with the GZ
- use ipadm
- net resource properties
- address
- physical
- exclusive-IP
- must have dedicated network interface
- anet resource, a dedicated VNIC is automatically created and assigned to zone
- can use pre-configured VNIC
- default
- support
- DHCP v4 and v6
- IP filter
- IPMP
- IP routing
- ipadm for setting TCP/UDP/SCTP and IP/ARP
- IPsec and IKE
- snoop
- dladm
- sysconfig
- hostid
- disk format: uscsi
- devices: /dev in zone
- zone-wide resource
- zone.cpu-cap
- zone.cpu-shares
- zone.max-locked-memory
- zone.max-lofi
- zone.max-lwps
- zone.max-msg-ids
- zone.max-processes
- zone.max-sem-ids
- zone.max-shm-ids
- zone.max-shm-memory
- zone.max-swap
- use attr for comment
solaris10 branded zone in solaris 11
Due to the change in packaging system (SVR4 to IPS) there is no direct upgrade from s10 to s11; one can use:
- P2V: converting a s10 physical system to a solaris10 branded zone in s11
- V2V: converting a s10 native full root zone to a solaris10 branded zone in s11
- requires s10u9, or earlier with patch 142909-17 (SPARC) or 142910-17 (x86), or later
- 32-bit and 64-bit solaris 10 apps
- zone must reside on its own zfs dataset
- delegated ZFS dataset configuration is currently experimental and has not yet been tested
- para-virtualized xVM domains are experimental, with known problems for 64-bit apps
- the /dev/sound device cannot be configured
- the file-mac-profile property used to create read-only zones is not available
- quota(1M) to retrieve UFS FS info is not available
- the following ndd parms are not available
- ip_squeue_fanout
- ip_soft_rings_cnt
- ip_ire_pathmtu_interval
- tcp_mdt_max_pbufs
- Networking features that are different
- Mobile IP is not available in s11
- /dev/net VNICs are not supported by libdlpi in s11 but are supported by libdlpi(3LIB) in s10
- IPMP output are not the same
- mdb and dtrace are not fully functional when used in global zone to examine processes executing within solaris10 zone
- zonep2vchk (can be copied from s11 to s10) is used to generate info needed for P2V
- solaris10 zones do not support statically linked binaries
- to support SVR4 pkg and patching one needs to install 119254-75 (SPARC), 119534-24, and 140914-02, or 119255-75, 119535-24 and 140915-02 (x86/x64), or later versions in solaris 10 before creating the archive
- in s11 pkg:/system/zones/brand/solaris10 must be installed
- zonecfg: use create -t SYSsolaris10 or set brand=solaris10
- can set hostid
- support sysidcfg
- support migration between two s11 hosts
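The zonecfg step above can be sketched as a command file plus the install invocation. This is a sketch only: the zone name, zonepath and archive path are illustrative, and the `install -u -a` flags are my reading of the solaris10 brand documentation, so check them against your release before running for real (the script only writes the command file and prints the commands).

```shell
#!/bin/sh
# Sketch: write a zonecfg(1M) command file for a solaris10 branded zone,
# then print the configure/install commands. All names are illustrative.
zn=s10zone
cfg=/tmp/${zn}.cfg

cat > "$cfg" <<EOF
create -t SYSsolaris10
set zonepath=/zones/${zn}
commit
EOF

echo "zonecfg -z ${zn} -f ${cfg}"
# install from a p2v/v2v archive, sys-unconfiguring on first boot
# (flags assumed from the solaris10 brand docs; verify before use)
echo "zoneadm -z ${zn} install -u -a /export/${zn}.flar"
```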
Tuesday, December 6, 2011
oracle DB/RAC 11.2.0.3
11/10/11: Patch Set 11.2.0.3 for Linux, Solaris, Windows, AIX and HP-UX Itanium is now available on support.oracle.com. Note: it is a full installation (you do not need to download 11.2.0.1 first). See the README for more info (login to My Oracle Support required).
For solaris 11 you need 11.2.0.3
osc4.0 docs and download
dec/4 2011: Oracle Solaris Cluster 4.0 is out
download
http://www.oracle.com/technetwork/server-storage/solaris-cluster/downloads/index.html
docs links
Oracle Solaris Cluster
- The Oracle Solaris Cluster environment extends the Oracle Solaris Operating System into a cluster operating system to provide highly available and scalable services.
- Release Notes: Oracle Solaris Cluster 4.0 | Oracle Solaris Cluster 3.3
Read about new features, new qualifications, and workarounds for known issues.
- Installation: Oracle Solaris Cluster 4.0 | Oracle Solaris Cluster 3.3
Learn everything you need to know to install the cluster.
Oracle Solaris Cluster Geographic Edition 4.0 | Oracle Solaris Cluster Geographic Edition 3.3
Learn everything you need to know to install the Geographic Edition software.
- Administration: Oracle Solaris Cluster 4.0 | Oracle Solaris Cluster 3.3
Learn how to keep your enterprise and data highly available.
- How-To Guides and White Papers: Access all the Oracle Solaris Cluster How-To Guides and White Papers.
How-To Guides
- How-To Install and Configure a Two-Node Cluster (Oracle Solaris 11): This article provides a step-by-step process for quickly and easily installing and configuring Oracle Solaris Cluster software for two nodes, including the configuration of a quorum device.
- How-To Create a Failover Zone in a Cluster (Oracle Solaris 11): This how-to guide describes how to quickly and easily configure an Oracle Solaris Zone in failover mode using the Oracle Solaris Cluster High Availability (HA) agent for Oracle Solaris Zones, which supports both Oracle Solaris 10 and 11 Zones.
Other Resources
Oracle Solaris Cluster Features and Benefits | What's New in Oracle Solaris Cluster | Oracle Solaris Cluster Frequently Asked Questions | Oracle Solaris Cluster System Requirements
Observations: there are very few data service docs: full docs link
- failover zone example in release note
- SUNW.nfs agent
Oracle Solaris Cluster 4 Webcast, Dec 06 2011, 12 noon EST webcast
Oracle Solaris Cluster 4.0 Launch Webcast - Join the webcast on Tuesday, 12/6/11 at 9am PT.
Register Today and learn about Oracle Solaris Cluster 4.0, the first release providing high availability (HA) and disaster recovery (DR) capabilities for Oracle Solaris 11, the first cloud OS. Bill Nesheim, VP, Oracle Solaris Platform Engineering, will present how Oracle Solaris Cluster extends Oracle Solaris to provide the HA and DR infrastructure required for deploying mission-critical workloads in private, public and hybrid cloud deployments as well as enterprise data centers.
Register Now!
oracle solaris Summit at LISA 2011 (Dec/06/2011)
Dec 06 2011 the live stream of Solaris Day @LISA2011
http://psav.mediasite.com/mediasite/Viewer/?peid=b31d4d7b75fe4fcc8b3798a42d3d6b711d
Agenda
- 8:00 a.m. Registration
- 9:00 a.m. Oracle Solaris 11 Strategy (Markus Flierl, VP Software Development, Oracle)
- 9:30 a.m. Next Generation OS Lifecycle Management with Oracle Solaris 11 (Dave Miner and Bart Smaalders, Principal Software Engineers, Oracle)
- 11:00 a.m. Data Management with ZFS (Mark Maybee, Principal Software Engineer, Oracle)
- 12:00 noon Lunch
- 1:00 p.m. Oracle Solaris Virtualization and Oracle Solaris Networking (Mike Gerdts, Senior Software Engineer, Oracle; Sebastian Roy, Software Engineer, Oracle)
- 2:30 p.m. Security in your Oracle Solaris Cloud Environment (Glen Faden, Sr. Principal Software Engineer, Oracle)
- 3:15 p.m. Break
- 3:30 p.m. Oracle Solaris – The Best Platform to run your Oracle Applications (David Brean, Principal Software Engineer, Oracle)
- 4:15 p.m. Oracle Solaris Cluster – HA in the Cloud (Gia-Khanh Nguyen, Principal Software Engineer, Oracle)
- 5:00 p.m. Networking Reception sponsored by Oracle Solaris Cluster
oracle DB and RAC and v12N/partition and Solaris 11
this link list the Supported V12N and partitioning tech for Oracle DB and RAC
I just want to highlight the solaris part as of 12/06/2011
- Solaris 11 will support Oracle 11.2.0.3 and above
- Solaris zones will work with Oracle Clusterware only with DB and RAC
Platform / Virtualization Technology / Operating System / Certified Oracle Single Instance Database Releases / Certified Oracle RAC Database Releases:
- Oracle Solaris Sparc
  - Dynamic Domain: Solaris 10, Solaris 9
  - Oracle VM Server for SPARC: Solaris 10, Solaris 11
  - Oracle Solaris Containers: Solaris 10, Solaris 11
  - Oracle Solaris 8 Branded Zone / Oracle Solaris 9 Branded Zone: Solaris 10
- Oracle Solaris x86-64
  - Oracle Solaris Containers: Solaris 10, Solaris 11
  - Oracle VM Server for x86-64: Solaris 10, Solaris 11
Oracle Solaris Sparc Notes
- For 11gR1, please apply Oracle patch 8799617 and OS patch 138888-07 on Solaris 10 10/08 (Update6) or later.
- Oracle supports the single instance database in "Oracle Solaris 8 Containers" & "Oracle Solaris 9 Containers" (also known as Solaris 8 Branded Zones & Solaris 9 Branded Zones) on a host running Solaris 10. Supported versions are Solaris 8 Containers 1.0.1 and Solaris 9 Containers 1.0.1 running on Solaris 10 10/08 and later. Please check the documentation for the appropriate Oracle Database version to ensure the corresponding Solaris versions are supported. See Documents for further information:
- Using Oracle RAC on Oracle Solaris Containers within an Oracle VM Server for SPARC (LDoms) is not supported. Oracle Single Instance database on Oracle Solaris Containers within an Oracle VM Server for SPARC (LDoms) is supported
- 11gR1 (11.1.0.7) Solaris 10 Logical Domains on Sparc 64-bit requires patch 7535429
- Oracle Solaris Logical Domains are supported with Oracle RAC 10gR2, 11gR1 and 11gR2 with Solaris version 10 Update 8 or later with Oracle Solaris Cluster 3.2 1/09 and later versions of 3.2 and Oracle Solaris Cluster 3.3
- Oracle Solaris Logical Domains are supported with Oracle RAC 11gR2 with Solaris version 10 Update 6 or later (patches 142900-12, 141870-03) with Oracle Clusterware 11.2
- Please reference My Oracle Support note 317257.1 for best practices document for deploying Oracle Single Instance database in a Solaris Container.
- Please reference the RAC/Container Best Practices document for deploying Oracle RAC in Solaris Containers.
- Oracle Solaris Containers are supported with Oracle RAC 9iR2 (9.2.0.5 and above), 10gR2 and 11gR1 (with Oracle Solaris Cluster on SPARC). Solaris version 10 Update 7 or later (patches 141444-09, 143055-01, 142900-06, 143137-04 "md patch") with Oracle Solaris Cluster 3.3 and 3.2u2 patched to 126106-39 or later.
- Oracle Solaris Containers are supported with Oracle RAC 11.2.0.2 with patch 12419331 (for Oracle Solaris Cluster on SPARC). Solaris 10 9/10 (Update 9) or later (patch 142909-17) with Oracle Solaris Cluster 3.3.
- Oracle Solaris Containers are supported with Oracle RAC 10gR2, 11gR1 and 11gR2 with Oracle Clusterware on SPARC. Solaris version 10 Update 8 or later (patches 142900-14, 143055-01) with
- Oracle Clusterware 10.2 (patch 9352164)
- Oracle Clusterware 11.1 (patch 9207257, 9352179)
- Oracle Clusterware 11.2.0.1 (patch 11840629)
- Oracle Clusterware 11.2.0.2 (patch 12419353)
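Many of the notes above hinge on specific OS patch levels. As a minimal sketch (the patch ID is taken from the note above; the staging path is a hypothetical example), one can check for a patch with `showrev -p` before applying it with `patchadd` on Solaris 10:

```shell
#!/bin/sh
# Check whether patch 142900 (any revision) is already installed,
# then either show the installed revision or apply it from a staging area.
PATCH=142900-14
BASE=${PATCH%%-*}            # strip the revision suffix: 142900

if showrev -p | grep "^Patch: ${BASE}-" >/dev/null 2>&1; then
    # Already present; print the installed revision for comparison.
    showrev -p | grep "^Patch: ${BASE}-"
else
    # Hypothetical staging directory; download the patch from My Oracle Support first.
    patchadd /var/tmp/patches/${PATCH}
fi
```

Checking the installed revision first avoids a needless `patchadd` run, which would refuse to install an already-applied or older revision anyway.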
Oracle Solaris x86-64 Notes
- Please reference My Oracle Support note 317257.1 for best practices document for deploying Oracle Single Instance database in a Solaris Container.
- Please reference the RAC/Container Best Practices document for deploying Oracle RAC in Solaris Containers.
- Oracle Solaris Containers are supported with Oracle RAC 10gR2 and 11gR2 with Oracle Clusterware on Solaris x86-64. Solaris 10 10/09 (Update 8) or later (patches 142901-15, 142934-02) with
- Oracle Clusterware 10.2 (patch 7172531)
- Oracle Clusterware 11.2.0.1 (patch 11840629)
- Oracle Clusterware 11.2.0.2 (patch 12419353)
- Oracle Solaris Containers are supported with Oracle RAC 10gR2 (patch 9654991) and 11gR2 (11.2.0.2 with patch 12419331) with Oracle Solaris Cluster on Solaris x86-64. Solaris 10 9/10 (Update 9) or later (patch 142910-17) with Oracle Solaris Cluster 3.3.
- Please check the documentation for the appropriate Oracle Database version to ensure the corresponding Solaris versions are supported. See the following document for further information.
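As a companion to the container notes above, here is a minimal sketch of carving out a Solaris 10 container for a single-instance database; the zone name, zonepath, NIC and IP address are illustrative assumptions, not values from the certification matrix:

```shell
# Define a shared-IP zone named "oradb" (all names/addresses hypothetical).
zonecfg -z oradb <<'EOF'
create
set zonepath=/zones/oradb
set autoboot=true
add net
set physical=e1000g0
set address=192.168.1.50/24
end
commit
EOF

# Install and boot the new container, then install the database inside it
# following the best-practices note (MOS 317257.1) referenced above.
zoneadm -z oradb install
zoneadm -z oradb boot
```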
Monday, December 5, 2011
oracle solaris cluster 3.3 pricing
In light of the Dec/4/2011 OSC 4.0 announcement, I just want to keep a record of the OSC 3.3.1 pricing.
There is only one OSC Enterprise Edition, with per-processor pricing:
- 2 years: $1050
- 3 years: $1500
- 4 years: $1800
- 5 years: $2100
- Perpetual: $3000
Wednesday, November 9, 2011
Oracle Solaris 11 launch
Today Oracle announced Solaris 11, the first Cloud OS.
It could/should also take the name Solaris 12c.
The press release highlights some key features.
The download page for S11 (11/11/2011).
The Oracle Solaris 11 page includes how-tos and whitepapers.
Solaris 11 documents.
One can watch the webcast later.
There will be a 100-city Solaris 11 tour.
These are some interesting points from the launch:
- There is no change in the open source policy for Oracle Solaris
- S10 will be supported for a long time; T5 and M4, coming out next year, will support S10
- The S11 "zone as NFS server" feature will not be backported to S10
- Solaris is looking at Ksplice technology
- Oracle Solaris does not have any plan for KVM
- Ops Center for S11 will come out next year
- Oracle Solaris Cluster will support S11 (I did not see any docs or announcement)
- SPARC SuperCluster will be GA in Dec/2011; the SPARC nodes will run S11 and the Cells will run OEL; Oracle Solaris Cluster is an optional add-on
- Exalogic running Solaris 11 will support zones
- There will be many white papers/how-tos on S11
- There is no tool for in-place "defragmentation" of a zpool on S10; if one send/receives a zpool it will get "defragmented"; S11 handles this better
- There is no in-place upgrade from S10 to S11, but one can move an S10 system into an S10 branded zone on S11 via P2V or V2V
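The last point can be sketched with the standard solaris10 branded-zone tooling; the zone name, NFS path and archive name below are hypothetical:

```shell
# On the Solaris 10 source system: create a compressed flash archive of the running system.
flarcreate -n s10sys -c /net/nfshost/export/s10sys.flar

# On the Solaris 11 target: define a solaris10 branded zone from the stock
# SYSsolaris10 template, then install it from the archive.
# -u sys-unconfigures the image so it asks for identity on first boot;
# -p would preserve the original system identity instead.
zonecfg -z s10zone <<'EOF'
create -t SYSsolaris10
set zonepath=/zones/s10zone
commit
EOF
zoneadm -z s10zone install -u -a /net/nfshost/export/s10sys.flar
zoneadm -z s10zone boot
```

For V2V the flow is the same, except the archive is taken from an existing S10 zone rather than a physical system.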